Micron Technology, Inc. (MU)
Investor Update
May 3, 2017
Okay. Well, welcome to New York and One World Trade Center, a perfect venue for us to make a very exciting announcement from Micron. I think we've got a fun-packed morning with a lot of information in it for you. My name is Darren Thomas. I'm the general manager of the storage business unit at Micron, and I'll be your host for this morning's event. I also want to tell you that we've got a panel of very interesting customers with examples of how they transformed their industries, and an esteemed group of people from all over the industry will be doing our presentations for the first half of this morning's event. The second thing is, I'm going to walk you through a bit of a setup to give you a taste of what's coming, so you can see what the whole story is. I'm not going to keep you guessing until the end about what the message is.
So the message is going to be clear: it's about insights. I'm going to start with the ubiquitous slide on how big the storage business is and how big it's getting. I've done this slide probably a million times in my career, but I've never seen this number before, so I've got to tell you, the numbers are getting astronomical in size. As a matter of fact: 163 zettabytes. A zettabyte is a trillion gigabytes. So this is a very, very large number, and it creates a daunting challenge for our IT professionals as they try to load and analyze this amount of information, because it's coming by 2025. In 2016, we were only at 16 zettabytes.
So this is a 10x growth in a very, very short period of time, and it's being driven by a couple of trends. One of them has been around for a long time: mobile phones and PCs. People have been creating content in this cloud space, and people have had to analyze that content, for a long time. What you're seeing now is machines creating content.
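As an illustrative aside (not part of the talk), the growth figures quoted above, 16 zettabytes in 2016 rising to 163 zettabytes by 2025, imply a compound annual growth rate that can be checked with a few lines of arithmetic:

```python
# Back-of-the-envelope check of the data-growth figures quoted in the talk:
# 16 ZB in 2016 growing to 163 ZB by 2025.
data_2016_zb = 16
data_2025_zb = 163
years = 2025 - 2016  # 9 years

growth_factor = data_2025_zb / data_2016_zb   # overall multiple over the period
cagr = growth_factor ** (1 / years) - 1       # implied compound annual growth

print(f"overall growth: {growth_factor:.1f}x")  # ~10.2x, the "10x" in the talk
print(f"implied CAGR:  {cagr:.1%}")             # ~29% per year
```

So the "10x in a short period" claim corresponds to roughly 29% compound growth per year over nine years.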
This is the new wave. You're seeing devices, IoT devices. A significant portion of the world's population, 75% of people, is connected, and not just with their cell phones. The kind of connectivity we're talking about is refrigerators and cars and thermostats and security cameras. What these have in common is that they're machines creating content with little or no human interaction, and as a result, the data is going into storage solutions.
And that data needs to be analyzed. In order for it to be valuable, it has to be analyzed. And because it's real time, it needs to be analyzed in real time, because people are going to make decisions off that information in a very short period of time, or the information is not valuable. So we're entering an age where machines are creating content faster than humans are, and that content needs to be analyzed, in real time.
So we've got to go from raw data to insightful, real-time analytics. A good banking example: in the old days, you filled out a form when you opened the account, and it never seemed to change. But in this new world, you might start off as a high school student saving for college, go through college, take out loans. You might get married, have a family and children, and now you're creating savings for them and saving for your retirement, and then you retire. You go through this life cycle, and even within a year your requirements change. A bank that had this kind of information about you would be able to keep up with you.
Another example is the internet. Everybody searches the internet with all these search engines. Well, now there's so much content that even you don't know what you're going to search on next. Search engines now need to be able to do almost predictive analytics. They need to look at what you've been doing, whether you're wearing a watch or you own a thermostat or you have a security system, and understand what you might search on next and be prepared for that search, because the data is too great for you to wait while the search is going on.
Another example is content providers. As you've seen, content providers have pretty much dominated the entertainment industry. With this kind of data, they could tell by your watch or your cell phone that you just moved from your hometown to New York, and if you've been downloading something, they might push that content out to a site here rather than where you were last night. So there are lots of opportunities for this data to be analyzed in real time, and those are just some simple examples. So, why Micron?
I know we're all here; we've invited you. The question you might ask is, how does Micron fit into this? Well, in this world of insight, it helps a lot if you understand DRAM and NAND. These devices are different: how you address them and how you talk to them are different.
We've made subsystems out of them, and inside those subsystems we have put abstraction layers to make the data more accessible as a subsystem, because operating systems are used to seeing hard drives, not SSDs or NVDIMMs or some other form. But in order to do real-time analytics, you can't have all these abstractions; you need to take those layers back out. What company knows how to do that better than a company that owns the NAND and DRAM? So what you're going to see today is the level of insight we bring to this, as well as our teams' understanding of the operating systems, the applications, and the workloads our real customers use. The answer to "why Micron" is that we are the company with the insights from 40 years of developing memory technology, and now flash technology, and the subsystems to go with them. And now we're moving into the next phase, which is to help with the analytics and the insights around the solutions.
That includes us going all the way from raw data to real-time analytics and insight. So with that, it's my great pleasure to introduce Laura Dubois. She's the vice president of enterprise storage, servers and infrastructure at IDC, and she's going to walk us through a brand new study that we worked on together. Then we're going to have the rest of this august group of people come up here, and I'll come back at the end to close. So, Laura.
So good morning. It's a pleasure to be here with all of you today, so thank you so much for joining us. As Darren said, I'm Laura Dubois, and I run the infrastructure systems and software research at IDC. Today, we're going to be talking about the impact that data is having on traditional infrastructure.
We'll talk about the challenges of data growth and the pressure it's increasingly placing on operations and lines of business to accelerate performance to meet business imperatives: transforming business models, improving and transforming the customer experience, and driving new types of operations internally with greater efficiency. It's increasingly the focus on data that makes the difference between a company merely surviving and a company that thrives and outperforms its peers in the industry. So we see the focus on data as a differentiator, and the need for real-time analytics, mining, and intelligence applied from the understanding of data is driving this increased importance. I'd like to walk you through four examples. We see data-driven strategies being embarked on by large banks, some of which are in this audience.
We see data-driven strategy being embarked on by large retail organizations, by folks like Starbucks. Starbucks is increasingly looking to provide the same customer experience on the digital platform that they provide in the retail location. So they're using data, combining their customer insights and their customer retention and frequency data with content from partners, whether that be the New York Times or music from Pandora and others, to provide a digital experience that really makes customers want to stay with Starbucks. Now, that's a large-company example.
We certainly have smaller-company examples as well, like Pure Fishing. Pure Fishing is a $700 million company that sells bait and tackle to both businesses and individual fishermen. What they wanted to do is collect data from the fishing site and provide a variety of different data, whether it's the depth of the cast, the temperature of the water, the reach of the cast, or the weather that day, pulling in historical information as well, and provide that information to the customer to improve their catch. A third example is a manufacturer, a very conservative company in the Midwest with a physical product: how are they going to transform and become a digital company? Well, they've put intelligence onto their engines, and they're collecting a variety of different telemetry factors, analyzing them, and providing that information in a dashboard, through the intelligence captured on the engine itself, to their customers, which are resellers. Those resellers can then stand up new services, to be able to service that home generator, say, before a big snowstorm, so you know the generator is going to work before the event. The last example is McCormick, the spice company. They're in the process of transforming into a food experience company, and they've taken over a hundred different vectors of taste preferences.
They've combined these taste preferences, obviously for their particular products and spices, and created an algorithm and an API called FlavorPrint, which they now leverage, selling this information to downstream food manufacturers. This has been so successful that they actually spun off a separate company from it. So those are examples of four companies taking a data-driven approach to transform their business. Now, if data is at the center of this transformation, you might ask what that means in terms of infrastructure strategy.
Well, increasingly, CIOs are telling us, in fact, 72% of a thousand global CIOs told us, that digital transformation and data growth are the leading factors driving infrastructure strategy and decision making. And this is very consistent with what we see CIOs wanting to do: first and foremost, improve security and mitigate risk; second, reduce costs and increase efficiency; and lastly, align with the business and support new business initiatives, such as the ones I described. Now, if the objective is enabling growth, and the growth is coming from the importance of data-driven approaches and strategies, you might ask, well, what is the volume of data growth?
Darren already articulated this, so I'll just go through it briefly: there has never been a lack of data, and there never will be. By 2025, there will be over 163 zettabytes of data created. To give you a sense of scale, that's growth by a factor of ten off a 2016 baseline. And that's at an industry level: consumers and individuals creating data, businesses creating data, data created by applications, by social media, by the increasing digitization of content, and by machines.
You might ask, well, what's the growth in data for a given organization? We embarked on a study in partnership with Micron to understand, across 800 global companies in nine countries, what their challenges are around data growth, what pressure this places on traditional infrastructure, what strategies they're pursuing to address those challenges, and lastly, how flash fits into those strategies and how they're using it. So let me share some insights. Pretty much everyone is facing data growth; only 1% said they didn't have any.
On an annualized basis, that growth in data is 24%. Now, we've seen this moderate a little in the last ten years because of storage efficiency technologies such as dedup and compression. Nonetheless, it's growing at a very significant double-digit rate. There's variance in data growth across company sizes, but regardless, this growth is coming from a variety of factors.
Certainly it's coming from business growth, from increasing numbers of users, from the consumerization of IT, and from increasing intelligence out at the endpoints and edge locations capturing more and more of this information, whether that be about a user, a system, or a refrigerator, as Darren talked about earlier.
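As an illustrative aside, the 24% annualized growth rate cited above compounds quickly; the 100 TB starting point below is a hypothetical example, not a figure from the survey:

```python
# How 24% annual data growth compounds over five years.
# The 100 TB starting capacity is a made-up example, not from the IDC survey.
annual_growth = 0.24
capacity_tb = 100.0

for year in range(1, 6):
    capacity_tb *= 1 + annual_growth
    print(f"year {year}: {capacity_tb:.0f} TB")

# After five years the capacity has nearly tripled: 1.24**5 is about 2.93,
# so a 100 TB estate grows to roughly 293 TB.
```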
And companies are capturing all this information and analyzing it to drive better business outcomes. Now, all of this data growth is driving challenges. So we wanted to understand: given this data growth, what does it mean for your environment? How are you able to keep up? And as you can see here, the top three challenges in keeping up with this data growth are around performance.
The number one, leading challenge is that they can't keep up with performance expectations. That could be latency-specific, bandwidth-specific, or throughput-specific. A secondary factor is keeping up with backup and data protection: you start your backup and you can't get it done before you need to start the next one, because the data is growing so much. That's a bandwidth-intensive kind of scenario.
And the last one is around data: the ability to understand and make decisions about what data to keep and what data to dispose of. Now, with these as the challenges, what strategies are firms embarking on to address the performance issues? Certainly we see investments in new architectures. These could be hyperconverged infrastructure or software-defined infrastructure, and certainly there's the rise of cloud computing and the use of public or private cloud. There's also massive, double-digit adoption of all-flash arrays.
All-flash arrays with mixed-workload consolidation: moving workloads onto these arrays to drive up performance, drive greater IOPS, and reduce latency. We also see more HDD-based capacity being added into the mix. Now, adding more HDD-based capacity certainly helps with the growth issue, but if you're using storage efficiency technologies like deduplication, you may want to think about what impact that has on your performance, certainly as it relates to TCO. So these are the actions companies cite that they're taking.
Certainly, flash is first and foremost. In the leading strategy, investing in new technologies, we increasingly see flash embedded in all of these new architectures, whether on-premises or off. And so we see flash being used in a variety of places in the I/O stack. When we first started looking at this market, flash was being used in the host to accelerate small pieces of data, like database indices, and you got excellent performance from that. Then the full database moved to all-flash.
Now we're seeing many workloads move to shared flash-based storage. We see flash being used in infrastructure as a service, in high-I/O instances on AWS, as a mechanism to drive greater levels of innovation. And now we're beginning to see the rise of rack-scale architectures: disaggregated and composable infrastructure that increasingly has a low-latency, high-performance flash tier. And lastly, software-defined infrastructure, which could be open source based or a commercial, proprietary software stack that creates a storage pool out of individual systems with local or direct-attached storage. Increasingly, these environments are being used with new types of applications and workloads, what we refer to as cloud-native workloads: workloads that tend to be horizontally scaled by nature, stateless, based on microservices, that make heavy use of open source technologies, and that often use flash as a persistent tier. Because flash has influenced all of these technologies, we see that by 2020, 77% of storage systems spending will be flash based.
We also wanted to understand the adoption of NVMe, because as we see customers either augmenting or displacing HDD-based technologies with flash, we're now on the next wave of adoption, with lower-latency protocols and higher-performance flash arrays based on NVMe and NVMe over Fabrics. So what's the level of adoption of NVMe today? Among existing flash users, 48% say they're using NVMe or PCIe flash. This is predominantly, I would say almost exclusively, in servers. And when we look at the drivers for why they're using NVMe, it's largely a function of economics, of performance, greater concurrency, higher bandwidth and throughput, and the business outcomes they're trying to achieve through use cases such as real-time analytics, which is where we see NVMe being used. Let me give you an example.
A large sneaker company, probably one we all know, wanted to collect customer sentiment across millions of customers over social media during a global sporting event, an Olympics or a World Cup: tens of petabytes of sentiment information that they're collecting and analyzing in concert with past purchasing behaviors, combined with micro-events going on within the broader sporting event. So within the next 30 seconds, what's going to happen? If Joe Smith scores a goal, we're going to make this offer. But the offers are individualized to specific customers.
So the offer that Cindy Smith gets is different from the offer that Joe Blow gets. To deliver that kind of real-time analytics, they need performance in the sub-200-microsecond range. That's an example of where we see NVMe flash systems getting deployed. Now, NVMe over Fabrics is very early stage, but it will definitely be a game changer in driving a wave of accelerated adoption of new flash architectures.
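To put the sub-200-microsecond target in perspective, here is an illustrative back-of-the-envelope sketch; the customer count and window are made-up assumptions, not figures from the example above:

```python
# What a 200-microsecond storage latency budget implies.
latency_s = 200e-6                 # 200 microseconds per storage access

# A single synchronous stream can issue at most 1/latency ops per second.
serial_ops_per_sec = 1 / latency_s
print(serial_ops_per_sec)          # 5000.0

# Hypothetical scale: individualized offers for 1 million customers
# inside a 30-second micro-event window.
customers = 1_000_000
window_s = 30
required_ops_per_sec = customers / window_s            # ~33,333 ops/s
min_concurrency = required_ops_per_sec * latency_s     # outstanding ops needed

print(round(required_ops_per_sec), round(min_concurrency, 1))  # 33333 6.7
```

The point of the sketch: low latency alone is not enough; meeting a tight decision window at scale also requires keeping many operations in flight concurrently, which is exactly what NVMe's deep queues are designed for.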
So we see flash strategies being evaluated across a number of key vectors. The first is adding more capacity, whether that's 3D XPoint or 3D NAND; devices are certainly getting more capacity-rich. And we see the use of flash and SSDs across the traditional interconnects, SAS and SATA.
Now we're increasingly starting to track the adoption of PCIe and NVMe, both in the server and in shared arrays. With the use of flash among these data-driven companies, you'd ask: what's the business outcome they're getting from it? What does it yield for the business? Certainly there are operational benefits, but let's talk about business benefits as well. So we asked: what was your rationale for investment, and what was the outcome?
The leading factor here was efficiency: reduced power and cooling costs, reduced floor space, greater density, doing more with less, taking multiple racks and consolidating them down to half a rack, for example. And you get further efficiency because you're able to use write-minimization technologies to reduce the amount of data you actually persist and store. In terms of the average cited gain: a 20% increase in efficiency on a blended-average basis.
The second area is customer satisfaction. One call center I talked to that put their call center app on an all-flash array was able to reduce their call close time by 30%, which is a very big deal in a call center, where keeping calls short is everything. From an operations perspective, it's also not uncommon to reduce internal time spent troubleshooting performance issues by 15 to 20% on a weekly basis by investing in flash. That's a very common statistic I hear.
So 32% said greater customer satisfaction, whether that's an end-user customer or an internal customer; certainly there's a correlation between page load time and performance on the back end. In fact, 55% cited a noticeable increase in customer satisfaction and customer experience because of the investment in flash. And the last area is reduced cost: reduced capacity purchases, reduced power, cooling, and floor space, and potentially reduced software licensing costs. On average, 45% said they saw a decrease in costs in the range of 10 to 25%. So there's a very real business impact, starting with the imperative around data and using data to transform and drive better business outcomes. In closing, I just want to say that yes, the use of flash is about performance and IOPS and reduced latency. And certainly with the rise of NVMe and NVMe over Fabrics, you can get the same latency you would get with local persistent storage, but with the benefits of shared storage, as NVMe over Fabrics continues to evolve. But it's really about understanding the business outcomes you want to achieve, and you all should be evaluating which workloads are best suited for future architectures based on NVMe.
So with that, I'd like to thank you for your time. Feel free to reach out to me online or after this. And now I'd like to hand it over to Eric Endebrock, who's going to talk about the strategy Micron is embarking on.
Well, thanks, Laura, and thanks, Darren. I get the best job here; I always love this part. I get to go talk about the new stuff. We've set the stage a lot about data and what we're doing, but before I do that, I thought it might be a good time to sincerely thank everybody here in the room and online.
We have people watching, hopefully all over the world, tuning into this. And I thought, if you'd humor me for one second: a guy from Austin via Boise, Idaho these days is never going to get to do this anywhere else, and I've always just wanted to say, live from New York! Yeah, thank you. You know, the funny thing was, they didn't get it on the subway at all last night. I don't know what was going on; of course, it was pretty crowded. Anyway, we saw the insights. We see it's about the data.
We see it's about a new economy. And if there's any confusion, hopefully not with Micron being up here: we think flash is really the technology foundation that's going to take this to the next level. I want to build up a little bit of a crescendo; it's not going to take long. I just want to walk through: what are we doing around flash, and how did Micron get here? We've spent ten years perfecting our flash technologies, and even more perfecting memory in general. And what does that mean? Well, we spend a lot of time focusing on the workload.
If you've talked to any of our team anywhere, hopefully they haven't started with "what kind of SSDs do you want?" Hopefully it started with: What are you running? What are you trying to accomplish? What are you doing in your solution? We spend an awful lot of time on that.
And it's really focused. If you think about our portfolio: number one, as Laura talked about, it's always about capacity in storage, of course. Over the last year, we've seen capacities double, even triple in some cases, so the use of capacity is really jumping. That's maybe best evidenced by our 5100 SSD line, where today I think we're the only vendor with 8-terabyte SATA SSDs in the market. And we obviously have NVMe and other products in the portfolio moving well past that, even with line of sight to 25-plus terabytes.
So we see this massive capacity adoption and increase in the market. Then there's flexibility: through our FlexPro architecture, we're making it easier to adopt SSDs, to optimize them, and to connect them into your workloads. What that means is you can control the capacity in real time through software. That's the beauty of memory; it's not spinning media. You don't just get 15K or 10K; it can be whatever we want it to be. And as you start tuning the flash and the media through your SSDs, you can adapt it better to the environment and the workloads you want to run. Then, obviously, there's always performance. We see NVMe as the future: it's a very low latency, high speed technology, and we're always going to be focused on bringing solutions like that to market to solve the customer problems we've discussed.
Then you enter this interesting area of security. There are table stakes, things like encryption and TCG, but then there's FIPS beyond that. How are we rolling that across our portfolio? How do we harden our portfolio for very ruggedized environments, making sure it supports the most demanding use cases? Then you take that one step further: in a recent announcement with Microsoft, we'll be connecting IoT at the endpoint through secure connections all the way back to the cloud, in this case Microsoft Azure. So we're spending a lot of time on how to put this together in a way that matches your security needs. You bring all of that together, and we think flash brings the most valuable economic advantage. So that's our SSD path.
Then we said: all right, we've got this great technology foundation that we build, and it always amazes me, turning sand into something exciting like flash, and we put it into SSDs as a portfolio. Our logical next step, last April, was launching our reference architecture series, which we call Micron Accelerated Solutions. We had companies coming to us saying, look, we get that flash is here, but that's all we get right now.
So we started working with those companies, saying: okay, how do I take this to the next level? How do we start illustrating what flash can do? These have been great solutions, and we learned a lot through them as we helped other companies learn. VMware worked with us, and we created a vSAN solution with outstanding performance, taking advantage of the infrastructure our customers already had, a very heavily virtualized environment.
So that was great; we learned a lot. Then Ceph, and we're continuing these, by the way: next week, if you happen to be at OpenStack in Boston, we're revving our Ceph solution at the storage level to an all-NVMe flash offering, working with our great partners Red Hat and Supermicro, doing some great work extending what we can do in some more cutting-edge storage workloads. So we learned a lot through this, but then we asked: all right.
Well, where do I go from these reference architectures? We took a step back and said: that's software. The next frontier is that all of our software and ecosystem is still written for hard drives. There's tremendous inefficiency throughout the operating systems, the file systems, the block layers; all of it is causing, I guess I'll just call it a barrier. It's blocking us from taking true advantage of what flash and the media can do. So we're spending a lot of time with the software ecosystem and our application writers, and even ourselves, rewriting and tuning all the way from the top down to the bottom, through our firmware. How do we strip out the inefficiencies that are preventing us from taking advantage of flash?
So we're very excited about that and moving it forward. Then you say: okay, I'll take my flash and my SSDs and some solutioning and some software, put all of that together, and start thinking about platforms. But even that doesn't get you all the way. So we went through this last year of really accelerated learning, and you start saying: well, the problems Laura was describing, the connectivity Darren talked about, when you think about those kinds of challenges facing us, flash alone isn't going to solve them. What I have to do is connect it.
I need to use a fabric to connect it all, to reach the insights I'm looking for. My data velocity, my variety, my volume: all of those are coming together in a confluence that isn't addressable today. So NVMe over Fabrics, we think, is one of the key solutions that is going to connect and enable all of these pieces through the software. But maybe rather than me talking about it, we produced a little video. I'd like to show you just a couple of seconds of it, and then we'll come back and talk about our solution.
Well, hopefully that was more exciting than me explaining it: NVMe over Fabrics. So with that in mind, I'd like to introduce our architecture based on NVMe over Fabrics and a lot of collaboration. We call it SolidScale. This is an architecture designed to address the challenges we're seeing and facing in the industry.
The extreme latency challenges, the consumption of data, how to bring all of that together into a solution. As you look at this, let's dive into SolidScale and its architecture. SolidScale is really meant to be an architecture overall, so it is both the platform, and you can see pictures of it in a scale-out mode here. It's designed to be both scale up and scale out. When you go out into the lobby, you'll get a view of this; you can stop by and see it in action, see what's happening with our solution. Basically, whether you're extending your existing IT investment and need to scale up, or you're already modernized, have the right kind of applications, and scale-out is your focus, we've designed an architecture that supports both.
It's really targeting low latency and high performance to start with. We figure you go fast first, and then you can add a lot of different capabilities, but in today's world, going fast, with throughput and performance, makes a lot of sense. So whether you're doing real-time analytics, accelerating databases, or doing IoT consumption at the edge, we needed a solution that solved those kinds of data problems. What we're announcing today is our early adopter program. We've built this architecture in conjunction with our partners.
We focused and put a lot of effort into solving and pathfinding many of the technical challenges that go with it, and you can talk to us about that out in the lobby, but there's still more work to do, and we're putting out a call. We want to work with our partners in the software world and the hardware world, and with our customers, and we'll be hearing from a few of the customers testing these solutions in real time right now. Both here and online, we'd love to invite all of these different groups to come be part of this as we take it forward to the next level. And so, diving under the hood a little bit, let's describe what this solution is. Foundationally, the fabric is a key piece of this. Our great friends at Mellanox have been working with us on putting together a very low latency fabric.
We're using 100 gigabit per second Ethernet to connect up, within the rack, all of our storage solutions coming from the device I showed you, the SolidScale platform. To coalesce it and centralize the management, we're working with our friends at Excelero, great software technology that is built from the ground up for the lowest latency, the highest performance, and the most simplified management experience. And we feel like putting all of these pieces together into a complete solution that we can deliver to you in a rack, or in component form based on the building blocks, enables us to really accelerate adoption in use cases and move this into real time production. And so we're doing that. And again, you'll hear more about what we're doing in real time with our customers here in a minute.
So, what's under the hood here? A little bit more; let's get into some of the performance. And performance isn't everything, but, you know, as I talked about, putting flash into any product doesn't necessarily solve the problem. Today, most all-flash array systems deliver maybe 4 million IOPS. Our 3-node system, highly redundant, fully available, will get 11 million IOPS, and we see this number maybe even increasing.
So it definitely has the power to feed even the most demanding processor systems while moving away from direct attached storage. I guess maybe that's a place to stop. Our design goal, I probably should have said this up front: our design goal was, how do I build a system that breaks you free of having to put direct attached storage in a server to be close to that CPU, while still getting the shared aspects and the economics of pulling that together into a solution, and really being able to manage it as one? Now, we've tested this to 256 nodes, a very scalable solution. We think it's architected to go well over a thousand.
So I think it's a very scalable solution for moving forward into the future. Obviously, IOPS is just one measure of performance. The other design goal here, and I think we heard Laura mention it, is this 200 microseconds being very important to us. We want low latency. Our goal was very little, if any, overhead compared to being local.
And we've kept it within 1 percent. So if you think about what that means, we're around 200 microseconds for an end-to-end transaction within the solution. So the round trip through the fabric has very little overhead and is very predictable, and this is now unleashing this supply of storage to be a multi-tenant environment. And so I think it's very compelling in terms of the raw performance of the solution. And what does that look like in an application? Just take Apache Cassandra here.
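As a rough check on the "within 1 percent" latency claim, the arithmetic is simple. The ~200 microsecond end-to-end figure comes from the talk; the local latency below is an assumed number for illustration, not a measured Micron result:

```python
# Back-of-the-envelope check on the "within 1 percent" latency claim.
# fabric_latency_us (~200 us end to end) comes from the talk; the
# local latency is an assumed figure for illustration only.
local_latency_us = 198.0    # assumed local, direct-attached latency (us)
fabric_latency_us = 200.0   # ~200 us end to end, quoted in the talk

overhead_us = fabric_latency_us - local_latency_us
overhead_pct = overhead_us / local_latency_us * 100

# With these assumed numbers the fabric adds about 1% over local.
print(f"added overhead: {overhead_us:.1f} us ({overhead_pct:.2f}%)")
```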
You can kind of see, and these are different views of running it local versus running on a SolidScale solution. I won't go through each one. But the main message here is that on any given transaction, whether it be updating a database, loading it, or reading from it, you'll see that we produce very similar results to being local, but you're not throwing away the 50% of your capacity and 50% of your IOPS that we estimate are stranded in any given local server. So you've unleashed that much freedom within your environment by doing this. And then lastly, Microsoft SQL Server, and this is all on Linux, by the way.
We're seeing that we fully saturate our 100 gig links; we can produce over 11 gigabytes a second of throughput. If you think about the most demanding modern workloads, the consumption of storage data sets so large that I can't even push them close to the processor: with a solution set like SolidScale, I'm enabling those workloads now in a way that has never been possible before. So we're very excited about what this brings. And this is just the beginning.
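For context on those throughput figures, here's a quick unit-conversion sketch. It only converts units; the number of links and the protocol overheads are not specified in the talk:

```python
# Unit conversion behind the throughput numbers: 100 gigabits per
# second is 12.5 gigabytes per second of raw line rate (ignoring
# protocol overhead), so 11 GB/s is close to saturating such a link.
link_gbps = 100                  # Ethernet link speed, gigabits/s
link_gbytes = link_gbps / 8      # same link in gigabytes/s

claimed_gbytes = 11              # "over 11 gigabytes a second" from the talk
utilization = claimed_gbytes / link_gbytes

print(f"{link_gbps} Gb/s = {link_gbytes} GB/s raw; 11 GB/s uses {utilization:.0%}")
```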
SolidScale is an architectural umbrella. Today, we are focused on a Linux solution and very high performance. We absolutely expect we'll be expanding our operating system support shortly. We have ideas about how we do more extended-memory-type solutions for in-memory databases and really tie the other part of Micron's legacy, the DRAM side, into this. And then there are capacity views as well.
And so this isn't the last step. This is a performance step on the road to much bigger things that we think address the future storage problems and the challenges Laura described. And so out of this, you can hopefully see our technology, our path from where we've been to where we're going. And this is our attempt, as we see these challenges, to step up, take advantage, and start bringing solutions like this that have meaningful relevance to the challenges that IT managers are facing. So that's today's problem, but you can't just think about today; you've got to think about tomorrow.
And so it's going to be my privilege to bring up Brad Spires. Come on up here, Brad. Brad is like many of you in this room: he's from Wall Street. Hold the applause. Hold on a second.
He's from Wall Street. He's an expert. This is the kind of thought leader that Micron has been bringing into our company to help solve these problems, somebody who's walked in the real shoes and solved real problems, and rather than me talking all about him, he can do a better job evangelizing than I can. But thank you all very much for being here. Thanks, Eric. So
good morning. So as Eric mentioned, I spent about the first 20 years of my career on Wall Street, trading a bit, but also building systems. For those of you who are not aware of some of the things that I've accomplished: back in 2003, right, we can take a little walk down history lane. As some of you know, the main challenge on Wall Street used to be valuing the derivatives, right? How do I get enough compute into my data center in order to get the right valuations by the next morning's open? So back in 2003, I actually suggested that we start looking at graphics card processors instead of traditional compute in order to accelerate that. So that's one example in which we looked at how to match the silicon to the problem.
So once we delivered that into production, right, then they said, Brad, why don't you go take a look at external cloud? Go make that secure. Right? So I partnered with the security team. We worked together on some of those challenges, and we were awarded 3 different patents on how to make it secure.
But the underpinning of those patents, right, how we would do that, is called a hardware-based root of trust. And what that really means is that once again, we were actually matching the silicon to the problem that we faced. So after that, they said, tell you what, why don't you take a look at the internal cloud, right, start to take a look at how we would use software defined infrastructure: software defined storage, software defined compute, and also software defined networking. Well, to do that, what we really wanted to do was run as many containers or virtual machines as possible in the smallest possible footprint.
Well, there we learned that the real bottleneck was in fact DRAM and fast storage, not compute. And so we loaded up our servers, which was a major change. Right, we went from 64 or 128 gigs per server to more like 512 or 768. And so once again, you're seeing this pattern: we were mapping the silicon that was in our devices to the problem at hand. Now as we start to look forward, right, and we start to take a look at machine learning, or as Darren and others have been mentioning, you know, how do we derive insights from this deluge of data?
So to help me talk about that, I'd like to bring up Steve Pawlowski. As some of you might know, Steve spent 31 years at Intel. Steve was part of the teams that did the very first PC chip, right, the very first server chip, right? And he was a senior fellow while there. So
Thank you, Brad. Of course, it was really 32 years, but who's counting?
So can you tell us a little bit, Steve, about this wall, the memory wall, right? And why is it that maybe traditional architectures might not enable us to get across some of the challenges that we face as we start to look to derive insights from our data?
Sure. So, a little bit about my background: I spent most of it doing research and design. And after spending 12 years in Intel Labs, I was asked to go back to the product group. I turned it down, got a call from the CEO who said, you might want to rethink your decision, and, you know, a day later I ended up in the product group. And the first meeting I had was with an HPC company that basically looked at me and said, we hate your multicore strategy. And I said, well, it's nice to meet you too, you know; it was my brand new job.
They said, no, we hate your multicore strategy because you're adding cores, but you're not improving memory performance. You're not increasing capacity and you're not increasing bandwidth per core. So I kind of let that go, because HPC, these were big systems, they're 2% of the market, and most companies are losing money.
They're kind of like the Ferrari, and then, you know, you have the Yugo, which is the standard high volume server. But then in 2008 I started spending a lot of time on Wall Street. And the reason was because there were certain individuals within Wall Street who were getting insight into what was happening with platforms and with the competition, and were giving us that insight up to 2 years before the mainstream OEMs were. And so, and I asked him if this is okay to share, Jeff Byrnebaum cornered me in 2008, which he was likely to do many times, and said, let me show you this diagram, and he put up 2 slides.
One of them was our current generation server processor with the Fusion-io flash cache, and the next one was the next generation processor, and he was showing the IOPS. And the Fusion-io just overwhelmed it. I mean, we saw greater improvement with the greater number of cores in the microarchitecture, but the Fusion-io solution was overwhelmingly better. And he said, where am I going to spend my money? So, you know, you don't have to tell me; my dad used to say you had to tell me something three times before I figured something out, but I heard it twice.
And we started focusing on: what are the real aspects of performance that need to be addressed, and what has been neglected over the years? And it's clearly been the whole memory and storage subsystem.
Thanks, Steve. So I guess one of my questions is, how big is this difference, right, in terms of the memory and storage subsystem? If we were to think about the difference in energy between doing the compute and fetching the data, how big a difference would that be?
Well, I spent a lot of time with the National Labs, and especially the guys that do the nuclear codes. And for years they profiled that for every floating point instruction, there are 7 instructions: 1 is the floating point, the other 6 are for data movement. And then there's what it takes to bring the data in at the memory devices, which is what I learned coming to Micron; I really gained a lot of insight into memory. When you're doing a read, even if you want a byte, you're reading 2 kilobytes.
And if you don't use those 2 kilobytes, they effectively get written back and that energy is wasted. So when you profile doing a floating point operation, the ratio is almost 1,000 to 1 in terms of the energy of doing the data movement versus the energy of doing the compute.
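The arithmetic behind that ratio can be sketched directly. The per-operation energy figures below are assumed, round numbers for illustration only, not Micron data:

```python
# Sketch of the energy argument: a DRAM read activates a whole row
# (~2 KB) even if one byte is wanted, and moving data costs far more
# energy than the floating point operation itself. The per-op energy
# numbers are assumed, round figures for illustration, not Micron data.
row_read_bytes = 2048        # ~2 KB per read, per the talk
useful_bytes = 1             # the byte the program actually wanted
flop_energy_pj = 20.0        # assumed energy of one floating point op (pJ)
per_byte_move_pj = 10.0      # assumed energy to move one byte (pJ)

wasted_fraction = 1 - useful_bytes / row_read_bytes
move_energy_pj = row_read_bytes * per_byte_move_pj
ratio = move_energy_pj / flop_energy_pj

# With these assumed figures the movement/compute ratio is ~1000:1,
# in line with the profile described in the talk.
print(f"wasted row energy: {wasted_fraction:.2%}; ratio ~{ratio:.0f}:1")
```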
So it might be better to move the compute to the data rather than the data to the compute.
Yeah. And where I happened to be at that time, that probably wasn't the most popular position to take. But as we started looking at improving the architectures, we started getting to architectures that look very much like what was in fact on the drawing board, and I still have those: Micron's Hybrid Memory Cube, where there's a logic layer on the bottom and we stack memory on top. And that memory could either be DRAM, which is what we were looking at, or at the time flash, and maybe 3D XPoint.
So that, coupled with the following: in 2013, I spent a lot of time in the supercomputing world. I was at Supercomputing in Denver, and I was given the paper on the Automata processor. And my specific task was: is this something that we should actually worry about from a competitive standpoint? My read was no, because it was a different programming paradigm that required a tremendous amount of software. But looking at that problem from the memory perspective versus the processor's perspective gave me insights I would never have gotten otherwise.
So it was clear to me that if we were going to change the game, we had to do it starting from the memory and storage subsystem and working out.
Interesting. Now, the Automata processor is good at pattern matching, right? And it's a bit of a different architecture. Can you say a bit more about that style of architecture, or maybe some of that style's benefits?
Well, it does benefit pattern recognition very much, and there are people looking at that. I mean, there are certain agencies, and certainly individuals here, that when looking at text want to do some distance matching and character recognition at those levels of text. I would say it probably was the beginning of what our concept of a rudimentary artificial neural network would have been.
I think there was a lot of work we would have had to do in terms of making them as robust in terms of classification, but it was the beginning of that style of computation. Now everybody's talking about machine learning and artificial intelligence, and a lot of those are really based on the artificial neural network technology that's been around since the 1980s.
Thanks, Steve. I think there are a number of people here in the room, as well as online, who are working on classification and a number of inference problems. Now, as we start to look even further forward to then apply that, what are the ways in which our partners and customers can drive insights from that? Can you describe some of the different ways that people use it?
Well, we actually, in 2015, purchased a small company in Seattle, Pico Computing, which builds FPGA modules. And the purpose for doing that was, as we were coming out with new technologies like Hybrid Memory Cube and different flash architectures, we could actually integrate them and get them into the hands of users, so we actually had a solution that we could sell with the system. When I first started at Intel, I worked in the solutions group, whose job was to take those processors and, you know, get people to start using them.
I know it was a pretty successful one. So that's what we did with this company. And we actually have an FPGA card implemented with HMC that we use for machine learning. So different companies and different elements are using that; we've got a machine learning algorithm that sits on top as a demo of it. And we're looking at different things: we're starting a collaboration with a health sciences university that's doing genomic sequencing, and they want to use machine learning to understand and get some insight into the different gene patterns of people who happen to share common diseases. So we're using that, and we're using it as a test bed to see how the memory is used.
Because understanding the application, how the application uses the memory, will give us greater insight into how memory can actually be more beneficial.
So there's actually an opportunity for a wide range of customers, right, whether they're doing things with genomic sequencing, or even ID tracking, things like traffic and navigation. But we should also say a word, Steve, about how we actually use it ourselves. Right? So we at Micron, sort of along with you, our clients, also use machine learning, to increase the yield within our own fabs. So we have more than a petabyte of data that we analyze, and as Darren was mentioning, we need to do it in real time.
So as you think about the wafers that produce the SSDs that are part of SolidScale, right, each of these wafers and the chips on it has dozens of different layers. And we need to make decisions in real time about whether or not to move wafers on within our fabs. So very interesting.
It is. You know, one thing I did want to point out: as we've been going through these, the acquisition of Pico is kind of interesting, because their founder is probably one of the most brilliant people I've ever met. And someday I'll give you my treatise on the brain: the average use of energy in the brain is 20 watts. And so some people theorize that people who are socially very, very capable use the majority of that for social networking, and people who aren't use the majority of that for computation, and I've got some examples where that might be true. But anyway, he actually came to me and said, you know, sort is a big problem.
And when I talked to the guys in oil and gas in my previous life, they said, you know, compute is great; I have a problem because I can't sort. So he came up with an architecture of building a key-value store with a sort table inside the DRAM. So as an element is being written, we sort it at the same time it's actually being written, so we're not incurring the overhead of constantly having to update the sort table.
So we can speed up sort by 50x, save a significant amount of power, and it takes very little die area. But what's unique is when you look at the classifiers we're doing, they're the exact same classifiers.
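The write-time sort table Steve describes can be illustrated in ordinary software terms. This is a pure-Python sketch of the concept only; the actual design lives inside the DRAM/NAND device, and the class and method names below are invented for the example:

```python
import bisect

# Sketch of the idea from the talk: keep a sort table up to date as
# each element is written, so a later "sort" or range query pays no
# extra cost. Illustration only, not Micron's in-DRAM implementation.
class SortedKVStore:
    def __init__(self):
        self._keys = []    # kept sorted at all times
        self._data = {}    # key -> value

    def write(self, key, value):
        """Insert the element and update the sort table in the same step."""
        if key not in self._data:
            bisect.insort(self._keys, key)   # no full re-sort needed later
        self._data[key] = value

    def sorted_items(self):
        """Already sorted: no O(n log n) pass at read time."""
        return [(k, self._data[k]) for k in self._keys]

store = SortedKVStore()
for k, v in [(42, "x"), (7, "y"), (19, "z")]:
    store.write(k, v)
print(store.sorted_items())   # [(7, 'y'), (19, 'z'), (42, 'x')]
```

This also matches the later exchange about n log n: the sorting cost is amortized into each write, so reading the data back in order is linear.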
We can, you know, use character recognition and machine learning. So now we can kind of overlay that technology inside a DRAM device, and a NAND device as well. In fact, NAND is actually better because we can store the weights in the floating gate, and they can be persistent.
So what you're saying in terms of my workloads is, you know, back when I studied computer science, it was sort of suggested that n log n was pretty fundamental for how long it would take to sort, or even n if I knew something about the data. You're saying you can do it at DRAM speeds now?
We can do it at DRAM speeds. And so that we're not just falling in love with our own idea, we're actually collaborating with a university in Europe that is going to take the FPGA prototype we have and run it through a range of algorithms to make sure that, yes, we haven't missed anything. And the advantage is that with a simple library call, people can use it with their software today. And then, because of the capabilities there, the software can evolve over time. You know, a brand new architecture needs a new programming paradigm, and this is one of the things that worried me about Automata.
And in this industry it takes 2 Olympic cycles, summer Olympic cycles, 8 years, for new hardware to actually be picked up by the software community. So you want to have something that works in the standard implementation. And when that hardware becomes pervasive, the software starts to take over and then they start using the new capability. But at the same time, you want to get paid for it too, so that's always a difficult trade-off to be made.
Well, thanks, Steve. And now I'd like to turn it over to Mark Laskow, who's going to walk
us through a discussion with some CIOs. Thanks.
Thank you, Brad. Thank you, Steve. Well, these are certainly very exciting times for the industry and for Micron specifically. The nearly 40-year history of Micron is proof that this is a company that knows how not only to evolve, but to change and innovate. And we continue to do that today, as you just heard, with pretty exciting news with regards to storage solutions that we think are going to be fairly game changing in enterprises of all types. So we're very excited about that.
And, you know, as Micron continues to evolve, one of the things that we've done for nearly 40 years now is that our go-to-market has been through the large OEM partners. And that is not going to change; we are going to continue to work with our OEM partners. But one thing that we saw a need for about 3 or 4 years ago was to create a sales team that is customer focused, that is out having conversations with CIOs of major corporations that are trying to solve these really big, tough workloads, and do it in an optimal way that optimizes not only power and cooling but real space in the data center, all the while keeping up the performance and keeping the end user happy. So we're excited about some of the opportunities. One of the things that happened in the last 3 years is the number of customer conversations that we're having that we can bring back to the engineers who continue to innovate and make these exciting solutions. The sales team that I run is the one having those conversations, and when we create that demand for Micron, we pull it through the OEM channels that have been there for, like I said, almost 40 years.
So very exciting stuff. Today, we have a number of panelists that I'm going to invite up here in a minute who can tell us directly, from their worlds, what they are seeing from a flash perspective: how it's changing their data centers, how it's enabling them to do different things with regards to some of the really super demanding workloads that keep coming at us in many different forms. But before I bring them up, I want to take you through a couple of quick slides. I've been afforded the opportunity to fly all around the world and talk to customers on almost every continent.
And these are some of the things that we see out there that are very interesting. So as Darren mentioned, huge amounts of data are being shipped in different storage formats across the globe. In 2015, the total flash penetration was 5%; in 2016, it was 10%. So there are a couple of exciting things inside this first data point. Number 1 is almost 100% growth year on year for flash.
That's exciting. But the other exciting part is that it's still only 10% of the total market of storage being shipped, of the total available market on planet Earth. These are exciting numbers for us. We feel like we at Micron have a lot of room to run, and a lot of expansion opportunities.
So again, you'll continue to see us innovate and create solutions that allow us to grow these numbers, to expand, and to build upon the legacy of a great company. The second thing that we see is the workloads. Boy, are they changing. Look at the super rigorous demands created by multi-tenant architectures and cloud in every form: public, private, hybrid. These are big, big technology opportunities for companies to gain efficiencies in the data center, but they are not easy to solve for.
And I love what one CIO said when he placed a set of our NVMe drives into his data center. He said it was like aspirin for his data center: everything just sort of felt better. The CPU utilization improved. The IOPS went up.
The latency was reduced. His power and cooling were reduced. The data center real estate being used was reduced. All of these things are good things. It was like aspirin for his data center.
So I've stolen that; it's good. The third thing: continue to drive and bring data closer to the CPU. Again, the workloads are demanding this. It's very hard to achieve some of the machine learning and big data items in environments running on a storage area network, where the request has to go across the network and hit an actuator arm. There's just a lot of latency there.
And so Micron continues to exploit these opportunities as these new workloads come at us and the random IO patterns get even more variable in the mixed workloads. These are all very exciting opportunities for us. Micron specifically is very uniquely positioned. In fact, of all the storage companies that I've known in the nearly 3 decades that I've been in the storage business, I don't know that I've seen a company better positioned, with the market coming at that company the way the market is coming at Micron.
So many opportunities. In fact, I heard just the other day the CEO of SoftBank, Masayoshi Son, mention that our tennis shoes will have NAND and DRAM in them so that they can measure not only the steps that we're taking, but our weight, our hydration, all sorts of telemetry data about your body. I mean, this is outstanding stuff, and Micron stands to benefit from the internet of things. What's amazing is that there are only 4 makers of NAND flash on the planet. And so we are one of very few, and the barriers to entry into this business are pretty significant.
We don't see that growing by a whole lot. There are 3 makers of DRAM on the planet, and we're one of them. In fact, that's what we've been doing for 38 years now. So super excited about that: our ability to continue to innovate in a market that is driving demand, and actually seeing explosive demand.
There are only 2 companies that do 3D XPoint and that also do volatile and non-volatile memories under one roof, and Micron is one of the 2. I won't mention the other one; I mean, this isn't their show. But that makes us very uniquely positioned. And then the last one: there's only one company that does NAND, DRAM, 3D XPoint, and now solutions, actual storage solutions that our partners, our OEMs, can benefit from and take to market. And it's like aspirin for the data center.
Everything just sort of starts to feel better. So uniquely positioned. We're excited. Let me go to the next step here and bring up our panelists. First up, Justin Stottomire, where are you?
He's right there. So Justin has a very interesting background as an IT professional. He's been at eBay, PayPal, Facebook, and Shutterfly, and he now works for Intuit. So Justin, thank you for joining us. Next up, Don Dewat.
Don, thank you for being here. So Don was a partner at Goldman Sachs for 10 years. He was at the firm for 28 years, in charge of their global data infrastructure. So, Don, we appreciate you; we look forward to hearing from you.
And then last up, Trevor Schultz, CIO for Micron. Before joining Micron, Trevor spent time at Broadcom, AMD, and Cisco. So, Trevor, thank you. Okay. Let's kick this off.
So, Don, share with us: with your time on Wall Street and at Goldman, surely you've seen a lot of things and had a lot of challenges thrown your way. How has flash changed things? And I'm not talking just about the increased IOPS and lower latencies. What are some of the other things that you saw that had a really major impact?
Sure. First of all, I'd say thank you; I'm really proud to be here. I think what we have seen over the many decades has been that innovation, particularly at the hardware layer, has really created fundamental change in the way that we think about technology and the types of solutions and services we could do. Obviously, the presentation made earlier today showed that at a macro scale. When you step back a little bit and think about flash, it was only a couple of years ago that it was still thought of as a very nascent technology: we were worried about write ratios, we were worried about its durability, we were worried about whether it would actually continue to work in a high availability setting.
And could you really get your mind through that? I think if you snap forward to today, certainly at Goldman, and I think for many other customers, it is now just embedded in so much of what we provide. It's part of the core infrastructure, part of our core architecture. We've been deploying internal clouds that cover almost 90, in the high 90s, percent of our application use cases globally around the world. And at the center of that, when you drill down to the servers that are facilitating that, there's flash as the disk in the storage.
And so it's really gone from being what felt like a really innovative and quite creative solution, with a lot of concerns about how you enter the market, to something that's now become just a core part of the stack and very much in place. And on the business side, I think we really saw the benefits of that early in places where the risk/reward was high: low latency trading businesses, places where we had congestion in the architecture. But now I think we're also seeing the opposite. We're seeing not just those businesses having benefited from the performance gains, but also that by having this now standard as just part of our offering, it is creating new applications, new products, new design. We're getting higher order access to and use of information on a real time basis that in the past would have been constrained, absent this being a generic part of our architecture. So it's changed a lot.
And again, it's another example: NVMe, I'm hoping, will follow a very similar story in terms of starting off early, but then ultimately really helping enable a substantial amount of business change and growth.
Outstanding. So what I heard there: NAND flash prices obviously have been dropping over the years, making it more available to a wider and broader set of workloads that can justify using it. But then also on the other side, it's less the push side of it; on the pull side, you've got applications that demand better, faster, more efficient storage.
Whenever you can offer better, faster, cheaper, ultimately people will adapt to that, and that becomes the new normal. And then, of course, people innovate on top of that and create even harder problems to be solved. So we're kind of constantly in that cycle, but it's really been driving that growth, which is very impressive in the short time that it's been in place.
Justin, how about you? Same question.
I've been deploying flash now for 6 years, since the very early days when it launched, looking at, to Don's point, better, faster, cheaper; that's my Twitter tagline, right? It's always about improving application performance, unblocking engineers, and providing business value, and flash does all of those things. In some cases, I've replaced old spinning rust with new flash arrays, or with in-server memory flash, at 10-to-1 ratios in the data center.
That type of consolidation just provides massive amounts of business value moving forward. And the instant that one set of developers or one business unit gets results in microseconds rather than minutes, everyone is demanding the same levels of performance.
Right. You just can't back away from it once you've delivered it.
It's like aspirin.
Everybody wants it. Trevor?
You know, I think everyone put it really well. I mean, there's a tremendous pressure to innovate, and if you can take advantage of an infrastructure change like this move to flash, you know, the better, faster, cheaper just keeps going, and it is the new normal, like Don put it. So, you know, as a CIO, there's this constant drive to, you know, get more analytics to people faster and in real time, figure out how to reduce costs, figure out, you know, the total cost of IT.
And, you know, there's that constant drumbeat of innovation. The flash piece, I think, is core to a lot of what we're deploying right now.
Great. So, Justin, you've spent a little bit of time with our brand-new NVMe over fabrics product, so let me throw a question out there for you. You've been kicking the tires on that. Tell us a little bit about what you're seeing so far, why you think that might be important to your enterprise, and the value you'd derive from it.
Absolutely. There are a few pieces there that I'm looking at, and it's really about what the next level of performance looks like in the data center. Just as flash has really disrupted disk, it still has the limitations of some of the previous architectures sitting behind it, and NVMe over fabrics is really looking to uncork a lot of those problems.
And especially with the way that you're showing your SolidScale architecture: you know, the same way that software-defined networks are taking over today, very quickly and at broad scale, I see that happening with software-defined storage, and NVMe over fabrics is sort of the underlying platform that allows that to happen. And just as was called out earlier, I do have isolated storage. I do have performance left on the floor.
I don't want that to happen, and I think NVMe over fabrics is where it's going to be, with massive bandwidth and low latency. I don't even measure IOPS anymore. Massive bandwidth, low latency, get results to users. And I'm also looking at deploying this everywhere.
Everywhere I have an analyst blocked by performance, I want to throw NVMe over fabrics at it. I want to unblock them. My data center is expensive. My head count is way more expensive, and wasting their time is just useless, right?
I have to keep them on course.
Okay. So, and this is for whoever wants it; I'll just throw it out there. One of the things that I see when I run around and have conversations with CIOs just like this is that companies that are able to drive costs out of the data center, in other words, drive better forms of efficiency, seem to win. They seem to win big.
And one of the biggest inefficiencies that I see, so much so that it's now become a game for me: I'll go around and visit, and in fact, I was with the head of worldwide IT infrastructure for a major bank who has several hundred thousand servers deployed, and I asked him, what is your average CPU utilization? And with a deadpan straight face, he said 2%. This is a massive, massive inefficiency, and we know that flash helps with that. This is also a big deal because most of the big database software companies charge by cores.
So if you have to buy more cores and only use 2% of them, that becomes a very, very expensive problem. Can any one of you tell me how, specifically, NVMe over fabrics might help reduce the overall cost by way of increasing CPU utilization?
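As a rough illustration of the economics Darren is describing, here is a minimal sketch of how per-core licensing interacts with utilization. All of the dollar figures and the 60% target are hypothetical assumptions for illustration; only the 2% utilization figure comes from the panel.

```python
# Hypothetical sketch: per-core licensing cost vs. CPU utilization.
# The $10,000/core price and the 60% target are made-up numbers;
# only the 2% average utilization figure comes from the discussion.

def effective_cost_per_busy_core(license_cost_per_core: float,
                                 utilization: float) -> float:
    """Cost paid per core, divided by the fraction of it doing useful work."""
    return license_cost_per_core / utilization

# At 2% average utilization, a $10,000/core license effectively costs
# $500,000 per usefully busy core.
at_2_percent = effective_cost_per_busy_core(10_000, 0.02)
print(at_2_percent)  # 500000.0

# Raising utilization to a hypothetical 60% cuts that by a factor of 30.
at_60_percent = effective_cost_per_busy_core(10_000, 0.60)
print(round(at_2_percent / at_60_percent, 1))  # 30.0
```

The point of the sketch is that the license bill is fixed per core, so the effective price of useful work scales inversely with utilization; anything that keeps cores fed (such as lower-latency storage) attacks the denominator.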
Well, I think it's been pretty well proven that if you can get denser, then ultimately you get more utilization out of your assets, and that story at scale plays out to meaningful economics. And I think that's important. I come back, though, to the fact that, ultimately, for CIOs, that may be a critical thing, but the more important thing is the enabling aspect: getting to agility, getting to where you can change, because the world is changing so fast and companies are digitizing so fast.
And again, this is becoming much more table stakes. Now you just can't solve problems without having this kind of infrastructure in place. I think that is as much of a driver as the economic savings you can get by getting denser and cheaper, because ultimately that's what they're there to do: help lead their business forward, help them really grow top-line revenue, and really improve the function of whatever company they're in.
So, agility and flexibility.
I agree with Don. You know, the cost is important, but really, if you look at most of my peers when I talk to them, you know, the mantra is fast IT. Speed is the new currency of business; data is the new currency. And it's great to have the cost component, but that business enablement is so critical, because the business is being forced to make decisions faster, they're seeing larger data sets, and, you know, the business is getting more complicated across the board, regardless of digitization. So cost is important, but the real winner is when you can get that business value and the cost combined.
Right. I would also say I think the fabric story is both fascinating and very important. Part of the challenge, and we've all learned this the hard way, and part of the reason why someone would have 2% CPU utilization, is that we tend to build things fast for bespoke problems, and you get a very artisanal solution set in your data center. So starting with a much more integrated fabric, where you can get the same degree of capabilities that really enable your developer to solve this problem for that place and that problem for this place, but having uniformity at the infrastructure layer, is so powerful. If you were only shipping a one-U box, you'd probably have lots of snowflakes of those one-U boxes. Having a form factor that really can be deployed in the way that the fabric is, I think, is going to be incredibly powerful and will hopefully reduce that kind of problem of, I've got lots of artisanal solutions that are now hard to really get efficiency out of.
Yes. To both of their points, the snowflake is a big problem in the data center. Having something ubiquitous that allows orchestration and automation is key, right, to getting my operations staff out of the way of my developers, and out of the way of my business analysts as well. One of the other things that you'll see happening, of course, is the migration today from internal data centers to the public cloud, and both of these gentlemen are really competing, as internal IT, against the public cloud for providing the best levels of service.
And affordability. Great.
So 11,000,000 IOPS in 6U are some of the early numbers that we're seeing. I'm probably not supposed to say that, but yes, we're seeing some very, very large performance numbers in a very, very small space. We know that NVMe does wonders for reducing latency. Sharing it over the fabric just opens that world up.
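Taking Darren's early figure at face value, a back-of-the-envelope sketch of what 11 million IOPS in 6U implies for rack density. The 42U rack height is an assumption for illustration, not something stated on the panel.

```python
# Back-of-the-envelope density math for the quoted 11M IOPS in 6U.
# The 42U rack height is an assumed standard size, not a panel figure.

iops = 11_000_000
rack_units = 6

iops_per_u = iops / rack_units
print(f"{iops_per_u:,.0f} IOPS per rack unit")  # 1,833,333 IOPS per rack unit

# A standard 42U rack filled with these 6U systems would hold 7 of them.
systems_per_rack = 42 // rack_units
print(f"{systems_per_rack * iops:,} IOPS per 42U rack")  # 77,000,000 IOPS per 42U rack
```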
Do any of you see new application areas, things that we just couldn't do before? Obviously, big data: we know there's stuff happening there. I know, Trevor, you're using it in the Micron manufacturing operations. Maybe tell us a little bit about that. What is flash enabling you to do, from an application perspective, that you couldn't do before with a SAN?
Yeah. So maybe some context. Just to simplify things, in a manufacturing fab you have sort of recipe and scheduling management, which is a certain set of business problems. Then you have your process control. And then you have sort of the big datasets of analytics that you're working with.
And my team is finding, I mean, we run hundreds of different applications across the board, and with each workload you look at it and go, does this make sense? And more times than not, it does. You know, a tangible example is that middle layer, the process control, in sort of an IoT type of scenario. We have tool data. We have machine data.
We have application data, all streaming in. You know, this isn't big data, but it's about a 20-terabyte database. And, you know, with the old tiered storage that we used to have, it used to take us about 5 hours to get to the answers that we're looking for: are things working as they should? And just by making this infrastructure change to flash, you know, that takes 20 minutes now.
And we're trying to figure out how to get that down to 5 minutes. So, you know, we're talking about orders of magnitude of change potential. And why does that matter? Because when process engineers get access to data faster and make faster decisions, they can actually change things and increase the output. And if you look at the big data piece, it's another level, where, you know, Brad mentioned it earlier, it's 1 petabyte of structured data, but it's another 14 to 15 petabytes of unstructured data.
And, you know, people are digging into this information, this data, and finding things they never saw before. And the faster that people can get to that information, you know, and do analysis on that data, the more business value we bring. So we're seeing tremendous business value, just in the financials.
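The timings Trevor quotes imply large speedup factors. A quick sketch, taking the quoted baseline as roughly 5 hours, which is how the transcript's surrounding context reads (the exact unit is garbled in the recording):

```python
# Speedup implied by the quoted query times. The ~5-hour baseline is an
# assumption recovered from context; 20 minutes and the 5-minute goal
# are quoted directly.

baseline_min = 5 * 60   # old tiered storage: roughly 5 hours
flash_min = 20          # after the move to flash
target_min = 5          # the team's next goal

print(baseline_min / flash_min)   # 15.0 -> more than one order of magnitude
print(baseline_min / target_min)  # 60.0 -> approaching two orders
```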
Well, some of the tolerances that we have to deal with in the fab, right, are so tight and so tricky. The faster that you can get that information and see those insights to make those changes, to your point, increases productivity. Absolutely. Gets more wafers
to all the people in the room.
Yes. And we know the world wants more wafers. So, Justin, any final thoughts on NVMe over fabrics? I know you've been staring at this for some time.
I've been looking at NVMe over fabrics for about a year and a half. The implementations are just now kind of hitting the ground. I'm really excited about what this means. As you talked about, CPU efficiency: agility is key, but CPU efficiency, and driving that end to end, is really important to me as well. From a data analytics standpoint, right now I have a hard time driving greater than 60% CPU utilization out of my Hadoop cluster, for example. If I really want to be able to crank that up, I need to be able to balance my I/O, balance my CPU, balance heat and cooling across the data center floor, and on top of that, density.
So the densities that we're starting to see come in can only be taken advantage of by something like NVMe over fabrics. Without that, they'll be useless and we'll be siloed, just like yesterday, just like today. Right? So these things are really uncorking what we're going to be able to do, across the data center floor and across all these analytics platforms.
Certainly exciting stuff. Don, any final thoughts from you? Yes.
Well, I think it's similar to what Steve said before about the two-Olympic cycle. I think in business applications, it probably feels that way too. And I think what's really amazing, and it's going to be a very interesting story that plays out over the next couple of years, is how people are going to really change their design patterns for business applications, particularly around things like transactionality and ACID architecture, and how NVMe in particular just gives you a completely different way to think about that from a concurrency and transactional perspective. Again, I think the early adopters will be people who see immediate value in getting to much higher-bandwidth, streaming-type architectures for business problems that today are constrained in more of an ACID, database-driven architecture. But it will be really fascinating to see how the use-case patterns and the design patterns emerge over the next several years, now that the technology is going to be in people's data centers and in the hands of their developer teams.
It should change things. It could be quite dramatic. I mean,
I think, you know, there's that dream of a new base architecture, sure, but there are the realities of, you know, this is a business, and in finance, obviously, the criticality of records and getting record retention perfect. I mean, I think you have a very different landscape now to be rethinking how you build the next generation of applications.
Right. And I'm not sure I really want telemetry data on my body and my tennis shoes. So while that's a cute application, this is real-world stuff from Wall Street, and there are massive, massive pressures on you, in your old job, from the users to get things done and do it in the most efficient way possible. Do you have any final thoughts?
You know, my final thought is, as I talk to my peer set, this is a foregone conclusion. You know, I think most everyone is moving to flash; it's just the economics around it. A lot of people have moved early, and I think there's no confusion in people's minds about the business challenges and where this technology really solves a lot
of them.
Yeah. Well, we certainly feel it at Micron. Again, we're afforded the opportunity to have so many of these conversations. There are literally oceans and oceans of spinning media out there that, as we speak, are melting the polar ice caps, because they really, really do require a lot of energy. And so we do see a massive opportunity, for not only Micron but for the user market at large, to create completely brand-new applications that just were not possible before. So, exciting times, gentlemen. Thank you very much for your time today.
We appreciate it. Next up will be Darren Thomas.
Okay. So, first of all, I want to say I hope I answered the question: why Micron? Because you could see it from our customer set, who understand this at a technology level but have the real business situations, the business concerns, that Laura pointed out. And our customers are reaching back down to us in their designs. So you have this movement where large corporations want to understand exactly how the NAND and DRAM and the flash products can solve their problems. You also saw, when Steve and Brad talked, for those of you who could follow it, that the details about the NAND, the details about the flash, the details about the networking, and everything that connects them are what make this possible.
And having deep insights from memory to storage, through the infrastructure, all the way to the workload: that's what it takes to have this insight, and the product we are announcing, the product you'll see here in just a second, has that kind of insight built into it. So that's the why-Micron story, and I hope you'll see that. I would like to thank Kirsten and her team for bringing us all together at this great venue. I also want to mention that sitting over here to my left is Tom Eby, the general manager of our computing and networking team, as well as Mark Durcan, our presiding CEO for now.
And the reason I want to call him out: it was his vision to take a memory company to where we are today as a solutions company. Most of my team, and the people you saw talking, and I wouldn't be here if Mark hadn't had that vision. So I think today, and last night, might be a little bit of the fruition of many years of his hard work, to go from a company that was just a DRAM company, which is pretty amazing, to a company that is a full-on enterprise solutions and partnering company. I want to thank our partners, everybody in the room. I want to thank those on the webcast for being with us. For those in the room, we will have a gallery that's just out these doors and down the hall.
Lunch will be served there, and we will have a good number of my staff here. The design team who built this product is here. There will be demos, where you can ask questions of the people who actually run those demos and the people who actually developed the technology.
Mark will be here. I'll be here. Tom will be here. We have a good number of our sales team here who can take an order, and I know Steve is here, if anybody wants to do that. So with that, I want to close by saying thank you very much for coming. I hope we have demonstrated to you why Micron is the company that can take you to this level of insight and help our customers get to the answers. Thank you very much.