We are excited about today's discussion on the future of Pure Storage. Let me remind you that we will be making forward-looking statements today that are subject to assumptions, risks, and uncertainties. Actual results could differ materially from those anticipated due to a number of factors, including those referenced in the detailed disclaimer at the beginning of our presentation slide deck and in our public filings with the SEC, which we encourage you to review. The presentation slides discussed today will be available on our investor relations website at investor.purestorage.com.
Hello, everyone, and welcome to our Product and Technology Analyst Meeting at Accelerate. We appreciate that you made time for us in your schedules today, and we intend to use the two hours efficiently. The plan is to spend the first hour on short presentations and conversations, and the second hour on informal and interactive Q&A. Charlie will start us off with Pure's perspective on flash, both historically and in the future, as well as our competitive position and the outcomes that we drive for customers. A spoiler alert, and this won't surprise some of you: our Purity operating system is key to everything. Partway through Charlie's talk, Coz will come up, he'll use his props, and he'll cover the historical progression of our Purity-managed DirectFlash modules.
Most importantly, as only Coz can, he'll explain how our trajectory with our DirectFlash modules will leave hard disk drives and solid-state drives in the dust. Under the headline of All Flash Everywhere, Amy and Shawn are going to cover a number of products that enable Pure to further expand across the data center. This includes our unified block and file and unified file and object platforms, as well as our E family. As of this week, we have an E family. Amy and Shawn will also cover AI use cases for Pure, as well as our Fusion and Cloud Block Store software. Rob will conclude the first half of our meeting with a more detailed look at what enables each of Pure's four unique, sustainable competitive advantages.
When Rob finishes, Kevin and Ajay will join us all on stage for a Q&A. I'll hand it over to Charlie to kick things off.
Thank you, Paul. Hello, everyone. It's great to see you again, and thank you for coming all this way to see us. Really appreciate it. I am going to present to you this morning, and I know that at least a few of you here were here yesterday, so this may be a bit redundant, but as a former colleague used to say, "Repetition doesn't spoil the prayer," so I hope you forgive me for that. We really look forward to having our entire team in front of you today. Without further ado, I'm going to start off with a true story.
In 1984, in my last semester of business school, I started doing an independent study with a friend. Believe it or not, the independent study was on this new recording medium called optical. There was a lot of euphoria about optical at the time: was it going to replace magnetic disk, which was still in its infancy? In fact, it hadn't come out yet for the PC; it was just in mainframe environments. We did a deep and detailed analysis and determined that magnetic storage had decades of improvement coming, that it was unlikely to be replaced anytime soon, especially by optical, which had a number of limitations.
Well, a little later that year, I had gotten married, my friend had gotten married, and we got together for dinner with our two new wives; it was BC, or before color photos. My friend had continued that study, as it turned out, with a consulting group, and he came to dinner and said, "You know something? Within five years, vinyl LPs are going to be gone, completely replaced by compact discs." My wife, who just loves music and loved her LPs and her albums, said, "That is just impossible. That is never going to happen. What are people going to do? They love their albums, they love the album covers, with the beautiful pictures and all the words." She was just incredulous.
I think we know what happened here. It only took about six or seven years, and vinyl was almost gone, right? It all went to compact discs. Oddly enough, vinyl probably outsells compact discs today, but that's a more recent phenomenon. In just six or seven years, LPs were gone, vinyl was gone, CDs ruled. Another medium we're very familiar with, again an information-carrying medium: VHS tape. In the year 2000, it was dominant. The way we played our movies was through our own home video cassette recorders. DVDs came out, another optical recording technology, and within five or six years, they were completely dominant.
What was different about this, though, was it wasn't just the media that changed; it changed an industry. We're all very familiar with this industry. Look back at 2006, when DVDs became dominant. At that point in time, Blockbuster was dominant. It sold VHS, and it had switched to DVDs as well, but there was this upstart, Netflix. Within five or six years, not only did Netflix become dominant, Blockbuster went bankrupt. Netflix today does over $30 billion a year and is over $30 billion in market cap. A media transition can change not just the media, it can change industries. What does all that have to do with storage? Well, let's look at storage more recently, because hard disk has been dominant for five or six decades, a very long period of time.
Oops, sorry, that didn't go. There we are. Flash actually has been around for decades as well, about 30 years, perhaps more, depending on how you look at it. Really, it started entering the consumer space around 2007. I don't know if some of you remember that one of the original iPods actually had a hard disk in it. It was very heavy. Since 2007, as you can see, flash has continued to grow, mainly because of the consumer revolution, especially handheld cell phones. It got to the right price point, and NAND flash manufacturers were very clever in packaging flash as an SSD. What is an SSD?
An SSD is flash that has been packaged to mimic a hard disk. If you mimic a hard disk, it can go into an existing laptop or an existing desktop, and they don't have to change the software to operate it, because it operates just like a hard disk. It's lighter, it's more rugged, and that's what has allowed it to take off. It replaced one thing at a time, right? Cell phones, iPods, your DVD recorder, your cable and/or satellite box at home, and then eventually laptops and desktops. As you can see, flash, and this is the raw flash market, continues to grow year after year. Hard disks are still around, but you can see that's been coming down quite aggressively.
This is why we believe we're on the verge of something very new here. We believe that in five years, there will be no new hard disks sold, because they only sell in two places right now, enterprise mass storage and hyperscalers. Those are the only two locations where they go, and we are breaking that price barrier. What does that mean relative to Pure Storage? Well, let me start from a different standpoint. For about the last 10 years, since Pure was first introduced, we've continued to grow. Not only have we continued to grow, we've been able to increase our market share by eight points, and this is against the total storage market, not the all-flash market, where we obviously have higher market share, but against the entire storage market.
Flash, and specifically Pure, has been able to grow. The right question to ask is why? Why have we consistently grown year after year against much larger rivals, against companies with products that certainly had more feature content than ours, despite the fact that we didn't have any low-end products that could compete in the mass storage market, because that's still dominated by disk? Why have we been able to grow in this environment? I would tell you that it's for a variety of reasons, in particular the outcomes that we deliver to our customers. I want to remind you here, I am talking about multiplication factors, not percentages. Our products are two to five times more space and power efficient.
We use less power and less space than our all-flash rivals. All right? Apples to apples, same capacity, same performance, a flash product from a competitor versus Pure, we're two to five times more space and power efficient. We are over 10 times more reliable, not even counting the fact that when our competitors do an upgrade, they have to take the system down. They have to take their customer's application down. In fact, there's an asterisk after the reliability figure that says, "Not counting scheduled downtime." At Pure, we have no scheduled downtime. We upgrade without disruption. We require five to 10 times less labor than our competitors, and this is customer reported.
For the same amount of storage, we require five to 10 times less labor to operate. You put all that together, including the original purchase price, and we have 50% lower total cost of ownership, and our customers get their nights and weekends back. In addition to that, we have the most consistent product line in the business. What do I mean by that? Most of our competitors have five or six different models of product with different operating systems and different hardware environments, most of which have been assembled by acquisition. If I'm a customer buying from one of our competitors, I typically end up with five or six entirely different products in my environment.
As you'll see in a moment, we have the most consistent product line. Finally, with our Evergreen program, our products are never obsolete. What do we mean by that? Well, we have a customer that started with us about 10 years ago. Over the years, with our Evergreen program, that product has been continually updated, so that if you were to look at that product just a few months ago, it looked like the product we sold this past year. Brand new. Every bit of hardware, every bit of software, brand new. The customer never had to take their application environment down, and never paid an additional cent beyond the subscription program they're on. This is unique in all the industry. The question now is, well, how do we do that?
How is it possible for us to be five to 10 times better than everybody else? It really boils down to the following. It starts with our Purity operating system. What is Purity? Purity is the operating system that runs consistently on all of our products, which I'll go into in a moment, and it is the only software that works on native flash. What do we mean by that? We don't use SSDs, because SSDs make flash look like a hard disk. In my opinion, that's like making a personal computer, which is a semiconductor device, look like a typewriter. I think we all know that it's much more powerful than a typewriter. Coz will go into this in more detail. SSDs suboptimize the performance of the flash.
We take all of the requirements for managing flash and put them in our core software, which we call Purity. Purity and our DirectFlash modules work across our entire hardware product line, both scale up and scale out. Scale up to address smaller capacities with very low latency; scale out for higher capacities with very high performance. They all run Purity, and they all use the same DirectFlash modules. In addition to that, we even have Purity operating in the cloud, on AWS and Azure, as Cloud Block Store. Of course, we'll continue to expand that capability. All of this is managed by one management system, Pure1. This is for traditional workloads: FlashArray and FlashBlade, array-based block, file, and object.
Of course, there are new workloads for containers and Kubernetes, and we have our Portworx software to enable that; it works on our products and works in the cloud, just as Purity does. All of this is enabled by that Evergreen capability I told you about, which means non-disruptive upgrades all along the way, and it can be purchased as a service, that is, purely as an SLA. And then there's Pure Fusion, which enables a cloud operating model. Instead of the customer having to operate their storage systems array by array as independent units, they can operate them as a pool of storage across a data center or across a multi-cloud environment. This is the world's most consistent product line for sure.
One operating system, only two hardware architectures, both supported by the same DirectFlash modules, one management system, able to operate both traditional workloads and next-generation, cloud-native workloads, all enabled by Evergreen. If we go a little bit further than that: all of those were block, file, and object, but until recently, we could only compete at what's called the primary storage level. What is primary storage? High-performance storage: transaction processing systems, AI, analytics, EDA, electronic design automation, et cetera. Mass storage, truly mass storage, stayed on hard disk because it was cheap. Storage is generally expensive. It's a large fraction of data center spend.
If you're just going to store a file away for a long period of time, you really don't want to put it on something that costs a lot of money. You want to put it on something less expensive. Flash, by its nature, started off much more expensive than disk. What we're able to do now with our E series is go after the entire disk estate. That's why we believe disk is going to be done in the next five years. We're now at the point where it really doesn't make sense to buy disk anymore. We introduced our FlashBlade//E last quarter and had our first few sales. It starts at about the four PB level. At this Accelerate, we introduced FlashArray//E, which we'll be selling next quarter.
That'll start at the one PB level. At these levels, we're now penetrating, if you will, the upper ends of the all-disk market. As you'll see in a moment, over the next year or so, we're going to be able to penetrate all of the disk market from a pricing standpoint. If I were to net this out, it really boils down to four unique and sustainable competitive advantages for Pure. What do I mean by unique and sustainable? These are competitive advantages that I don't believe our competitors can match anytime in the near future, and they give us a multiyear head start. One is our DirectFlash management, which I mentioned with Purity.
We are the only company that has built software to manage flash natively rather than going through this SSD translation. It has been a 10-year journey for us to build this, and to build it with all the features, functionality, reliability, et cetera, that are expected in an enterprise environment, and we don't think it's going to take anyone else any less time to do it. Two, the world's most consistent product line. Everything I told you about covers block, file, and object. It now scales from low price to high performance. It does it with the same DirectFlash modules on all the units, with one operating system, all managed by a single management system. As I mentioned earlier, if you buy from our competitors, you may think you're buying from one company, but you're buying five or six entirely different products, with different management systems, different management procedures, and different hardware. Three, the cloud operating model. We announced this two years ago and introduced the first version of it last year. No competitor has yet even made an announcement about an intent to do this. You need consistent APIs and consistent products to deliver something like this. Frankly, you need the vision to do it. This is something that will reduce both the amount of storage that a customer needs and the amount of labor they need to manage it.
Finally, four, our unique Evergreen life cycle, where products never go obsolete once purchased, and you never have a disruptive upgrade as you go from one version to the next, or during a software or hardware upgrade. With that, I'll turn the stage over to our illustrious founder and leader, John Colgrove. Coz?
Thanks, Charlie. I want to talk a little bit more about our DirectFlash advantage. To start with, this is a hard drive. It's looked pretty much the same for quite a number of years now. If you think about it, it's been an amazing journey. In 1970, give or take five years, actually, maybe '65, hard drives became the dominant form of storage technology, and they went on a fantastic set of decades of improvement, exponentially getting better and better all the time. When I started doing this in the mid-1980s, a disk pack was like the size of a large pizza, about that high. It went into a cabinet the size of a washing machine and held maybe a couple hundred megabytes. Ten years ago, this thing held 4 TB, roughly speaking.
Today, it's maybe 20 or 24 TB, something like that. In the last decade, that's only a five or six times improvement. That seems pretty good, and it would be for most things, but it's actually not very good compared to the history of disks. By the way, when we talk about e-waste, just so you're aware, this thing is kind of heavy. It weighs as much as, I think, every laptop that I see in this room. You can actually get some laptops that weigh more, but trust me, this is heavy. Ten years ago, we would have shipped one of these instead, right? An off-the-shelf consumer-grade SSD. The original FlashArray was 5.5 TB. The whole array. Not one of these drives, the whole array, 5.5 TB. In fact, I have a couple of examples here.
You can see, in 2013, when we shipped our second generation, we were so proud. We got up to 35 TB in one whole array. 35 TB in one array. Hot dog. In 2014, we doubled it to 70 TB. We were really proud of that one, too. We've been through a lot of changes, but one of the biggest was the change to DirectFlash. This was originally, as I said, 10 years ago, 512 GB. Currently, the largest DirectFlash module we ship is 48 TB. But this one, which we're introducing next quarter with the FlashArray//E and the FlashBlade//E, is 75 TB. Think about that from a couple of perspectives.
In 2014, 70 TB was something to celebrate for the whole array. Now, one device holds 75 TB. That's a 150 times improvement over the last decade. Compare that: disk, five times improvement; flash, 150 times. If we do 150 times over the next decade, I'll be standing up here showing you an 11-and-a-quarter PB DirectFlash module. I suspect we won't be quite that dense, because I don't think the market will demand it, but it'll be a lot bigger than 75. Now, that 150 times improvement is driven not by the hardware. I'm showing you the hardware because it's something I can show you; it's driven by the software. We started doing this DirectFlash 10 years ago.
Our engineers had to learn how to overcome all the things that that drive did that this one doesn't, right? We've taken out the wear leveling and the garbage collection. When you write to flash, you don't overwrite data in place. You write block one to that SSD, and then when you go to write block one again, it puts block one somewhere else. Inside that SSD, it remaps the data, because it's pretending to be a disk. That remapping requires a bunch of memory, it requires a bunch of compute, and it adds extra overhead to the IO. If I opened up that SSD, roughly speaking, for every 1,000 bytes of NAND that it has, it has one byte of DRAM.
That means if you open up a 30 TB SSD today, you will see, roughly speaking, 30 GB of DRAM. If somebody were to try to build a 75 TB SSD, they'd need to put roughly 75 GB of DRAM in it. If somebody went and built a 300 TB SSD, they would need to put 300 GB of memory in it. There isn't room in that thing for 300 GB of memory. One of the things that actually helps us here is that DRAM hasn't been improving lately as much as it used to, either. That memory adds expense, and it adds instability. This thing, we've taken all that out.
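To make that concrete, here is a minimal back-of-the-envelope sketch, ours and not real firmware, of why an SSD that pretends to be a disk needs a DRAM map that grows with capacity. The 4 KiB page and 4-byte map entry are common illustrative assumptions, and they land almost exactly on the roughly 1:1,000 DRAM-to-NAND ratio Coz describes:

```python
# Toy sketch (illustrative only, not real SSD firmware).
# Flash can't overwrite in place, so each rewrite of a logical page
# lands on a fresh physical page, and the drive must remember every
# redirection in a logical-to-physical map held in DRAM.

PAGE = 4096   # bytes per flash page (a common mapping granularity)
ENTRY = 4     # bytes per logical-to-physical map entry

l2p: dict[int, int] = {}   # logical page -> physical page
next_free = 0

def write(logical_page: int) -> None:
    """Redirect the logical page to a fresh physical page."""
    global next_free
    l2p[logical_page] = next_free   # remember where the data really lives
    next_free += 1                  # the old page becomes garbage to collect

write(1)
write(1)   # rewriting page 1 redirects it; the old copy becomes garbage

def map_dram_bytes(nand_bytes: int) -> int:
    """DRAM needed just to hold the map, one entry per page."""
    return nand_bytes // PAGE * ENTRY

for tb in (30, 75, 300):
    gb = map_dram_bytes(tb * 10**12) / 1e9
    print(f"{tb} TB of NAND -> ~{gb:.0f} GB of map DRAM")
# 30 TB -> ~29 GB, 75 TB -> ~73 GB, 300 TB -> ~293 GB: the ~1:1,000 ratio
```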
For DirectFlash, because we've removed all the mapping layers and removed the garbage collection, all of that is done in Purity, in our main controllers, not in the DirectFlash modules. That makes this simpler. It means it doesn't need that DRAM. We need, roughly speaking, 1 byte of DRAM for every million bytes of NAND, and that gives us a big advantage in cost, but also a big advantage in simplicity and reliability. When I talk to a customer about putting 75 TB in one device, the first thing they think is, "Oh, my God, when that fails, is that disruptive?" Well, it fails a lot less often, because it is much simpler. Roughly speaking, we have about one-sixth the failure rate of an SSD, and SSDs actually fail less often than hard drives.
That allows us to go larger, because we have fewer failures and less disruption to deal with. It's more consistent, because it's not remapping, because it's not doing things inside. I don't try to read some data and find that the drive's off doing garbage collection and waits half a second before it gives me back data. For all the speed that people talk about with flash drives, they do have that inconsistency. Every once in a while, you do a read, and the drive's off garbage collecting, and it goes to sleep for half a second before it gives you back the data. This thing doesn't. That simplicity, that greater reliability, allows us to scale. In addition, we talk about 75, and then there's this. Now, this is a 3D-printed model of a 150 TB DirectFlash module.
We will ship this next year. Same flash chips as that 75, we've just managed to put two boards of them in there. You'll notice there's a little bit of horizontal space here. We've managed to fit a second board, same flash chips, just more of them: 150 TB. We don't have to try to invent some new physics like the disk drive guys are going through now. We're just going to scale with that. In another year after that or so, I'm going to go buy a new phone, and my kids take a lot of pictures. They want more memory in their phone. Guess what? That phone is going to have bigger memory, or I should say, bigger storage capacity, thus a denser flash chip, because there's no more room in the phone.
The flash vendors, driven by those consumer devices, are building denser chips. A year after the 150, around the end of 2025, we'll be able to ship a 300 TB DirectFlash module. It's not going to stop there. We'll go from 300 to 600 a little bit after that. Again, it's not coming from new physics. It's coming from what the flash vendors are doing, just stacking more layers. You've seen all their roadmaps. I think it was SK hynix that just shipped a 238-layer flash chip, and they all have roadmaps that go up to 400, 600, 800 layers of stacked NAND. We're going to grow at scale. Those jumps sound astounding, but they're actually not.
If you think about it, disks went from 1 TB to 2 TB to 4 TB. Memory DIMMs: 4 GB, 8 GB, 16 GB. We're kind of computer nerds. We like to double things. Going from 75 to 150, to 300, to 600, to a PB and beyond sounds like huge jumps, but it's really just the standard thing we do all the time. It's the Purity software that enables us to drive that, and it really changes the face of things. Think about it: at 300 TB, when a disk is maybe, let's say, 30 TB, that's 10 times fewer devices. The devices weigh one-third as much. That's 30 times less e-waste. They last twice as long. That's 60 times less e-waste. Ten times fewer devices, and the devices don't use as much power.
That's more than 10 times the power benefit. As we go through these generations, because disks are growing so slowly now in size: today, call it a 24 TB disk versus a 75 TB DirectFlash module, and we have a 3x advantage on device count, on power and so forth, just inherent in the system, forgetting everything you put around it. Next year, we're at 150. They go to, let's say, 28, but we'll call it 30 to make the math easier: a 5x device advantage. The year after, we go to 300, they go to 34: call it a 9x advantage. Maybe a couple of years after that, we go to 600, they get up to 40, and we have a 15 times advantage.
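A quick sketch of that device-count arithmetic, using Coz's round numbers for both the DirectFlash roadmap and the assumed disk capacities (these are illustrative figures from the talk, not a vendor roadmap):

```python
# Device-count advantage per generation: how many disks it takes to
# match one DirectFlash module, using the round numbers from the talk.
roadmap = [
    ("today",    75, 24),   # (when, DirectFlash TB, assumed disk TB)
    ("+1 year", 150, 30),
    ("+2 years", 300, 34),
    ("+4 years", 600, 40),
]

for when, dfm_tb, disk_tb in roadmap:
    print(f"{when}: {dfm_tb} TB module vs {disk_tb} TB disk "
          f"-> ~{dfm_tb / disk_tb:.0f}x fewer devices")
# today: ~3x, +1 year: 5x, +2 years: ~9x, +4 years: 15x
```

The e-waste multipliers stack on top of the device count: one-third the weight per device triples the benefit, and twice the lifetime doubles it again.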
It is a growing advantage, because flash is on a much greater exponential improvement curve, and disks have fallen off theirs. That's the reason we're so confident that the time is now, that disk is just done, and flash is going to wipe it out and replace it in all of those places. It's less power, less space, less e-waste, better reliability, better performance. When you look at the TCO and factor in all of that space and all of that power, it's not about how much the raw media costs, it's about what it costs to have it, and disk just has no chance to keep up. With that, I'd like to ask Charlie to come back up and continue.
We've seen the exponential technology curve. What does it mean commercially? Well, storage, as you might imagine, is a big market, and whenever you have a big market, it's layered quite a bit, and complex. If we lay it out as a pyramid, you see your most mission-critical and business-critical apps, so-called primary storage, on top, then slowly going all the way down to what might be called cold storage at the bottom, all the way to tape, right? Which is the coldest of the cold storage. In between, we generally call it tier two, and there are several different layers to tier two.
The highest layer, meaning the most performant layer of tier two, tends to be called nearline. We're at that level today in terms of pricing. I did something, for the first time ever in my entire career, not this past quarter but the one before: I announced a price on an earnings call. I mean, you just never do this. You've basically just given away the top of your price range on an earnings call. Everybody gets to negotiate against that, right? There was a reason why we did it, and that is because nobody really believed that flash could get to the price of hard disk.
By announcing it specifically on an earnings call, we said, "No, it has, in fact, now reached the price of hard disk in this secondary tier." If we follow what Coz just said, by next year, we'll be at the next layer down, and by the year after that, at the next layer, in fact, at the same pricing that we estimate hyperscalers pay for their storage on a per-gigabyte basis. This is why we're quite confident in flash, and in particular Pure flash, because we will be three years ahead of the price curve for SSDs, and that's a sustainable advantage. In other words, three years from now, we'll still be three years ahead of the price curve for SSDs.
All right, the E series, which we recently introduced, really fills in this teal color here at the bottom. We couldn't previously address the lower-performance, higher-capacity, lower-priced market held by disk, but now we can. This really changes our profile as a company. Up until now, customers had to buy from our competitors at that lower end of the market, which was 50% or 60% of the dollars. We never gave our customers the choice of standardizing on Pure across their entire estate. Now they can: everything from the highest performance to the lowest price, on all three major protocols, block, file, and object.
In addition to that, if you look at our track record, we are well known as the leaders in high-performance storage. We've sold into well over 100 AI environments now, and now we're going after that low price level as well. We're especially proud, I have to say, of our environmental performance, and for us, it really boils down to something very simple. Compared to our competitors, up to one-fifth the power and space; compared to disk, one-tenth the power and space of the existing environment. I mean, this is substantial.
Follow the math: about 2% of the world's power generation is used in data centers, and about 20% to 25% of the power in a data center goes to storage. That storage is almost all disk in the hyperscale environment today, and even in the enterprise, the majority of data is on disk, because mass storage, like its name says, is the bulk of the data. Remove 90% of that, and you're basically giving back that whole 20% of both power and space, in fact even more space than power, to the customer. I mean, I have to tell you, data centers work on removing half a percent of their energy use through all kinds of difficult engineering.
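A one-minute version of that arithmetic, using the approximate percentages from the talk:

```python
# Back-of-the-envelope, using the talk's approximate inputs.
storage_share = 0.225  # ~20-25% of data-center power goes to storage
flash_savings = 0.90   # flash at ~1/10 the power of the disk it replaces

print(f"~{storage_share * flash_savings:.0%} of data-center power recovered")
# -> ~20% of data-center power recovered

# For scale: data centers draw ~2% of world generation, so storage
# alone is on the order of 0.4-0.5% of the world's power.
print(f"storage ~= {0.02 * storage_share:.2%} of world power generation")
# -> storage ~= 0.45% of world power generation
```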
Imagine, with one change, you can recover 20% of your data center power. I mean, this is really quite substantial. That's where our credibility comes in on the environmental front. There will be a lot of companies greenwashing this, because if you plant enough trees, apparently, you can make up for all of the electricity that you burn, right? I think it's much better if you just use less electricity in the first place. We should still plant the same number of trees, just not burn as many of them. Less than one-fifth the power, as Coz mentioned, less e-waste, lower operational costs, and much higher reliability, many nines of reliability.
That is less than five seconds of downtime a year on average per array, which is a pretty amazing number. Less than one-fifth the space, so we recover a lot of space. This is, in my book, the best that can be done environmentally. I'll just take us back to why these advantages are sustainable. It all rests on our Purity operating system, which is where we have the DirectFlash management. Because we use it on all our products, it makes them highly consistent. Then finally, I'll just highlight the Evergreen life cycle, which was built in from day one.
It's hard to replicate, hard to bolt onto the side of something that already exists. Thank you for your time. I'm going to introduce our next set of speakers. Shawn Hansen and Amy Fowler are going to come up and speak about our latest set of products. They are each general manager of one of these two great product lines: Amy of our FlashBlade business unit, Shawn of our FlashArray business unit. Shawn and Amy, please come up to the stage.
Thank you so much.
Thanks, Charlie. Okay, I'm doing the clicker today, right?
Yes, do the clicker.
Okay. All right, I thought we'd take a quick second and introduce ourselves in a little more detail. I think you've seen Coz and Charlie many times. Just for a little more context, my name is Amy Fowler. I run the FlashBlade business unit here at Pure, and I've been here almost 10 years; my 10th anniversary is coming up soon. I've been with FlashBlade really since even before it GA'd. It's been a really exciting ride, and especially this past year has been just amazing in terms of what we're bringing to market. I'm super excited to share more of that with you all today.
Welcome. Happy to be with you. Shawn Hansen, GM for FlashArray. I've been here about three years, not as long as Amy. When I first met Coz and Charlie, I had just come off being a product manager for three different platforms that were successful in the industry. When I first met them, I was blown away by the potential of the platform. It's very rare to have a very successful platform with a ton of leverage, and I imagined what it would take for the benefits of Evergreen, and everything Coz has shared with you, to expand into all the other markets.
We hope to have a bit of an informal chat with you about the growth opportunities that result from our expansion into these markets. From our side, we're going to walk you through two adjacent markets. Besides the E side, we're going to walk you through what's happening with expansion into the file, or filer, market, and also the adjacent market in the cloud. Amy, let's kick this off.
Yeah.
It's great to share a stage with you. Maybe you could walk us through the accomplishments you're most excited about from this last year?
When you think about this as a whole, we're really just focused on addressing a broader set of our customers' business and data center requirements, right? We want to extend Purity into more use cases, and that's what we've been focused on delivering. From a FlashBlade perspective, we've been delivering fast file and object, actually unified fast file and object. We were the first to bring that to market, many years ago now, and it's broadly used in technical computing, analytics, and AI-type use cases like Charlie just mentioned. Last year, almost exactly one year ago at Accelerate, we were really excited to introduce our next generation of FlashBlade, the FlashBlade//S, which has a modular architecture that provides a lot more flexibility and scale, and all of the Evergreen forever benefits.
It also uses those same DFMs that we've been using in FlashArray for a few years, so we broadened the consistency across the platform lines. With that modularity, we actually disaggregated the compute: where the blades used to have both the processing and the storage on them, we pulled those apart so that we can scale them independently. That lets us leverage the same building blocks and also open the door beyond super-high-performance fast file and object, into that secondary tier we've been talking about a lot this week. That's what enabled us to introduce FlashBlade//E.
I know we're going to talk about that a lot more in the next few minutes, but Shawn just mentioned file, and we've been doing unified file and object on FlashBlade for a while. I think it'd be great for us to talk a little more about...
Sure.
...the other unified that we're doing these days, especially as it's allowed us to win some tremendously large international deals right out of the gate.
Awesome. Now, in the last couple of years, if I were to characterize the most common conversation with a customer, it sounds like this: "Shawn, Pure has changed my life with simplicity and Evergreen. I've gotten my weekends and my nights back," and the storage infrastructure manager or the IT director is full of an almost emotional response to the impact that Pure has made in their life on the block side. Then they say, "But what about this other thing? I have this other thing in my data center that causes me a huge amount of pain. There's 30 years of complexity built up in it. It's not simple. It's disruptive. Can't you help me replace this thing?" Typically, they wave over at their filer, which is the NAS side of the market.
We've worked really hard in the last couple of years to figure out how to take the advantages of Evergreen and the simplicity of FlashArray and Purity in general, and extend them into this new market. Last month, we were really pleased to announce the general availability of what we believe is the industry's first unified block and file platform. We really tried to do it the Pure way. We're not trying to bolt one onto the other or cobble these things together, but really address this core vision of simplicity. Now, if you're unifying block and file, the way people usually think about it is, "I get CapEx savings by placing them on the same platform." Really, the savings need to come from operating expenses.
If you just couple these two worlds together, the problem and the complexity become even greater. That's the most common complaint we hear from our customers. We announced, and it came up in the last earnings call, that we landed one of the largest international deals we've ever had in our history, with a missile defense system for a large country outside the United States. What happened in this case is they needed both file and block, and we were very excited to deliver that. This, we believe, is an example of taking one step to the side into an adjacent market, a large growth market, where the platform has great leverage. We believe we can do this over and over again with multiple adjacent markets. We're super excited about that.
Let's go back to another large adjacent market. Let's go back to the E.
Back to the E. Yes. All right. It's one of my favorite topics these days, because, as Charlie mentioned, there's so much disk-based data out there for us to go after. There are so many petabytes that just continue to live on disk. The fact is that there are so many data sets, so many parts of applications, that don't really need the performance of all-flash, right? Customers were not going to pay a premium for all of that performance when that's not really what they needed. Our ability to deliver all of these compelling power, space, and cooling savings, along with the higher reliability that Coz explained a minute ago, with all-flash at an acquisition price that is comparable to disk, really is a huge game changer for everyone.
The reason I love talking about it so much is that people are really excited about it. We saw a very fast uptick. That said, as we also started talking through, FlashBlade is really optimized as a scale-out architecture for larger scale, right? In the case of FlashBlade//E, 4 PB and beyond. I don't know if you remember this, when Coz and I called you to talk about the fact that we were going to do FlashBlade//E, and we were already thinking, there are going to be customers who want a smaller form factor, a smaller system, and that was, I think, probably the first breadcrumb towards...
Yeah
...towards FlashArray//E, right?
Yeah, that's right. Can you imagine being on a product team where these large projects usually take 18 to 24 months, easily, and you get a call saying, "How fast can you do this new project, FlashArray//E, and drive the entry point down?" This is really a testament to what Charlie talked about: the simplicity, consolidation, and consistency of Purity. From end to end, in a low single-digit number of months, we were able to release, and we just announced, the FlashArray//E, which pushes that entry point down to 1 PB. Many of our customers want to get in at that level.
They have several islands of storage they want to consolidate, and we think this really shows, for the first time, a contiguous line from 1 PB all the way up to 20+ PB with a consistent experience. This is a great example of how we are extending the conversation beyond one workload. One of the most consistent things I've heard from analysts throughout the show over the last couple of days, and I met with many of them, is: "Pure, we've known you as a single-point solution for so long.
You did such a great job reinventing this space, but now it feels like you have this portfolio sale across the entire data center." Really, what it's done for us is open up great strategic conversations with our customers at much higher levels, which is really important as we start to rise and grow into the enterprise and stretch both up and down. While the death of AI might be our-
Oh.
Death of disk might be our favorite topic; AI is the other great favorite topic, so maybe we can talk about that.
Just to elaborate a little more: AI in the enterprise, which is a big place we're focused, is really all about speed and scale, but also, very importantly, simplicity. There's a lot to do to get these systems stood up, and the simplicity part of our value proposition has been a huge reason that we have over 100 customers leveraging us for this use case. I feel like we actually started the AI party, or joined the AI party, really early, many years ago, because of our AIRI solution, that's our AI-Ready Infrastructure with NVIDIA. I mean, we announced that, I think, five or six years ago. And we've made it even more relevant with the introduction of FlashBlade//S and, therefore, AIRI//S as well.
We've talked a lot about Meta, which is a great example of our ability to deliver at massive scale, but we're really excited to see continued success across this broad range of verticals and use cases. Oh, oops. Take me back. I hit the button by accident. A great example of this is a university medical center, right? There's one in the Midwest that is doing training. They're training their models with thousands of CT scans, and it used to take them six to eight months to do this; now they can do it with FlashBlade in six to eight days. Pretty remarkable.
We also keep their GPUs busy. If you haven't heard, GPUs take a lot of power, so the power efficiency of FlashBlade, and of our entire portfolio, is extremely compelling in this space. Another one is MediaZen. They develop voice recognition technology for various industries. In their use case, some of the tasks that involve speech-to-text modeling, which used to take 12 months, can be done in only two weeks. They're leveraging, again, the performance, the flexibility, and the scalability to deliver so much faster for their customers. That's the AI hot topic, but I think the other one that's usually at the top of the list, maybe right after that, is cloud, right?
Right.
You know, there was a lot of discussion about this recently, also in the last earnings call. Let's talk a little more about Cloud Block Store.
Wonderful. Cloud is yet another adjacent market for us. Just to rewind the clock a little: I was an early Azure product manager, back in the early days when it was first getting going, and there was a common conversation around the table at Azure that enterprise workloads would be the last workloads to move to the cloud. Why is that? Enterprise workloads have a huge moat. There's a lot of features around high-end, mission-critical workloads that simply aren't about the scale of commodity components. They're about reliability, about a different kind of scale, and they're cost-optimized in a big way. We had this hypothesis when we began that Purity, as an operating system, had many enterprise features that would offer value in the cloud.
An example of that is a top-500 healthcare leader who had an interesting idea. They came to us and said, "We currently have a run rate of 11 PB a year in Azure storage." That was their snapshot at that point, and it was actually growing very quickly. They had a challenge, in that Azure storage does not have data dedup. Basically, they don't have the ability to reduce data in their cloud; they haven't optimized for that. That's an enterprise feature. We're sitting here on the stage with you talking about enterprise storage, where that feature has been table stakes for a long time, but that feature set has not made it into the cloud.
This customer came to us and said, "What if you were able to take that and make it available in the cloud?" We already gave them the ability to do encryption in the cloud, so they took that and then saw what they could do with data deduplication. Almost immediately, we found with early data sets that there was a five-to-one reduction ratio. Almost overnight, they saw this huge benefit of being able to reduce their storage footprint, and that immediately saved them about $10 million a year. It's a great value proposition to be able to cut almost half the cost out of that. The next question, obviously, is: is Azure happy with that?
Do they want to see that kind of reduction in their revenue? Well, they're a great partner, a great strategic partner, and they're very strategic with their customers. They told us two things. One: they want these enterprise workloads to come over from on-prem into the cloud. They said, "This is the last workload. We've been working on this." Their competition is actually not themselves; their competition is the optimized workloads that exist on-prem. They have to address the fact that we've already optimized the cost out of those things in order to move them over. The second is that freeing up this spend in their EAs, their enterprise agreements, allows customers to move that spend to other high-value services as well.
We think this is the beginning of the process. We're just exploring this, but we think there is great value in taking one step over and moving these workloads using enterprise value-added features in the cloud. One thing I just want to illustrate, now that we've talked about this rapid introduction of things that come as a result of the platform: this is leverage. In the beginning, we had Purity working in a block environment, and in the last 12 to 18 months, we've been able to introduce new platforms at a breakneck pace. It's been amazing. Eighteen months ago, we introduced the XL. We have FlashBlade//S. We had the introduction yesterday of the fourth generation of FlashArray//X and //C. We have FlashArray//E.
We have these new platforms in the cloud being deployed. This ability to move rapidly and gain leverage as we move up and down, from the top end, through tier one and tier zero, down through the bottom of the pyramid, means you're going to see more and more of this as we gain acceleration from this amazing platform. One last thing to bring up: Charlie talked about Fusion a little bit. Fusion is our cloud control plane that allows you to bring the cloud operating model to customers behind their firewall. It basically hides the underlying infrastructure and gives someone the experience of rapid provisioning and the ability to deploy advanced data services, the most simple being load balancing, all the way up to more sophisticated things.
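To give a flavor of what that cloud operating model means in practice, here's a purely hypothetical sketch; none of these names or calls are Pure's or Fusion's real API. The admin declares what they want, and a control plane decides placement, with simple load balancing as the trivial case:

```python
# Hypothetical, simplified control-plane sketch (not Fusion's real API):
# the admin declares a volume; the control plane picks the array.
from dataclasses import dataclass

@dataclass
class VolumeSpec:
    name: str
    size_tb: int
    storage_class: str  # e.g. "performance" or "capacity"

def provision(spec: VolumeSpec, fleet: list[dict]) -> str:
    """Place the volume on the least-utilized array offering the class."""
    candidates = [a for a in fleet
                  if spec.storage_class in a["classes"]
                  and a["free_tb"] >= spec.size_tb]
    best = min(candidates, key=lambda a: a["used_pct"])  # load balancing
    best["free_tb"] -= spec.size_tb
    return f"{spec.name} placed on {best['name']}"

fleet = [
    {"name": "array-1", "classes": {"performance"}, "free_tb": 50, "used_pct": 70},
    {"name": "array-2", "classes": {"performance", "capacity"}, "free_tb": 900, "used_pct": 35},
]
print(provision(VolumeSpec("oracle-logs", 20, "performance"), fleet))
# -> oracle-logs placed on array-2 (the less-utilized eligible array)
```

The point is the shape of the interaction: the customer describes intent, and the pool, not the admin, deals with which array hosts the data.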
We've got great plans for Fusion across our portfolio to do this. I'm very excited about that, but I think, Amy, you're probably in a great position to share your thoughts about Fusion in the future.
From our perspective, it's been tremendous to see Pure Fusion delivered for those block workloads, but we certainly want to deliver it across the portfolio. Our teams have been collaborating extensively to develop a common file model across the unified file part of FlashArray that Shawn talked about earlier, the fast file we've had on FlashBlade, and object as well. The ability to deliver that end-to-end cloud operating model across the entire portfolio of Pure's platforms is something we're really looking forward to as the collaboration continues and we bring it to market. We don't have a date yet, but that's a big area of collaboration between our teams.
Great. Wonderful. 100%.
I think that is everything we had.
Thank you so much.
We are going to bring up our good friend and scuba diver extraordinaire. That's better than what I told you I was going to say. Rob Lee.
Thanks, guys. You'll excuse me, I've been talking a lot this week, so I'm bringing a drink up here with me. Thank you all for coming all this way to see us. Like Charlie said, I know you have busy schedules. We really appreciate the time. I'm going to try to wrap us up here. We've heard a lot today, starting with Charlie and, certainly, Coz, in terms of where we're heading and the efficiency roadmap, and Shawn and Amy walking us through some of the exciting growth areas in the product portfolio. I'm going to start by going back to our four unique, sustainable competitive advantages. Really, the four corners of our moat, if you will.
I'm going to spend a few minutes diving into each of these areas in a bit more depth and hopefully drive a better understanding with you all in the community of why we're so convinced that these are not only meaningful advantages but, more importantly, sustainable over time. I'm going to start with DirectFlash management, focusing on the software elements. I think Coz touched a lot on the hardware. You got to see the density, you got to see the mock-ups of where we're headed. He did make a comment I want to start with, which is that the hardware is only one piece of it. It's actually, I would say, a minority piece of it.
What enables the hardware, what enables that roadmap, is the software that we've built over 10, 11, 12 years, that makes that hardware possible. I'm going to start by discussing a little of what the rest of the industry has to cope with to work with flash: everybody else that's working with SSDs, everybody else that's stuck on the SSD roadmap. As Charlie mentioned, flash, at its core, almost at a physics level, behaves very differently than disk. It has to be treated very differently. The way an SSD operates, it has to solve all of those problems within the SSD, within this small form factor. Coz gave a great example, right?
You want to write data to a certain block on flash. At some point later, you want to overwrite that block. You can't just overwrite it. You have to write it somewhere else. You have to leave yourself a little note, a little pointer, if you will, and remember that you did that. At some later point in time, you've got to go clean that up. With Purity, with our DirectFlash approach, we can do that in the Purity software running on our controllers, our array controllers, our blades. Everybody else is forced to do it within the SSD. Let's take another example.
As the consumer industry keeps pushing for denser and denser chips, the bits are getting a lot closer together, right? You've got to do a lot more work to keep the data that you wrote healthy. It turns out that if you write data to a die, a chip, within flash, and then just read data from one place enough times, you'll jiggle the electrons enough, you'll change the voltages enough, to disturb the data that's written next door to it.
You've actually got to keep track of how many times you've read data from one place, and have that drive how you go rehabilitate and move around the data in the flash that's sitting right next to it. Very, very complex, all right? And where does everybody else have to solve that? They've got to solve it within the SSD. That's more work to put in there. We get to do it at the system level. As each generation of flash progresses, as market demand drives denser and denser flash, the bits get closer and closer together, and that problem just magnifies. How do the SSDs cope? They've got to do all this work in very, very complex firmware, as Coz mentioned.
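To make that bookkeeping concrete, here is a toy sketch of read-disturb tracking; it's our illustration, with a made-up threshold, and real controllers and Purity are far more sophisticated. Count reads per block, and when a block runs hot, relocate its neighbors' data before the disturbance corrupts it:

```python
# Toy read-disturb management (illustrative only; the threshold is
# made up, and real flash management is far more sophisticated).
from collections import defaultdict

READ_DISTURB_LIMIT = 50_000  # hypothetical reads before neighbors are at risk
read_counts: defaultdict[int, int] = defaultdict(int)

def relocate(block: int) -> None:
    # Placeholder: copy the block's data to a fresh location and remap it.
    print(f"relocating block {block} before its data degrades")

def on_read(block: int) -> None:
    read_counts[block] += 1
    if read_counts[block] >= READ_DISTURB_LIMIT:
        # Heavy reads disturb the charge in *adjacent* blocks, so the
        # data next door must be rewritten, then the counter reset.
        for neighbor in (block - 1, block + 1):
            relocate(neighbor)
        read_counts[block] = 0
```

An SSD must squeeze this logic, and the state behind it, into the drive's own firmware and DRAM; DirectFlash lets Purity keep it at the system level.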
You need significant resources to run that firmware. You need a lot of DRAM, and it turns out you also need a whole bunch of extra flash, because to do this work, you need scratch space. As you're moving data around, as you're rehabilitating and rewriting data that was written on flash here, you've got to write it into a temporary scratch space and then move on. That overprovisioning, that setting aside of additional flash, is just straight-up, off-the-top inefficiency. It's a tax that SSDs are forced to pay in terms of the flash they deploy. It's flash they have to put into the form factor that cannot effectively be used by the end application, in this case, our competitors' software systems.
The last thing I'll mention is that because we have pulled all of this hard work out of the SSDs and up into the system software, we get to solve problems once, and we get to do it globally. Everybody else is relying on commodity SSDs that solve that problem throughout their system multiple times. Charlie brought up a great comparison before, between Blockbuster and Netflix. One of the differences between them is that Netflix got to solve the inventory management and distribution problem once, centrally, way more efficiently. Blockbuster had to do it in every store. The same thing is going on here: these SSDs have to solve that problem all over the place in an extremely inefficient manner.
We get to do it once, and we get to do it globally. That's why, when we talk about the densities we can push, and when we talk about the expansion of our advantages in efficiency, performance, and reliability, we're so convinced that, to borrow Charlie's words, not only do we have a three-year lead now, we're going to have a three-plus-year lead three or four years from now. We've been very vocal at this conference, and really over the last quarter or two, in our opinion of disk: hard disk is done. What we're now seeing, and really trying to educate the community on, is that SSDs are really not the answer either.
SSDs have been the technology industry's coping mechanism for having to deal with flash. We've found a better way. Let's move forward. Let's talk a little bit about the highly consistent portfolio. This is the second unique competitive advantage we have, and I want to dive a little into the software. Unlike Coz, I don't have props; software is a little harder to show. But I want to show what's going on with the software in our system. As we've discussed before, we have one hardware technology, which you've seen, DirectFlash. What makes that DirectFlash technology work is the common Purity operating system. What does Purity do?
Let's start at the application, or customer-visible, level. At its most basic, we provide storage. We make it accessible through industry-standard protocols, be it block, file, or object. Because it's a shared and common Purity operating environment, we make those protocols available and unified across the various platforms we support. In order to serve those protocols and store customers' data, we provide a number of services: data reduction, data protection, management of the system, availability, reliability. At the lowest levels, we translate all of that into the work of flash management. Now, one of the things that's important to realize is that every layer of this stack is designed inherently and natively for flash.
Well, what does that mean? Let's take a look at data reduction. One of the things that we've pioneered since the early days, our first products, is this idea of deduplication. You store a file; your colleague stores a nominally very similar file. If I can recognize that those two files are very similar, that they have a lot of the same content with just a few words changed, I don't have to store a whole second copy of the file. I can just store the words that were changed, store a pointer, and remember that your copy has a couple of words changed.
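As a toy version of that idea, here's a minimal chunk-hash sketch; it's our illustration, and Purity's actual data reduction pipeline is inline, variable-block, and far more advanced. Identical chunks are stored once, and each file keeps a list of pointers:

```python
# Toy chunk-level deduplication (illustrative only).
import hashlib, os

CHUNK = 4096
store: dict[str, bytes] = {}      # content hash -> unique chunk
files: dict[str, list[str]] = {}  # file name -> ordered chunk hashes

def write_file(name: str, data: bytes) -> None:
    refs = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # store each unique chunk once
        refs.append(digest)              # the "pointer" to shared content
    files[name] = refs

def read_file(name: str) -> bytes:
    return b"".join(store[h] for h in files[name])

# Two nearly identical files share almost all of their chunks:
original = os.urandom(40_000)
write_file("yours", original)
write_file("colleagues", original[:-10] + b"edited bit")
print(f"{len(store)} unique chunks backing "
      f"{sum(len(r) for r in files.values())} chunk references")
# -> 11 unique chunks backing 20 chunk references
```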
It turns out that if you're a legacy competitor whose software was designed for disk, you don't make the same design decisions for data reduction. Look at how you optimize for disk behavior: the disk has a spindle, it spins, right? It's not a tape; the magnetic medium basically spins around under a head. In order to optimize for performance, you need to lay out data sequentially. As it spins around, you put the head there, you slurp up all the data, and you send it out.
Okay, well, that kind of layout goes directly against chopping up your data, managing pieces and chunks, and remembering that part of your file is somewhere else because it's a duplicate. You don't design data reduction the same way. Conversely, if you have software that's heavily optimized for disk, you really can't deliver performant data reduction. Let's take an example in availability. All enterprise systems have to protect against potential hardware faults and failures; people can pull drives out. When you design software for magnetic spinning disks and you pull a drive out, well, now, in order to rebuild the data on that drive, you've got to go light up every other drive and spin them like crazy.
They're creating all kinds of performance spikes; you're destabilizing the system. With flash, because we have highly parallel access, because we control what we're doing with the flash down to the die level, we can fine-tune that process in ways that, to Coz's point, drive higher reliability and higher availability. Every layer of this software stack, even above the ins and outs and bits and bytes of managing flash, is purpose-built and heavily tuned for the flash environment. What does that mean? Well, we had the advantage, when we entered the market, of only contemplating building for flash. All of our competitors are starting with software stacks that were designed for disk. They haven't been able to retrofit that software to deal with flash.
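A back-of-envelope sketch of the rebuild contrast described above. All throughput and stream counts here are illustrative assumptions, not vendor specifications; the point is only that many parallel contributors shrink rebuild time and per-device load.

```python
# Rough rebuild-time comparison (illustrative numbers only).
# A failed device's data must be reconstructed from its peers.

def rebuild_hours(capacity_tb, streams, mb_per_s_per_stream):
    total_mb = capacity_tb * 1_000_000
    return total_mb / (streams * mb_per_s_per_stream) / 3600

# A 20 TB disk rebuilt from a handful of busy spindles, each
# contributing limited bandwidth while still serving host I/O.
print(f"disk : {rebuild_hours(20, streams=10, mb_per_s_per_stream=50):.1f} h")

# The same capacity rebuilt from many flash dies/devices read in
# parallel, each contributing a modest share with no seek penalty.
print(f"flash: {rebuild_hours(20, streams=200, mb_per_s_per_stream=50):.1f} h")
```

With these assumed numbers the disk rebuild takes roughly 11 hours of heavy spindle traffic, versus well under an hour spread thinly across parallel flash.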
They're relying on the SSD retrofit to do that translation, and I've walked you through why the SSD translation is going to fall behind. This is a lot of work, and they don't have it. We've done this hard work, and we've done it once, to borrow a term that Shawn just used. We have leverage to be able to deliver this across two platforms, a scale-up platform with FlashArray and a scale-out platform with FlashBlade, and, as Shawn mentioned with his data reduction example, a cloud platform on top of AWS or Azure with Cloud Block Store. One software base; two hardware architectures, scale-up and scale-out, plus a cloud architecture back end; all driven by the same management paradigm. Whether you're managing our products through command-line interfaces, APIs, Pure1, our SaaS portal, or Pure Fusion, you get one experience.
This is something that our competitors simply can't reproduce, and it's not something you can just go out and buy. If you acquired your way into six or eight different products, you can't make them the same, or it's really, really hard. All right, number three: the cloud operating model. I want to start by putting this in the context of Pure's broader cloud strategy. We really see it as four pillars. How do we help customers run like the cloud everywhere? How do we help them run better when they're in the cloud? How do we help them build for the cloud natively? And how do we power the cloud directly with hyperscalers? I'll talk a little bit about running like the cloud.
Really, what we're trying to do here is give customers the best attributes of the cloud experience. What do they like about going to the public cloud? When I talk to customers, they like the flexibility. They like the agility, they like the consumption models. They like the fact that they can describe what they want without having to manage the ins and outs of delivering it. When a customer goes to AWS, they order up a service; they care deeply that they get the SLAs and the performance they're paying for. They care that there's good hardware behind it, but they don't really want to know its name. They don't want to have to manage it at the bits-and-bytes level.
We want to deliver that same experience to customers on their own premises. They want services that just get better over time, transparently. Again, if you go to AWS or Azure and sign up for any of their infrastructure services, they're doing hardware modernization and software updates behind the scenes; the customer isn't managing it. That's the experience we're developing with Pure Fusion, as Shawn and Amy walked us through. All right, let's cap off with Evergreen. As you saw in the video up front, Evergreen is essentially our promise, our architecture, and our business model to ensure that customers who start with Pure are never left on an obsolescence roadmap.
They're never left obsolete, they're always modernized, and we're able to deliver that completely non-disruptively. Now, Coz and Charlie walked us through the dramatic improvements we've made over time in density and efficiency. Evergreen is our vehicle to deliver those improvements, both over the past decade and into the next decade, completely non-disruptively. As an example, just to illustrate some of what Coz talked about: 11 years ago, in 2012, 5 PB of FlashArray would have looked like that, about 26 racks of storage. Today, we can deliver that in five rack units. Next year or the year after, it will be even smaller. That's roughly a 500x improvement in space efficiency.
You've got hundreds of new software features and orders-of-magnitude improvement in performance. Evergreen is our vehicle to deliver that to customers completely non-disruptively. Now, if you go back to 2012, to be honest, nobody was really buying 5 PB of flash. So what does this look like from an individual customer's journey? The core promise of Evergreen is that we're never going to leave a customer obsolete. Our products are always going to be modernized, and we keep that customer up to date, non-disruptively, without ever having to take downtime.
Compare this to the industry standard, to what basically all of our competitors force their customers to do: regular cycles of forklift migrations. Customers make an initial purchase and run for some period of time, and at the end of that period, typically three, four, five years, that asset falls off of support. Before that happens, they have to buy the new thing, take a disruptive event, and migrate to the new thing. They have to pay double support, by the way, during that overlap. That buys them a few more years until they get to do it again, and again, and again, ad nauseam.
Oh, and by the way, that's one array. If you're managing a larger fleet, you've got tons of assets constantly running on this treadmill. This is a terrible, terrible experience. Now, from our point of view, we love this, because every one of these is a rebid opportunity, a sales opportunity for us. We never return that favor. With Evergreen, we've completely flipped this on its head. Charlie mentioned an early customer before. We took a look at what that customer bought and what that environment has looked like over time. That customer actually started with us on one of our pre-production units back in 2012, an FA-300. We didn't even have snapshots at the time.
That customer bought, and over the years has gotten dozens of major new software features and hundreds of smaller ones. They've been through a series of non-disruptive software upgrades and full non-disruptive hardware upgrades. Across the years, we've modernized them across four different generations of FlashArray, modernizing the controllers. At some point, they were able to take advantage of denser drives, consolidate some of their capacity, shrink their footprint, and store more data in the same physical space. They're now on the latest generation of FlashArray, never having been forced to take downtime.
Getting back to the previous slide: versus our competitors, who present us with a rebid opportunity every three or four or five years, this is a customer we've never put back on the market. All right, I've gone way long, so let me bring this back to the top. Hopefully this gives you a little deeper insight into the four unique, sustainable competitive advantages we have, the four corners of our competitive moat. With that, I'm going to transition us over to the Q&A section. Paul, do you want to come to the stage?
Thanks, Rob.
Thanks.
Okay, while we transition here to Q&A, I'll echo a little of what Rob said. We hope the session thus far has helped you deepen your understanding of the outcomes we drive for customers, as well as the differentiators behind our four unique, sustainable competitive advantages. Before we begin the Q&A, I'll mention that you do not need to limit yourselves to one question consisting of one part. Feel free to ask as many questions as you'd like; we just ask that you be considerate of others, because we would like to get to many questions today. Finally, I'll remind you that this is a product- and technology-focused meeting, so we won't be addressing financial questions. Let's bring everybody up, and we'll get started.
Kara, do you have your mic there with you? All right. Pinjalim, did you want to start us off? Kara, we'll start right here. We'll go to Ahmed after that. We'll work our way around. Let's start here with Pinjalim.
Hi, I'm Pinjalim Bora from JP Morgan. Thank you, everybody, for the presentation. I wanted to ask maybe one question in three parts, starting with Coz.
Okay.
For-
I can't promise that I'll remember all the parts, so you might have to re-ask part of it.
Sure. Obviously, you have a very ambitious plan to go to 300 TB, and I think you said 1 PB,
Eventually.
Eventually.
Not as far in the future as some people might think.
Right.
But further out than a couple of years.
Right. When you were presenting just now, it sounded like a lot of that would come from higher-density chips as the additional layers come in. I wanted to ask: how much of that is dependent on new flash media as they arrive, like PLC or whatever else is coming? Second, on the Purity OS, maybe this is for Rob: how much change do you need to make as those new media come in? How much can you reuse? What is incremental? Lastly, maybe this is for Charlie. As the price-
This is a three-part question for three recipients.
Yeah.
I love it.
You can remember that easily. Maybe Charlie: as we look into 2025 and the price points get to $0.05, as you were showing, it becomes attractive for the hyperscaler opportunity, right? How are you thinking about that? On one hand, the hyperscalers have that ethos of "We'll do it on our own." Can you break that? How do you think about that?
Right.
Okay.
I think you're first, Coz.
All right, I guess I'll start first. And I'm sorry, Rob, I'm going to leak into some of your material as I go. The doubling from 75 to 150 is just us building more mounting points. The flash vendors-
... We're not counting on PLC until quite a ways out; QLC has a long way to run. The vendors will increase the density of the chips, think of it as probably every couple of years, so we'll just go up every couple of years with that. Maybe it's 18 months, maybe it's two and a half years sometimes, but it's going to run at that kind of cadence. The key difference is, you can look at Western Digital's roadmap for flash, SK hynix's roadmap for flash, Samsung's roadmap for flash. They're all showing, "Hey, we're at about 200 layers now, and we're going to go to 400 and 500 and 600 and 800 and 1,000." They have a clear roadmap.
The disk vendors have been stuck trying to transition to energy-assisted recording for a long time, and there are a lot of problems to solve there. The flash vendors are just going through the same thing we went through for many years with integrated circuits, with DIMMs and DRAM, and with processor chips before that. Instead of shrinking the cell size, they stack more layers.
They're also about five or six nodes behind processors and memory.
Yeah.
They've got an easy path on that.
Yeah. The software does have to evolve a lot. If you think about it, the first FlashArrays were 5.5 TB, and we're now starting to sell FlashArrays that are almost 1,000 times as large. That's raw capacity, no data reduction. The software has had to change a bunch. Right now, we're starting the projects to scale the software to support 300 TB DFMs in two years, because it does take time. There's a lot of work around dealing with the different generations of flash and the different families; we've been doing that for a long time. That's part of it.
One of the things that I think neither Rob nor I expressed adequately as an advantage: the storage subsystem takes in data and remaps it. A customer has a volume, let's assume for a minute it's a block device rather than a file, and they write block 100 of that volume. The storage subsystem translates that: "Okay, I'm going to put block 100 of this volume over here on this device." Then, with an SSD, the SSD says: "Well, I get block 100 of my device, and I do some other mapping." We only do one level of mapping. That's an advantage that I don't think either of us articulated as clearly as we should have.
By having Purity map directly to the flash, we cut out a whole layer. That cuts out a bunch of work we have to do, and it cuts out a bunch of complexity from the entire stack, which gives us an advantage in scaling software. It is a bunch of work to scale the software and have it stay efficient. I don't go tell the engineers, "Oh, you're going to scale that, and you can have twice as much memory," because twice as much memory costs money, and every component you add to the box makes it less stable. You want to build the simplest product you can that gives you the most reliability, and reliability is so important at this scale.
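To illustrate the mapping point, here is a hypothetical Python sketch contrasting the two translation models Coz describes. The table names, device names, and addresses are made up for illustration; they are not Purity or SSD internals.

```python
# Legacy path: the array maps volume blocks to SSD blocks, then the
# SSD's internal flash translation layer (FTL) remaps again to pages.
array_map = {("vol1", 100): ("ssd3", 55)}        # volume block -> SSD block
ftl_map   = {("ssd3", 55): ("die7", "page_42")}  # SSD block -> flash page

def lookup_two_level(vol, block):
    return ftl_map[array_map[(vol, block)]]      # two indirections, two tables

# DirectFlash-style path: the OS maps the volume block straight to a
# flash page, so there is one table to maintain, garbage-collect, scale.
direct_map = {("vol1", 100): ("die7", "page_42")}

def lookup_one_level(vol, block):
    return direct_map[(vol, block)]              # single indirection

assert lookup_two_level("vol1", 100) == lookup_one_level("vol1", 100)
```

Both paths land on the same physical page; the difference is that the single-level design has one mapping structure to update, protect, and scale instead of two stacked ones.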
The only other thing I'd add, Pinjalim, on the other angle of software changes to manage flash: to Coz's point, we've now shipped dozens of generations of flash from multiple vendors. Before DirectFlash, we were already building an understanding of what the SSD was doing internally and building that into our Purity software. What that's given us is the right level of abstraction, if you will, to make the tweaks and tuning needed for each generation from each vendor of flash. For us to pick up the next generation or the next vendor, yes, there's work, but we've done a lot of the general work to make that task much easier.
If any of our competitors or hyperscalers, for example, were to head down the same path, they would have to overcome that hurdle, and it's a significant one, because it's not just about developing the flash know-how once. It's doing it in a way that's repeatable with lower incremental cost, and we've got that figured out, because we've been doing it for 10, 12 years.
Yeah, on the pricing, I just want to remind everyone that when we talk about the $0.05 or the $0.20, there are different performance levels as you go up and down that range. It's not as if all flash storage will be at $0.05; there's always going to be a range of price performance. It was only that flash couldn't get to the lowest part of that range because of a variety of factors, including the price of flash. Frankly, density makes a huge difference in being able to get down to that level. As most people suspect, the hyperscalers are able to get the lowest unit cost from purchasing hard disk.
They benefit from that. On the other hand, for example, they don't have data reduction, which allows us to get to even lower costs. There's a variety of other techniques, in provisioning and elsewhere, that help as well, and we can go through all of this. We always think about this at the system-level cost, and we would argue that on a TCO basis, a total-cost-of-ownership basis, given the way the hyperscalers use storage today, even on, let's say, near-line disk, we're already at a total cost of ownership equivalent to what they have in their own environment; in a customer environment, we're far less than what they charge for those capabilities. We do have to fight against NIH, and we're in that fight.
Now, in a sense, we have to sell at a higher level, because you get to a point, and I've been through similar things at different companies at different times, where management has to believe, one, that not only do we have an advantage today, but that we're going to maintain that advantage for the foreseeable future, and two, that we're an organization they can trust. That's probably the most important element.
Yeah. You won't hear me say this on the earnings call, but did you have any follow-ups to all this?
I'll come back again.
Okay, let's go to Amit here. Then we'll go to Aaron after that.
Perfect. Thanks. Amit Daryanani, Evercore ISI. Thanks a lot for doing this, by the way. Super helpful. I'll go with two questions as well. To start, can you spend some time talking about how customers think about their storage needs when deploying these AI clusters? How is that different from their traditional workloads? Specifically, the big debate everyone tends to have is: can they just deploy raw flash and build their own software around it, or are they better off buying Pure's operating solution all-in? If you could talk about how they contrast those, and whether there's any difference between hyperscalers and enterprise, that'd be really helpful. Then, on FlashArray//E, Charlie, you sort of talked about enterprise mass storage and hyperscalers, the two buckets.
I'm curious, up until now, where is the solution resonating more? Where are you seeing better traction, on the enterprise or the hyperscale side? And do you think you need more offerings around helping customers migrate everything they have on drives over to these FlashArray//E models? I think that's a big choke point they have right now.
Yeah. All right, let me start on the AI question. That's a very rapidly evolving space, and I think we're all a bit underinformed, to be direct and honest. I'm talking now about generative AI and ChatGPT, versus what we're now, after about five years of real use, calling traditional AI. In traditional AI, we have a lot of experience. Customers deploy GPUs, and if and when they deploy them on their existing storage base, as Rob alluded to, they become upset because the GPUs are only about 20% utilized.
They're very expensive on their own, let alone the power. That's why we've developed a great relationship with the NVIDIA sales force: they know that when they work with us, their GPUs will be fully occupied and customers will be happier. That's generally our FlashBlade product that goes into that environment. As you think about generative AI and ChatGPT, there are really two entirely different opportunities and two entirely different environments. One environment is creating the large language models, and we don't actually believe there are going to be a lot of companies involved in building large language models.
The hyperscalers, plus some very large other organizations that want to do it on their own, potentially banks creating proprietary models, will build machine learning environments. A machine learning environment is a whole cluster of GPUs and very fast storage, and we are premier at that, though of course NVIDIA also builds integrated systems with storage built in, so in that sense it's a bit of a competitor. That's for the large language model. Once the large language model is created, you run it against large amounts of data for inference, generally using general-purpose CPUs.
That data doesn't have to move the way data in the machine learning environment does, at hundreds of GB a second; it just has to be available at reasonable performance. That's where we think customers are going to want to bring more of their data in from siloed cold storage, where the machines it sits on today were built only with the performance level their siloed application needed, and make it available in a somewhat more performant environment for analysis. We think that's a great opportunity for FlashArray//E, because, one thing we didn't talk about, FlashArray//E is now available, as we mentioned, at a similar price point to disk, but it's many times more performant. It really opens up a lot of opportunity for us.
One other thing on that, and this is a point Rob made on the main stage earlier that's really important: AI is changing rapidly. Exactly what people need, the different applications, the different ways they need to access their data, all of it is in flux, so flexibility matters. Something FlashArray is good at, and FlashBlade is great at, is that you don't have to access data in one particular way to get the best performance. Large accesses, small accesses, lots of files, a few large files: you get great performance really simply and easily. Organizations that want to take advantage of AI have to be adaptable. Evergreen is the essence of adaptable, and the performance...
The way we deliver that performance is the essence of adaptable, and that's going to be a gigantic advantage over the next several years.
On the question of E, I might ask Amy and Shawn.
Yeah
to jump in. Amy, if you wouldn't mind sharing some of the initial customer interest and uptake of FlashBlade//E; I'm thinking in particular of the customer that took both FlashBlade//S and FlashBlade//E, maybe a little insight there. Shawn, if you wouldn't mind also adding some of the use-case expectations, perhaps, for FlashArray//E.
We've been having conversations with customers for years where they say: Great, I have this 500 TB, or this petabyte, of data where I absolutely value a tremendous amount of scale-out performance, and FlashBlade is a great fit. But I'm not going to put these other data sets on it. One great example of this is rapid restore and ransomware recovery: I have 500 TB I want to be able to restore super, super fast, but I also have the other copies of backup data that I don't want to pay that kind of premium for, and that's more like 5 PB or 10 PB.
What we're seeing initially is all of the adjacencies associated with the use cases our customers are already leveraging FlashBlade//S for. They're using something else for the older or colder, relatively speaking, part of the data, and they're really excited at the opportunity to drive up the density and drive down the footprint and the power consumption associated with those workloads. We see the same thing in log analytics. People want to collect more logs and keep them longer, but not all of them need to be delivered at the level of speed we provide in S.
Having E as that secondary tier for all of that data is a huge interest area for our customers.
I'll just say one thing: we've been incredibly pleased with the ramp we've seen in pipeline growth for the E family. We often get asked what verticals, what sectors, and we see it striping across all verticals. We're in very early stages; we just launched this. We're really bullish.
One part of your question was hyperscalers versus enterprise. Enterprise, because they buy finished systems, that's the way they're used to buying, is obviously going to be first to this. Their decision cycles are a little bit longer today, of course, but six months is a normal decision cycle. With a hyperscaler, it's a co-engineering activity. Decision cycles can be as long as 18 months, possibly longer, because it's all part of their next-generation data center design. It's a design win that you have to get into, so by definition that'll be longer, and of course, we won't have announced one of those until we have it.
Just before you ask, Aaron: Ajay, I think you might have something to add on the subject.
I just want to mention that as you think of the hyperscalers, you should think of two opportunities. There is the core, which has a lot of the NIH; the teams really defend that, and you've got to go more senior and all that. Then there are the adjacent applications, where a lot of times just our standard systems are very attractive for them. The Meta win has been more in the adjacent space. And they're really happy, and we've got more opportunity over there.
The other thing with the E family: the big advantage you get is that, with our traditional high-end performance systems, our competition actually has a standard play against us, because they can just lower their price point. The structural cost disadvantage doesn't really come up, because there's enough margin in there for everybody; they'll lower their price point and compete with us. For E, they really don't have any competitive product. We can penetrate with E and then go back up and sell our standard S. You get the portfolio effect with that combination.
Please, Aaron, go ahead.
Thanks, Paul.
It's Aaron Rakers at Wells Fargo. I appreciate you guys doing this today, and great to see you all. This is going to build a little on these last two comments from Charlie and yourself. When I look at this pyramid you threw up there, $30 billion-$50 billion from the hyperscale vertical, Charlie, you mentioned it takes customization and design-in at these cloud companies. I believe today 90% of that data capacity is probably hard-disk-drive-based.
Absolutely.
I'm trying to understand, what do they need to do if they don't use Pure?
Right.
Right? How do they evolve off of their own intelligent layers for writing to hard disk drives?
Right.
Do you see these cloud guys doing that on the flash side, or does this just naturally open up? And I'll throw out the secondary question, which builds on your answer there. There's been this discussion around the AI RSC being completed.
Right.
Right.
Yeah.
There are other iterations, new data center build-outs at Facebook. Am I to assume that you are equally involved in those future data center build-outs and expansion within Meta as we look forward?
Right. All right. I think the best way to describe how the hyperscalers look at flash is from a historical perspective. Obviously, they started out with hard disk, because that's what was available. Hard disk had been around for a very long time, and the open-source software necessary to manage hard disk environments had been around a long time. Of course, hyperscalers are expert at using open-source software. All of their software was designed to operate on hard disks, and when they do use flash, they use SSDs. Again, they're managing it as if it were hard disk: the easiest thing to do, and it gives you an immediate performance advantage.
Most of their flash, not all, but most, is embedded in the servers themselves, because servers still carry local storage, so-called DAS. They have the same challenge as our competitors if they wanted to use flash directly: they would have to build it into their core software, which today they haven't. I would argue they didn't really build expertise around hard disk management themselves; that came with open-source software. They manage the large-scale use of hard disks.
Now they would have to go back to fundamental basics and start doing the type of analysis that we've been doing for 10 years: analyzing not just how flash works, but how individual flash chips behave, based on a specific run from a particular manufacturer and the specific strengths and weaknesses of each row and column inside that flash chip, and do that for every new generation of flash chips that comes around. They'd have to rewrite the software they use to manage hard disks, knowing those differences in flash, in order to get all this advantage.
You're talking about many, many years, and you have to ask yourself, is that really worth the investment versus all the other opportunities they have in front of them? The second thing I would say is that there are ways we can modify our product specifically for a hyperscale environment that would bring our costs down even lower. The way they structure their storage is quite different from an enterprise, and we know of many ways in which, in their particular environment, we can reduce costs even more.
Really, I believe their easiest path, their least expensive path, their fastest path would be to go with us, which is why I think we have a shot. That doesn't mean we can beat their internal teams in every case, certainly not against the not-invented-here point of view.
No, absolutely. A lot of the big ones have fired up teams already, and they're noodling on this problem and working on it. We know it's a multi-year effort. It's complex technology, and a lot of times when we meet those teams at the various Flash Memory Summits and so on, we know that they've got a ways to go.
The last question.
I just want to comment on the RSC being complete. What they meant by that comment was that what they've built currently is up and working. It doesn't mean the plan is finished. They had spoken before about how their plan was to get to an exabyte. They've done two phases, and they're currently at about half an exabyte. There's no reason for us to believe that their plans have changed. The "complete" statement meant that they had done that first build-out, and it was up and working. There was some confusion in the market, and I just wanted to make that clear. We continue to have a great relationship with them.
The deployment is working better than their plan, better than their expectations. We're in a lot of conversations with them, which is probably the best way to put it.
Just to be clear, that question was on Meta.
Meta.
For those that couldn't hear you, Aaron. We have Meta with us. Would you like to ask a question?
Ba, bum.
I'll break up the AI questions for a second. Again, Meta Marshall from Morgan Stanley. You talked, I think in the keynote, about the cloud being a business model; it wasn't necessarily something people went to for cost, or maybe that was in somebody's script. I guess I wonder, we've gone through this cloud optimization period, and you're clearly talking about some of the advantages you have in making on-premise storage more efficient. I'm just wondering, where do you think customers are in that?
Are they going to re-accelerate and move more workloads toward the cloud, actually moving some of these applications there using some of the software you offer, or are they thinking they'll leave more on premises?
You know, anecdotally, you're getting all sides of this right now, and I'm sorry.
No, go ahead.
Anecdotally, you're getting all sides of this, and we're in the middle, I think, of the learning process for many organizations as to the difference between developing in the cloud and running production in the cloud. For any developer, if you're not developing in the cloud, I think you're a little bit crazy, because it's easy and you pay as you go. Development typically doesn't consume a lot of compute time or storage. It's when you get into production that the costs really start to build up and work against you. We see customers who all of a sudden get sticker shock once they're in production, seeing what it's costing them, and they're starting to pull back.
You still have a lot of customers in the early phases of their development environments, so I think you're hearing both. There is going to be a balance of on-prem and in-cloud as customers figure out the right economic mix.
Shawn, did you want to add any comments on this?
You read my mind. Actually, just two hours ago, I met with a large Fortune 50 company that explained this exact issue. They said, "We've invested, and we've moved things over to the cloud. We're thinking about bare-metal deployments. We're thinking about on-prem. We're trying to evaluate how much to repatriate." They walked through their model for figuring out this balance between agility and the long-term footprint. And they simply said: Please help us. How do we make this transition happen? What are the technology building blocks to get there? They're a very sophisticated shop; they've brought in some outstanding technologists, and they're in the early phases of figuring this out.
I did want to add: you made the comment about on-prem, but we certainly see ourselves as having ambitions, and strong motions, to go into the cloud, with Cloud Block Store, with Portworx, and more. We see the opportunity not just on-prem, but across the entire storage space.
Yeah, I'll just add one more thing to that. One of the things I've seen, certainly driven by customers paying closer attention to cloud costs, is much more balanced thinking about where they truly get value from the cloud versus consuming infrastructure on-premises. Beyond the things we've highlighted, one I'd add is that a lot of the customers I speak to have been surprised not just by the headline cost, but by all the costs they didn't realize they were signing up for: the access charges, the network transmission charges, the API calls.
It's indicative of three, four, five years ago, when you had this wave of companies saying, "We're going all-in on the cloud." What people are realizing now is that going to the cloud while continuing to live an on-premise lifestyle is actually really, really expensive. The other question I sometimes get is: "There's a lot of hype around AI, a lot of people looking at analytics. Is all that going to the cloud?" I think that's another area where, because of the recent focus on cloud cost awareness, people are going in understanding that the whole game of AI or analytics is to gather a bunch of data and then process it a lot.
Well, if I'm getting charged to touch my data every single time I read it, that's a pretty heavy tax to go in with. Like I said, beyond everything the other folks said, there's overall a much more balanced and mature view of where am I getting value from the cloud versus where the on-prem technology stack is going to do better for me.
Meta, did you have a follow-up?
No, I was going to make-
No?
If anybody's name was Microsoft.
Okay, well, Sidney, then we'll go to Wamsi after that.
I'll go first. Thanks for doing the presentation. One of the pushbacks I'm getting on the low end, the FlashArray//E family, is that hard drive companies are claiming their cost-per-gigabyte improvement is going to reaccelerate with HAMR and the like, going back to maybe 10% a year, maybe 15% a year. On the other hand, flash memory prices are... well, maybe now they're low, but the cost is actually going to increase and get more expensive; capital intensity is higher and all that. Does that really change the calculus of how you think about it? Let's say that's true, that they end up equivalent in terms of cost improvement.
Does that change the way your customers think? Or is it really more that the energy savings, the space savings, all those things, matter more to customers than just the hardware cost?
Let me start. I'm a very big believer, in fact, for most of my career I have followed exponential cost-performance curves, Moore's Law, if you will, for each of these industries. Just look at the last 20 years, or 30 years for that matter, of price-performance improvements in flash and disk. Don't get confused by what it's doing this quarter or last quarter, or even on an annual basis; it goes above and below the line. Over any five-year period, you'll see it's very steady. Flash has been decreasing in cost faster than disk for many, many years, and if anything, disk has flattened out. These things don't change. The second thing is more important.
They love talking about exactly what you're describing, but you have to remember there is a single head operating on that disk. Imagine you have a one-gallon bucket of water, and you put a one-inch spigot on that bucket; you can get water in and out of it pretty fast. Now imagine you don't have a gallon bucket but a town water tower, and you put a one-inch spigot on that water tower. How well do you think it's going to supply the town? You can put more bits on the disk; you just can't get them in and out any faster. Even setting aside the slower density improvement of disk, the IO is just not going to allow it to keep up.
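The water-tower analogy reduces to simple arithmetic. The 200-IOPS figure below is a rough assumption for a near-line drive, not a quoted spec; the shape of the curve is the point.

```python
# A hard disk has one actuator, so random IOPS stay roughly flat as
# capacity grows; IOPS available per terabyte stored keeps shrinking.
DISK_RANDOM_IOPS = 200  # rough assumption for a 7.2K near-line drive

for capacity_tb in (4, 10, 20, 40):
    per_tb = DISK_RANDOM_IOPS / capacity_tb
    print(f"{capacity_tb:>3} TB disk: {per_tb:6.1f} IOPS per TB")
# The bigger the bucket, the thinner the spigot per terabyte stored.
```

Under these assumptions, a 40 TB drive serves each terabyte with a tenth of the random access a 4 TB drive could, which is the "more bits, same spigot" problem.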
A few things. If you remember the chart Charlie showed about disk versus HAMR, you'll notice that peak disk was basically 2012, not coincidentally the year we shipped something. Okay, that was a coincidence we got lucky on. Disk, yes, the cost is still cheaper if I'm just buying a byte of disk. But powering that disk, maintaining that disk, the human effort to run that disk, that's where the advantage is just overwhelming, and that's not even counting data reduction. When we started with the E family, we talked about pricing and TCO without counting any data reduction. If we get data reduction, we're beating the disk by miles.
Think about the progression as we get to where we have one-tenth the devices, and the devices are 10 times more reliable. That means one-hundredth the number of incidents happening in your data center, where you have to go swap something out, where an array has to rebuild a bunch of data. On disk, that rebuild disrupts performance tremendously. And then there's the e-waste. I saw something from one of our salespeople at our kickoff a couple of months ago: a mid-sized data center in Europe will supposedly pay 15 million EUR more for power this year than last year. Not 15 million EUR for power, 15 million EUR more. I don't think any of us think power costs are going to plummet anytime soon. As flash pulls away from disk in density-
... All of those TCO advantages get gigantic. Yes, flash will continue to get closer and closer on raw price, and maybe one quarter or two the flash price will go one way and the disk price the other. But look at the disk vendors' roadmaps, look at the flash vendors' roadmaps, look at what we're saying about our roadmap on density, and you see a gigantic divergence. None of the disk vendors are out there saying, "Oh yeah, we're going to do a 100 TB disk in a couple of years, a 200 TB disk in a couple of years." That's what's going to do it. And space in your data center: do any of you work for a firm that has a data center in Manhattan? I know many of you do.
Go ask your data center people: what does a square foot of that data center cost? What does it cost to cool it, to power it, to hire the people to run it? Amy, I think, mentioned earlier this week that by the end of this year we're going to get to the point where we can put 100 PB in one rack. In a couple more years, we'll be able to put 400 PB in that rack. Think of how much of a difference that is. It's not about worrying what one byte of disk costs versus one byte of flash. That crossover will happen, several years from now; it's not going to happen in the next five years.
When we're talking about that time frame, when one byte of flash costs the same as one byte of disk, that'll be when disk starts collapsing. What's going to drive that collapse is the space, the power, the energy consumption and wastage, and the human effort, all of those benefits going so much in favor of flash.
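For a rough sense of the space math behind those rack numbers: the 100 PB and 400 PB figures come from the discussion above, while the disk rack figure below is an assumption for comparison, not a quoted number.

```python
# Racks needed to hold one exabyte at different rack densities.
EB_IN_PB = 1000  # decimal petabytes per exabyte

scenarios = [
    ("flash rack, end of this year (stated)", 100),
    ("flash rack, couple more years (stated)", 400),
    ("disk rack (assumed ~10 PB)", 10),
]
for label, pb_per_rack in scenarios:
    print(f"{label}: {EB_IN_PB / pb_per_rack:.1f} racks per EB")
```

Under these assumptions, an exabyte drops from about 100 disk racks to 10, then 2.5, flash racks, which is where the floor space, cooling, and staffing arguments come from.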
Go ahead.
This is not a topic that we enjoy talking about.
Please, please.
I was just going to say that, in the short term, listening to some of the earlier calls, there has been a little bit of confusion: when we say $0.20 a gig, that's the acquisition cost plus three years of support. Sometimes what comes back is, "But I can get my disk at this price," and that didn't include the support. Let's keep that straight.
It's also system level.
Yes, system level, exactly.
I'll just add one point. If you watched NTT, there was a gentleman named Scott on stage during the first day. After the meeting, I asked him: what is the most significant transformation you've had as a result of flash? He said, "There's something I didn't talk about, which is," to Coz's point, "what happens with human capital." He said, "Before I began this journey, I had all these storage administrators who were concerned about the failure of storage. They spent their time basically pulling drives, walking up and down the aisles, yanking stuff out, trying to fix it. I no longer have any storage administrators in my entire NTT data center. They're all automation engineers. They're data engineers.
They're working on a very different set of things." How do you capture that in the price of the individual byte of hardware? His whole environment has been transformed, and I think that trend, the evolution of the storage administrator, will apply across the industry.
You had a follow-up, right?
I do have a follow-up.
Okay.
Thanks for all the responses. A little bit of financial impact here. Your revenue split today is roughly 50/50 between product and subscription. When the Pure C family starts to ramp up to a bigger part of the revenue, should we expect that ratio to stay relatively consistent? It is a much bigger dollar amount for C, but I wonder if the attach rate of subscription will be similar.
You mean E, the new E family.
Sorry.
Yeah.
Correct.
Yeah. Well, I just want to remind everyone, even though we're not talking about financials, that Q1 is our seasonally lowest quarter, and because subscription revenue is steady, subscription will be at its highest as a percentage of revenue in Q1. As we go through the year, not that we wouldn't like subscription to eventually get above 50%, but you're going to see that more in Q1 than in later quarters. We talk about this all the time.
FlashArray//E will also be available by subscription, but because we're entering a whole new area that wasn't open to us before, if it grows really fast, the CapEx portion will most likely grow faster than the subscription portion. That could change the trajectory. Our commitment, effectively, is to grow ARR at 30% per year or better, and we believe we'll stick with that. What percentage it is of our P&L is less relevant.
Yeah, to be clear, that bias on E toward CapEx, I think that's really an early-ramp effect.
It's early ramp.
Right?
Yeah.
It's going to be really interesting to see how that plays out, because we had a record quarter with Evergreen//One, and it was across our entire portfolio. I would be surprised if we don't see some bias over time to Evergreen//One for the E-Series as well. I do agree with Charlie that the early ramp will probably be biased to CapEx, but I'm not so sure about the medium and long term.
By the way, with Evergreen//One, for example with FlashArray//E, you can even start at a lower commit, half a PB. You don't even need to buy a PB, because the minimum reserve commit is lower. That lower entry point will drive more adoption. Given the macro, I wouldn't be surprised.
I know this group, so let's not slide further into the financial questions. We'll go back to product and technology, please. Wamsi?
Yeah. Thanks, Paul. Maybe I'll go at it this way. Wamsi Mohan, Bank of America. Charlie, you opened today with your market share slide, and I wanted to think through that a little as we look over the next few years. You showed about eight points of share gain in the storage industry, and you've shown some very compelling features, products, and potential development roadmap. There was also a chart about HDD, SSD, and DirectFlash, and it's kind of exponential, right? When you think about that, how do you think about the rate and pace of share gains?
Yeah.
Who do you think that comes from over the next few years? Then, to throw in a slightly financial angle, sorry, Paul: Kevin, I think you said about 100 basis points of margin improvement annually over time. I'm curious whether that's a gating factor when you think about exponential growth opportunities.
As we look at our market, thinking through the last 10 years and looking forward, I feel like over the last 10 years we've been fighting with one arm tied behind our back, because we could only fight for the primary storage market. Our customers had to buy from our competitors, because they had a full range of equipment. Beyond that, remember, when we started out, as Rob mentioned, we were feature-poor.
There were many features that customers had come to depend on in the enterprise market, and even if they liked Pure and our overall value proposition, if we couldn't support some set of features they relied on, we couldn't win that business. It's been a long journey to get to this point, and I think we're now feature-rich. We're certainly not lacking in features; there may be one or two here or there that might prevent a sale, but that's getting smaller every year. Secondly, we no longer have one arm tied behind our back, because now we can fight for the low-price storage as well.
My expectation of my team, and I want to be careful how I answer this question, is that we pick up market share even faster than in the past. I'm saying this in the middle of an economic slowdown, and economic slowdowns favor incumbents, so we're not quite there, but I think we're building all the muscle necessary to grow faster than we've seen.
That's with the expectation of modest operating margin expansion?
Yeah.
Let me explain a little bit of the why on that. We always knew with our E family that we were going to be competitive on price. We've been thinking about that for some time, including when we've talked about our long-term expectations on operating margin expansion. I've also talked about where we see those opportunities for expansion: subscription gross margin, R&D in terms of our global expansion, and sales and marketing as we continue to penetrate enterprise and hyperscalers.
Just on that share: who do you think would cede the most share? Also, you spoke about more file-based potential as well, which squarely puts the filers in play.
Yeah.
I'm just curious if you think the relative share gains would be higher from some.
My overly simplistic answer is the weaker players. We compete with a wide variety of, let's say, competitive skill sets out there: some more relationship-driven and political, some more feature- and technology-based. The largest player is just a tough street fighter, and I think that's an area where getting the opportunity in those accounts tends to be a little bit harder. But I think we're going to get it. To be honest, I think we'll get the share pretty much equally from all of them.
Okay.
Hey, guys. Ati Orazi on behalf of Krish Sankar from TD Cowen. Thanks for hosting us. Two questions, please: one about the cloud, the other a more strategic, high-level question. On the cloud, you talked in one of the slides about data reduction ratios of five to one, and I think we've heard 10-to-one ratios before. I'm wondering how that compares to what the cloud service providers have internally; our understanding is they also have deduplication software on hard disks. Just wondering if you can give us any color on that. And in terms of power, I'm not sure if you're able to share this, but one of the clear reasons Meta chose Pure is power savings.
Correct.
You guys haven't quantified those savings compared to Meta's internal storage architecture. I'm wondering if you can give us any color on the difference between the two; that would be helpful.
Yeah. Ati, let me take the first part of the question. On the cloud providers, the question was: hey, they've got deduplication, so what are your ratios versus their ratios? At the end of the day, it actually doesn't matter all that much, because the cloud providers are not passing that economic benefit of data reduction, if there is one, back to the customer. The cloud providers bill the customer based on how much storage they've provisioned. The customer says: "Hey, I need a volume that might be up to 100 TB." Well, guess what? You're going to pay for the 100 TB.
You don't write a lick of data there, you're going to pay for 100 TB. You write entirely unique data there, you're going to pay for 100 TB. You write deduplicatable data there, you're going to pay for 100 TB. The same set of software features we've developed for our FlashArrays, that same set of data reduction technologies, we can apply on top of the cloud infrastructure to drive additional cost savings. The other part of your question was: you've seen five to one, you've seen 10 to one; where's the range? What's the deal there? It really depends on two things. One is: what's the application set?
And two: what is the customer doing in terms of data workflows? In the particular case Shawn highlighted, the customer was running databases, a database workflow, an application we serve quite a bit on-premises. We know that application works very well and gets very high data reduction rates. This customer was also making use of our snapshot and clone features; they were using part of the cloud environment to do some testing. You've got a database, you want to clone it and make some small changes. Well, guess what? If you do that directly on the cloud infrastructure, you're going to get charged for that entire clone.
We can pass the benefits of our thin clones, if you will, along to that customer, and that's where you can see a pretty dramatic range of benefits. I think the second part of your question was about the Meta RSC design win, where Meta highlighted significant power and energy savings. If I recall, at the time, and again, we had early discussions about different ideas, they were evaluating generally disk-based systems, and I think the benefits were on the order of 8x-10x.
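To put rough numbers on the provisioned-billing point above, here is a small worked example. The per-TB price and the written-data figure are made-up placeholders; only the 5:1 reduction ratio echoes the slide discussion.

```python
# Cloud block storage is typically billed on provisioned capacity,
# regardless of how much, or how reducible, the written data is.
PRICE_PER_TB_MONTH = 100.0   # hypothetical list price, for illustration

provisioned_tb = 100         # customer asks for a 100 TB volume
written_tb = 40              # what they actually write (assumed)
reduction_ratio = 5          # 5:1 dedup/compression, as discussed

print(f"provisioned bill : ${provisioned_tb * PRICE_PER_TB_MONTH:,.0f}/mo")

# A reduction-aware layer only needs backing capacity for reduced data.
backing_tb = written_tb / reduction_ratio
print(f"backing capacity : {backing_tb:.0f} TB "
      f"(${backing_tb * PRICE_PER_TB_MONTH:,.0f}/mo of raw capacity)")
```

With these placeholder numbers, the provisioned bill is $10,000 a month while the reduced data needs only 8 TB of backing capacity, which is the gap a reduction-aware layer can exploit.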
Yeah, they had a certain number of racks the storage needed to fit into, a certain number of network ports available per rack, a certain amount of power per rack, the space limit of that number of racks, and a performance requirement for the storage. They looked at a number of alternatives. When they came to us to talk about this, we were trying to judge who our competition might be, so we went out and looked at the spec sheets of all the possible alternatives we could find, and most of them failed utterly in some way, shape, or form. They needed five times more network ports than were available, or 10 times more power than was available.
Some of them could fit one part of the requirements and not come close on others. But the benefits were clearly there: we were the only solution external to Meta that could come close to what they were asking for. It was that combination of the density, the power efficiency, and the performance.
Yeah, their own solution would have required them to get another data center.
Got it. Thank you. Now the strategic, high-level question. One underappreciated area of Pure's growth potential is the international market. It's only 30% of sales, but the all-flash array market there is as big as North America's, and your share there is around 10% versus 25%-30% in North America. As Pure shifts focus from North America to international markets, how should we think about that transition impacting Pure's operating model? And how are customers internationally different from customers here?
I don't think you'll see it affect the operating model, because we're already investing more heavily internationally than we are in the U.S. As we grow internationally, you would expect that to become more productive over time; I think we have the right level of investment. Part of it is that we've been growing successfully in the U.S. alongside international growth. Clearly, our investment in Europe and APJ came subsequent to our start in the U.S., so we're a little bit behind in those areas. We're trying to catch up; we're certainly investing heavily in this space and expect to continue.
I'm sorry, the second part of the question was?
No, you hit them both.
That's it.
I hit them both. Good. Yeah.
We're getting a little short on time. We're almost to the end. We're going to do two more questions. We're going to go here and then here.
Yeah, thanks. Jeff Koche with Raymond James. I wanted to sneak in one more question on AI and cloud. On the TAM slide, you had $30 billion-$50 billion that you thought was an opportunity from cloud, and I think you said 10% of that might be SSD. That's still $3 billion-$5 billion. I think it's probably pretty safe to assume that's being deployed in these high-end AI and ML workload areas.
Oh, no, I would say the vast majority of that storage right now is in so-called near-line storage, meaning just warm storage, general disk-based storage for general-purpose use.
But these AI nodes and these clusters, they're most likely going to be using flash, wouldn't you say?
That is correct, yes.
Yeah. In addition to high-performance storage, you need high-performance networking and GPUs, obviously. What about InfiniBand versus, say, RDMA over Converged Ethernet? Has this been keeping you out? Is there maybe an opportunity for that standard to help you guys as well? Thank you.
Absolutely. It's a great question. We've had this partnership, the AIRI partnership that Ian talked about, with NVIDIA. Certainly the fact that at this point we don't have support for InfiniBand does limit us in some of the opportunities where InfiniBand is the standard NVIDIA offer. As Charlie alluded to, we do therefore, in some sense, compete with them, but we are looking to address that and become compliant over time.
InfiniBand does have some scalability limitations over time. As Ajay mentioned, InfiniBand deployments are certainly areas that we're going to look to penetrate more deeply than we do today. Yeah.
Okay, we'll take the last question, and then Charlie will have a wrap-up comment, please.
Okay, thanks. Ashish Sabadra, Credit Suisse. I'll ask both of my questions now. You've referenced a lot of products that you've released over the last few years. I'm curious how you're thinking about R&D spend: are there potential holes in the portfolio, or areas you'd like to augment or improve? My second question is, with the theme of a more unified portfolio, how are you thinking about the go-to-market? Is it changing? Should I repeat all that?
Please.
Okay.
Sorry about that.
That was Ashish, from Credit Suisse. You've referenced a lot of products and software features that you've released over the last few years. Are there areas you want to invest more in or augment, any holes in the portfolio? The second part of the question was about go-to-market: you have a more unified portfolio now, you're not selling with one hand behind your back, so how are you shifting the go-to-market? Or maybe you're not. Thanks.
Yeah. Great.
I'd say, from a portfolio standpoint, I've been pushing for us to drive the density roadmap, going denser and denser, and then evolve Purity, as somebody pointed out, to make sure it keeps up. That drives our core competitive edge. The next adjacency, which we're already going after, is the whole filer space. That's a huge untapped opportunity, so there's more work we can do there. Beyond that, there are some other adjacent opportunities. The hyperscaler space is a big one. To go into their core rather than sell a complete array, we have to adjust, and sometimes maybe do a software license arrangement.
We sell core IP that can potentially work with a more bespoke piece of hardware that they can outsource or buy from somebody else. There are some investments we have to make to enable that flexibility, and it opens up a huge market opportunity for us. Certainly on the cloud side, we talked about Cloud Block Store, and we have it in Azure and AWS. Somebody mentioned, what about GCP? There's tons to do, and the good news is we have a great investment profile and we invest a nice chunk in R&D. We have a global footprint, so we can leverage the best talent in different locations and hire from the upper quartile of the talent pool.
We get tremendous talent in North America, in Prague, and in Bangalore, and using that footprint, we can truly go after all of these things.
Yeah. I would say that another example of things that we expect to invest quite heavily in is Pure Fusion, which we mentioned-
Yeah
which started off in our FlashArray product line on block, but which we're going to take through the entire product line. That's an important area. We're going to keep the same level of technology investment intensity for quite some time, but we do expect to get productivity-
Yes
improvements from the fact that we've gone from, only three years ago, essentially 100% of engineering in the western U.S. to substantial investments in Prague and Bangalore today. We're going to balance out where we do that R&D, which will bring productivity up a little bit while keeping the same level of technology intensity. Then, as we look at the sales force, we've been signaling for the last two earnings calls this focus on sales training, inspection, management, and so forth. A lot of that was in anticipation of coming out with the E family and competing across our entire product line.
It fundamentally changes the sales approach: from selling individual arrays into individual use-case opportunities, to selling a portfolio and, actually, selling the company as a partner to enterprises. I won't underestimate the task. It's a significant change in focus for our sales team, hence the big investment in how we're training them and in the ongoing training we're doing every quarter.
Yeah, let me just add a little bit more on the go-to-market front.
Yeah.
Obviously, building on your training point, the total cost of ownership advantages are so tremendous.
Yeah
Getting that cascaded down throughout our field is going to be really compelling for us from a go-to-market standpoint. Then I think we'll focus again on enterprise expansion. Europe, and international generally, is a big area of opportunity that we see as well. We think there's opportunity for traction with the channel, especially with the E family, and for even more penetration with the channel on that front. Obviously, the hyperscalers continue to be a big focus of ours as well. Then, back to your question, Pinjalim, on R&D investments: Coz and Rob need the Purity investments for 300 TB, so we're going to work on that as well.
By the way, on the R&D question, I'd just mention Portworx. Portworx is another area we're focused on.
The questions were fantastic. Thank you so much. Charlie, you want to bring us home?
Sure. Well, again, we really want to thank you all, not only for your time today but for coming out; it's a big investment by all of you, in both time and money, and we very much appreciate it. We feel that the ability to compete for the entire storage estate in our customers' environments lets us come forward with sustainable competitive advantages that we believe are compelling. We have to convince our customers and, most of all, our channel that this is something they should be investing in as they go forward.
This is the only business I've been in where a 10x improvement is still questioned as to whether it's enough; it's a very conservative market. I hope we've been able to convince you that a 10x improvement is enough. We think it's going to be an exciting time for Pure, and we're looking forward to the fight. Thank you all.
Thank you.