Welcome to the Pure Storage product and technology focus meeting for financial analysts. Let me remind you that we will be making forward-looking statements today that are subject to assumptions, risks, and uncertainties. Actual results could differ materially from those anticipated due to a number of factors, including those referenced in the detailed disclaimer at the beginning of our presentation slide deck and in our public filings with the SEC, which we encourage you to review. The presentation slides discussed today will be available on our investor relations website at investor.purestorage.com.
Hello, hello, and welcome to our product and technology-focused financial analyst meeting, which is being held in conjunction with Accelerate. We're so happy to see so many people here in person, as well as those attending virtually. We really appreciate your interest in Pure Storage and, frankly, the support from so many of you here. On to the agenda for today. The plan is to start with one hour of presentation material, followed by about an hour of Q&A, for a total of two hours. Charlie will start us off with Pure's platform strategy and the enterprise data cloud, key enablers for us moving forward. Rob will then talk about Pure's fundamental advantages, which frankly are still not super well understood on the street.
Prakash will talk about our customer journey to Pure's cloud operating model, which is also an area that's not super clearly understood, and we hope to increase that understanding here today. Bill will wrap us up with Pure's hyperscaler business. I hear there's just a little bit of interest in that, so we thought we'd cover it. As Charlie comes up here to start us off, I'm going to remind you all, as I've been known to do, that at Pure Storage, it's all about one thing: the software, the software. You're going to hear that all day today. Charlie?
Thanks, Paul. Thank you all. Good morning. Thank you all for coming. I appreciate you spending time with us this morning. Yes, I'm going to be speaking to you about our overall strategy and how it all fits together. This is substantially the same presentation that I'll be giving this afternoon to our New York customers, and there's a lot of similarity to the presentation I gave just three months ago at our Las Vegas Accelerate conference. I'm going to start off by just saying that we continue to outperform the market. Our expectation is that we will continue to do so. We had a very strong quarter this past quarter, as you well know.
The milestone that I saw in this last quarter is something that, if you dig down into the financial metrics of us and our competitors, might be a bit surprising. We are now spending as much on R&D as any one of our competitors, regardless of size, if not more. We're spending more than most of them, and we're now at the same level as many of them. Since we spend it on just one software environment to cover all of the different areas of storage, we can be very efficient and have a lot of leverage in that investment, unlike some of our competitors. We think this is a major milestone as we continue to scale in this area. There's really no opportunity in the data storage market that we can't go after at this point.
If we look at the metrics of the company: 13% year-over-year growth, strongly profitable this last quarter. In addition to that, as you know, we're now in the 40% range, and probably will be through the year, in terms of the percentage of our revenue from subscriptions. As I mentioned, not only do we continue to lead in terms of the percentage of our revenue we spend on R&D, but now also in terms of the total dollars. The question might be, what does spending on R&D give you? One of the things it gets you is, consistently, and now 11 years in a row, the top of the charts in the Gartner Magic Quadrant. The other thing is very, very high customer satisfaction. As you know, we've always led in terms of net promoter score, and I don't want to stand in front of the number there.
We've always led, and we continue to lead. It's consistently above 80. It's not just based on the customer service that we provide; frankly, it's the quality of the products and the quality of the customer experience with our company and with our products. Now I'm going to switch gears a little bit and start talking about what it really is about our products, and the direction we're going in, that we think is going to impress our customers the most. I might start from a surprising point in time because I want to address the elephant in the room. What is the elephant in the room? It's AI, okay? So bear with me. The thing about AI that we think is really going to be driving customers as we go forward is the fact that it changes the relationship between software and data.
Before AI, you may remember, a famous individual once said that software was eating the world, right? Software has really been the driver of the fortunes of tech companies and the driver, frankly, of productivity in business overall. What's happening now is that data is starting to drive investment, and certainly starting to drive the performance advantages of some companies over others. Maybe data is starting to eat the world; maybe, as many of you have discussed, data is even eating software, putting software's position as the dominant element somewhat at risk. It's really changing the nature of software and data. Of course, data is where we play. How do we help in this environment? Let's step back for a second. Let's talk about enterprise data. What do I mean by enterprise data?
I don't mean sort of the files that you all share with your colleagues in your business. I'm talking about the data that business runs on. The databases, the unstructured data that corporations collect to be able to get a better understanding of their sales force, of their customers, and so forth. Let's talk about how they manage that. They manage that, of course, in data centers. How are these data centers constructed? Let's take a typical application, which might be a database application. The database application starts with the software in the database and the compute that the database is going to be running on. What an IT organization will do is add storage, add networking, and it becomes a stack. Sometimes they call it a full stack. The full stack is made up, as I said, of compute, switches, and storage. There is another application.
It could be email, it could be just a customer database, it could be their marketing database, and they set that up. It could be their virtualization environment. Each one of these is set up. A more sophisticated customer will have a standard for their compute. They may have a virtualization standard, so they can move their applications around between different computers. They are usually standardized on one network environment. It turns out that they choose their storage bespoke to the application. High performance, low cost, block or file. Each one of these can be quite different, as you can see in the picture. What does that really mean? What is the effect of having this kind of architecture, vertical stacks and bespoke storage? First of all, the storage is provisioned individually and manually.
It does not operate the same across all of these different stacks. They cannot share capacity from one array to another. Each array is unique to its environment. If it runs out of capacity, even if there is spare capacity somewhere else, it does not matter. They have to add capacity to that one array. New services have to be configured manually on every array. Any kind of policy that the company sets for their data as a whole has to be done individually, array by array. Your data governance standards are also implemented manually. Now, think about it not from the storage angle. Think about it from the data angle, how customers manage their data environment. Within the same architecture, data is now captive to the application stack. You have to go through the application to get access to the data. The data is not available otherwise.
Record-keeping is poor. People can easily copy and paste data into different areas, but the records of that, if they are kept at all, are kept manually. From a governance standpoint, it creates a mess, hard to follow. Raw data is inaccessible to applications such as AI. It has to be copied to an AI-specific data storage array somewhere. And because data governance is done array by array and manually, it's going to be inconsistent, because this is spread out all over a large corporation, with different users and different administrators. It is very inconsistent data management. Now let's contrast it. That's not the way that clouds operate; it's not the way that clouds are built. Very different philosophy overall. They're built largely horizontally.
They want everything to be the same across their cloud, not only in one data center, but across data centers. It generally starts with compute, one compute layer. The compute layer is treated as a general purpose compute layer. That's all tied together, of course, with a network. The next thing to think about is what they do with data storage. The data storage is the same horizontally across the entire organization, across the entire cloud. The only difference is they may have different price performance layers. They have a high performance tier, they have a low performance tier, usually a tier or two in the middle, and they'll even have, in many cases, an archive tier, which might be tape. What they're able to do here is any application can access any portion of the storage.
Of course, you need to have the right authorization to be able to do so, but there's no physical limitation on what storage can be used for what purpose, right? It is a completely shared environment that is software defined. It's the management system that determines what storage each application can access; it's a much more flexible architecture. This is what we feel is allowing an enterprise storage architecture now to be changed. An enterprise storage architecture, as it exists today, is vertical, right? Vertical stacks. A cloud storage architecture is horizontal, with everything being the same across each horizontal layer. An enterprise storage architecture is largely manual. It is not orchestrated by software. A cloud storage architecture is fully automated and standardized. Enterprise creates data silos; cloud storage architecture creates accessible data pools. Finally, enterprise storage architecture is very physical, with physical connections.
Cloud storage architecture is virtual. It's all based on software, what gets connected to what. We believe this sets up for the industry a new vision for data, for how data will be handled inside the enterprise. In particular, as you might be able to guess by now, it's a cloud. We want to bring a cloud operating model to enterprise storage. How do we do that? We've been working on this for many years, as many of you know. It starts with Purity. Purity is a common operating environment for any type of storage, with our now very famous Evergreen architecture, which means it never gets old. We started Purity with block, but it now provides for block, file, and object.
In addition to that, it supports all types of data, everything from the highest performance in AI all the way down to low cost, now replacing all disk environments. We continue to make progress in hitting ever lower price levels for low performance storage. We also provide all kinds of services, all embedded within the Purity environment, and we continue to add more and more services. This is what we mean by a unified data plane: one software environment that covers every level of price performance, as well as all different types of data. It is all managed by one management system, which we call Pure One, and it can be acquired either as a product or as a service through our Evergreen//One offering.
The big reveal, if you will, that we announced in Las Vegas, and that we are going to continue to focus on, is the capability that we call Fusion, which builds on that unified data plane. We're able to support all kinds of enterprise apps, all kinds of modern apps. There is really no application environment that Purity doesn't support today. With the addition of what we call an intelligent control plane, instead of our arrays operating as individual arrays, they operate as a cloud of data inside our customer's environment. All of the arrays are able to communicate with one another and to operate as a cloud rather than as individual arrays. This fundamentally changes the way that customers can access their data, use their data, and manage their systems.
When we say that the arrays are networked and operate as a cloud, we're talking about a global data cloud, not just within an individual data center. This is what we mean by an enterprise data cloud, which is the architecture that our system, now with Fusion, allows our customers to build inside their enterprise environment. Just to sum this up, we're taking arrays from being manually provisioned to being auto-provisioned. We're going from dedicated capacity per application stack to automatic load balancing across an entire global network. We're going from governance being manual to global governance of the way data is managed through software control, true software definition of the ways that data is managed. We're going from policy setting being manual to being automatic, which means protection is not applied array by array, but by policy (a small sketch of that idea appears after this passage).
Finally, we're going from data being captive to an application stack to data being globally accessible for things such as analytics and AI. Another way to think about it is that the enterprise data cloud allows organizations to turn their production data into their data lake. Today, data lakes have to be a separate pool of storage. Why is that? They already have the data; it's already sitting in a production environment, and it's already real-time. Why not have that be their data lake? It replaces siloed data, if you will, with a data cloud. You remember I started this with AI. One of companies' biggest challenges is that their data is spread out everywhere and siloed. AI is making companies relook at the way their data architectures exist.
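To make the policy idea from a moment ago concrete, here is a minimal sketch of what policy-driven, fleet-wide management looks like conceptually. It is purely illustrative: the class names, fields, tiers, and array names are invented for this example and are not Fusion's actual API. The point is that a policy is declared once and applied to every matching workload across the fleet, instead of an administrator configuring each array by hand.

    # Hypothetical sketch of policy-driven management (illustrative only;
    # not Fusion's actual API).
    from dataclasses import dataclass

    @dataclass
    class ProtectionPolicy:
        name: str
        snapshot_every_hours: int
        retain_days: int
        replicate_to: str          # target site for disaster recovery

    @dataclass
    class Workload:
        name: str
        tier: str                  # e.g. "tier1", "tier2"
        array: str                 # which array currently hosts it

    def apply_policies(workloads, policies_by_tier):
        """Assign each workload the policy for its tier, fleet-wide."""
        plan = {}
        for w in workloads:
            policy = policies_by_tier.get(w.tier)
            if policy:
                plan[w.name] = (w.array, policy.name)
        return plan

    fleet = [
        Workload("sql-prod", "tier1", "array-nyc-01"),
        Workload("mail-archive", "tier2", "array-sfo-03"),
    ]
    policies = {
        "tier1": ProtectionPolicy("gold", snapshot_every_hours=1,
                                  retain_days=30, replicate_to="site-b"),
        "tier2": ProtectionPolicy("silver", snapshot_every_hours=24,
                                  retain_days=7, replicate_to="site-b"),
    }
    print(apply_policies(fleet, policies))

Changing one policy object changes the behavior of every workload in its tier, which is the contrast being drawn with array-by-array administration.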
We think this is what sets us up for really a fundamental transformation of architecture inside the enterprise. I wanted to speak about one more thing in this area, and that is our launches. I'm not going to go through them one by one, but these are the launches that we're putting out just this year. This is also what our investment in R&D is bringing to market now. I think you'll see that we have an extensive set of announcements and products that we're bringing to market, covering everything from unifying the data plane even further, so we can cover more use cases, more tiers of storage, if you will, to the intelligent control plane that allows customers to more easily manage this, not only to manage our products, but frankly to manage their data sets overall.
I'm not going to go into this, but I do want to touch upon one thing. Again, it's another area that I know there's a lot of interest in, and that is FlashBlade Exa. The reason why I want to touch upon it is because, and Rob will go into much more detail, it shows how extensible and how flexible our overall architecture inside the product truly is. Exa is based on Purity, and it's based on our FlashBlade physical architecture. FlashBlade is our scale-out architecture. Up until now, we've used FlashBlade as both a storage target as well as the product that is able to handle the metadata that allows applications to get access to that storage. What we've done is we've leveraged the fact that FlashBlade is actually the world's fastest metadata engine.
Now, by attaching additional data nodes to FlashBlade, rather than using FlashBlade as the data store, we just use it as the metadata engine. Not only can we add essentially unlimited amounts of additional data at high speed, but we can replicate and expand the metadata engine. We can just keep expanding it and expanding it. What's really magic about this, and this is something that every other vendor would love to have, is that it scales linearly. Every time we add a new FlashBlade Exa metadata engine, it just continues to scale the overall performance, all the way up to, right now in internal testing, 10 terabytes per second of read and 5 terabytes per second of write, roughly 5x faster than anything that's out there today.
We're able to cover the entire range of AI needs: hyperscalers, which is going to be based on DirectFlash, not on Exa, and that'll be covered a little bit by Bill today; FlashBlade Exa, which covers the GPU clouds, and Rob will be covering that; and regular FlashBlade, which is a great product and covers the entire needs of most enterprises in the AI environment. We're very well covered. All of these, Exa and FlashBlade S alike, as is true of all of Pure's products, are covered by Evergreen and allow for upgrades, letting customers preserve the investment they've placed in us. Summing up, the enterprise data cloud is really a new architecture for our customers to think about how they virtualize their data environment. It does a number of things.
It creates a virtual cloud of data, and it creates a very consistent environment for all of their different workloads. It allows for data to be governed by policy and software rather than manually through fingers on keyboards. And we build in cyber resilience; again, that can be done by policy rather than fingers on keyboards, and it's completely software defined. This is a really fundamental architectural shift in the enterprise. It's both an opportunity for us and a challenge, because changing architectures in organizations that have been doing the same thing the same way for decades is not for the faint of heart, but we think we're making great progress here. With that, I would like to turn the stage over to Rob Lee.
Thanks, Charlie. All right, good morning everyone. Great to see everybody here in person. As Paul mentioned, I'm going to spend my time today talking about and going through Pure's fundamental technology advantages. What I'm not going to do is actually go through product by product or unpack the portfolio. Instead, what I want to focus the discussion on is really looking at the various pieces of differentiated technology, IP, intellectual property we've created across the entire storage stack. We'll then transition into looking at, hey, how do we go take those core blocks of IP, leverage them in different ways, package them in different ways to go meet the needs of different markets and pursue different market opportunities.
We'll do a double click into one particular area, which Charlie mentioned, which is the performance and scalability of our metadata, which we see really driving the success of our AI platforms. If you're out there and you've been following us and you're perhaps still wondering, hey, what's that one, you know, critical technology advantage that Pure has that sets you apart from the others? They're in storage, they're an outlier, they do flash, others do flash, there's something different out here. If you're out there still wondering what that is, if I've done my job right at the end of this, A, you'll have the answer to that question. B, you'll walk away realizing it's not one thing.
There are four or five different things that really come together, that we're able to put together and combine in different ways to go after the various market opportunities that we're pursuing as a company. All right, we're going to go deep. Before I do, I want to set some context and really just step back and look at, hey, what's in a storage system? Any storage system out there, one that we might build, one that our competitors might build, one that hyperscalers might build for themselves: at the most basic level, any storage system has to solve a lot of the same needs. You need physical media, something to write your data down on, whether it's hard disk drives, whether it's flash.
You're going to need some software to make that physical media work, whether that's OS drivers, whether it's firmware, whether it's purpose-built software. You're going to need some system hardware, right? You need chassis, you need sheet metal, you need power supplies, controllers to put all this stuff in and to run your software on. Once you've assembled all that, you probably want some way to organize, store, retrieve your data. You want, you know, all the protocols, performance levels. You probably would like, you know, the system to be resilient to faults. You'd probably like some protection, you know, snapshots, replication, that sort of thing. At some level, you're going to need a way to manage, monitor, and administer this thing, right? At a 60,000-foot level view, all storage systems have a very similar set of needs.
The thing is, as you dive down to deeper altitudes, as you come down to 30,000-foot level, as you come down to ground level, even though the needs are very similar across the board, hopefully what you'll see through this discussion is the way that those needs are addressed couldn't be more different in our industry. Let's walk through each of these layers and take a look at how the rest of the industry has tried to approach these and what Pure Storage has done differently. We'll start with the physical media and really the software approach that's been taken to control the physical media on which storage is placed. This is perhaps the most readily apparent difference that we have relative to the competitive set. Everybody else out there is reliant on SSDs to consume flash. If we take a step back, what is an SSD?
An SSD is a technology coping mechanism to make the physics of flash work in a world that is built with software that's designed for hard disk drives. Flash behaves very differently. All the software in the world was built prior to us with hard disk drives in mind. The industry came up with the SSD as a coping mechanism, as a translation mechanism to make that software work for flash. What does that mean? It means that the SSD has a tremendous job that it has to perform. It has to do a ton of work internally to translate flash into a world where it behaves and looks like a hard disk drive. It has to manage the flash at a physical level. It has to perform a lot of remapping, a lot of complex translation of instructions. It has to do background work like garbage collection.
It's really complex software. An SSD is basically like a little computer, right? It's got an ASIC in there, it's got a processor, it's got DRAM, it's got complex software. They just call it firmware. It's basically a computer. The challenge with that is really twofold. One is, you know, as with anything, if you add translation layers, if you add a man in the middle, so to speak, you're fundamentally adding inefficiency. You're adding extra work to be done. That extra work comes at a cost. That cost can be measured in components, in performance, in reliability, in limitations. The second challenge with this architecture is that as flash advances, as the industry builds denser and denser flash, it becomes harder to work with. As you try to put more flash into the drive, you're complicating that task even further. The SSD architecture has reached its limits.
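For readers who want a concrete picture of the translation work just described, here is a toy flash translation layer (FTL). It is a simplified illustration of the general SSD mechanism, not any vendor's firmware: flash cannot be overwritten in place, so every logical overwrite gets remapped to a fresh page, and stale pages must later be reclaimed by garbage collection, which is exactly the man-in-the-middle overhead described above.

    # A toy flash translation layer; purely illustrative, real SSD
    # firmware is far more complex.
    PAGES_PER_BLOCK = 4

    class ToyFTL:
        def __init__(self, num_blocks):
            self.mapping = {}   # logical page -> (block, page) on flash
            self.free = [(b, p) for b in range(num_blocks)
                         for p in range(PAGES_PER_BLOCK)]
            self.stale = set()  # physical pages invalidated by overwrites

        def write(self, logical_page):
            if logical_page in self.mapping:
                # Flash can't be overwritten in place: invalidate the old page.
                self.stale.add(self.mapping[logical_page])
            if not self.free:
                self.garbage_collect()
            self.mapping[logical_page] = self.free.pop(0)

        def garbage_collect(self):
            # Toy reclaim: real firmware must first migrate still-live pages
            # out of a victim block before erasing it, causing extra writes
            # the host never asked for (write amplification).
            self.free.extend(sorted(self.stale))
            self.stale.clear()

    ftl = ToyFTL(num_blocks=2)
    for i in range(12):          # overwrite 3 logical pages repeatedly
        ftl.write(i % 3)
    print(len(ftl.stale), "stale pages awaiting erase")

Every host write triggers this hidden bookkeeping inside the drive; doing it once, in the array software, rather than once per drive, is the inefficiency argument being made here.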
What has Pure Storage done? We had the benefit of not being saddled with retrofit architectures. We did not have software built for hard disk drives. We started our software top to bottom, built specifically for flash. What this has allowed us to do is to build a media software layer, a layer of software to manage the physical flash that doesn't add all this translation work, right? We simply avoid this duplicative work by designing a software layer that allows us to expose the best properties of flash directly to the software that's running above.
This also has another side effect, which Bill will talk to in a bit in his discussion, which is that as we look at different ways of packaging the software, we're now taking on the task of making all the different types of flash and NAND out there in the world behave and look the same. Everybody else relies on SSDs, and the SSD manufacturers have pushed the work of qualification and of handling different firmware behaviors onto the consumer, whether that's the enterprise OEM vendor or the hyperscaler. Let's move up the stack a little bit. Let's take a look at the system hardware. These are the chassis, the enclosures, the power supplies. This is an area where our competitors frankly treat this as an afterthought, right? You can tell this based on what they ship, right?
It's largely third-party OEM hardware. It's repurposed servers. You know, one of our competitors, for their top-of-the-line product, if you look at the manual, says things like: when you install this storage array, you have to leave extra rack space above and below it to allow sufficient airflow to cool the thing so it doesn't overheat, and to allow them to go and service it. That's how much care and thought they have put into the system hardware. Obviously, that results in complexity, bloat, and operating costs. We've taken a different approach, right? In our world, our system hardware is integral to what we do, right? It's integral to the Evergreen promise. Evergreen at its most basic level is a promise that once you're on Pure technology, you're on a path to non-obsolescence, right?
You have the ability to non-disruptively evolve, upgrade, scale to future needs, whether it's future technology, whether it's more capacity, more performance. The only way we can go and deliver that is a hardware platform that has the capability to be forward and backwards compatible. It's designed for a decades-long lifetime. Our system hardware is core and integral to what we do. Also, by owning that system and really investing heavily in it, it allows us to create significant advantages that come through in reliability, performance, and simplicity. All right, let's move up the stack one click, take a look at the main layer of storage software. This is where I would say some of the biggest differences really start to come out. If I look at the competitive set, this is where the retrofit architectures rear their ugly head yet again.
Most of our competitors are building on retrofit software stacks designed in an era of hard drives. Hard drives are basically the same speed today as they were then, while flash has gotten 100x to 1,000x faster. Those stacks were designed in an era when networking was 100x slower and CPUs and processing were 10x slower. As you might imagine, the choices in how you build, architect, and design software in that world are very different from how you would design for the hardware resources we have today. I'll give you a very simple example. If you're designing software to place storage on a hard disk drive, it's a physical moving platter.
If you want to optimize for that, you want to lay out your data sequentially, because it's a sequential mechanism. You want to lay out your data sequentially so that as the platter moves around, you can very easily read it. In a world of flash, which is highly parallel and semiconductor driven, if I want to optimize, I want to place my data on multiple chips and access those entirely in parallel. These are opposing decisions. This is one of many examples of opposing design decisions that are central to software stacks designed either for the hard disk era or for the flash era. As you might imagine, these are incredibly hard to unwind.
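As an illustration of the two layout philosophies, here is a short sketch of striping data across parallel flash dies and reading the stripes concurrently, the opposite of laying data out sequentially for a single moving head. The die count and the structure are invented for this example; this is a conceptual sketch, not Pure's actual layout code.

    # Illustrative sketch: round-robin striping across parallel flash dies.
    from concurrent.futures import ThreadPoolExecutor

    NUM_DIES = 8

    def stripe(data, num_dies=NUM_DIES):
        """Split a buffer round-robin across flash dies."""
        return [data[i::num_dies] for i in range(num_dies)]

    def read_die(chunk):
        # Stand-in for an independent per-die read; on real hardware these
        # operations proceed concurrently on separate channels.
        return chunk

    def parallel_read(chunks):
        with ThreadPoolExecutor(max_workers=NUM_DIES) as pool:
            return list(pool.map(read_die, chunks))

    data = bytes(range(64))
    chunks = stripe(data)
    restored = parallel_read(chunks)

    # Reassemble the round-robin stripes back into the original buffer.
    out = bytearray(len(data))
    for i, chunk in enumerate(restored):
        out[i::NUM_DIES] = chunk
    assert bytes(out) == data

A disk-era stack wants one long contiguous run; a flash-era stack wants many small runs it can touch at once. That is the design fork described above.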
One of the other offshoots of software designed in the era of hard drives is that, because performance was so difficult to achieve, the competitive set generally had to build different software stacks for different purposes. Charlie mentioned this before in terms of how different storage arrays have historically been provisioned for different applications. The competitive set generally has had to build separate products and software for each different need to meet that performance. Those of you who cover software will realize that once you have that fragmentation, it's incredibly hard to bring it all back together. What has Pure done? We did not grow up in an era of retrofit software, and we were not saddled with it. We were able to design for flash since day one. We were able to build from the ground up for the speed and scale that flash was capable of.
We made those hard investments to really take advantage of not just the performance that was available to us back in 2009, but where we knew flash was going. More importantly, we've very, very intentionally invested in keeping a single, unified software base and a single hardware technology. This brings us back to Evergreen and is a key enabler for it as well, right? If you think about the core Evergreen promise, which is non-obsolescence, the ability to non-disruptively grow, scale, and evolve in different ways, how do you do that in a world where you have six different software stacks to meet different performance levels? It's really, really difficult, right?
One of the things that is critical to enabling Evergreen, besides the system hardware, is the flexibility and dynamic range of our single software base, which allows us to meet the entire set of needs from AI to archive. All right, click forward. Thank you. Let's take a look at management and monitoring. Again, this is an area where, if I look at the competitors, it's generally thought of as an afterthought. It's a bolt-on. Frankly, most of our competitors' management tools are more focused on all the tuning knobs and configurations that are required to make those products perform. If you look at how we treat management and monitoring, it's central not just to how we deliver a customer experience, but also to how we build a full as-a-service model. Let me step back and unpack that.
If you look at all of our arrays since day one, they've phoned home data to Pure One. Think about the intelligence we can gather from the arrays we manage: if your home thermostat or your AV system at home can be cloud managed, and that service can be improved over the air, why can we not do that in the enterprise? That's been our focus. That has allowed us to have the NPS score, the customer satisfaction, and the proactive support experience that we've been known for. It also allows us to better understand the arrays over time and transition that into a full as-a-service offering. You heard Charlie talk about the intelligent automation. Prakash is going to go into this quite a bit more. All of this stems from our investment in the management and monitoring part of the stack.
All right, let's move forward. Thank you. We've walked through each of the layers of the stack. What I want to transition us to now is how we put these pieces together. How do we leverage the investments we've made in this differentiated technology at the different levels to meet the needs of different markets? The four markets that we want to focus on are, number one, the core enterprise. We'll take a look at what we're doing on cloud and cloud-native workloads, then ScaleAI, which is more the NeoClouds, the GPU clouds, and then certainly take a look at what we're doing and how these pieces fit together for the hyperscalers. The core enterprise workloads: this is all the workloads you'd find in an enterprise data center. This has been the North Star.
This is where all the pieces come together, and it makes sense, right? Our enterprise clients aren't coming to us with a partial solution. They want us to provide the whole kit and caboodle, right? This is where, by tightly integrating all of the layers of the stack, the differentiation we create at each of these layers, we're able to create a very compelling value proposition across all these layers. You see this coming out in our flagship products like FlashArray and FlashBlade. If we shift a little bit and take a look at what we're doing for customers who perhaps are running on top of public clouds or running cloud-native workloads on top of third-party infrastructure, this is where, in that case, the public cloud, the CSP might be providing the physical infrastructure. They're providing the physical media. They're providing the system hardware.
Customers are seeing value in having the storage services, the same enterprise capabilities, the data protection, the reliability, the sameness of management across on-prem, hybrid, and cloud. You see this model coming out and these areas of IP coming together in products such as Cloud Block Store or Portworx. Shifting gears yet again, if we look at the needs of the ScaleAI community, the NeoClouds, the GPU clouds, the largest foundation model builders, these are folks that have their own higher-level software management monitoring. They have their own higher-level data flows. In a lot of cases, they're specifying their own physical hardware, ultimately kind of optimized for high performance. What they value, what they get from us is the storage software.
The storage software that gives them that scalability that unlocks the performance of the hardware underneath us, but also allows us to fit nicely into their software ecosystem, their management monitoring systems. All right, and then last but not least, hyperscalers. I'm not going to go super deep here because Bill's going to come up in a few minutes and unpack this further. This is where, if we step back and we look at the hyperscalers, they've got their own management monitoring. They think about storage reliability and storage services at data center, cross-data center scale. They design their own system hardware, but they've got a challenge, right? They've got a need to move to flash. They're largely running on hard disk drives today. The SSD architecture I've already explained has significant limitations. We have the capability, we have the means in our DirectFlash software to dramatically improve their path.
The means are really driven by the software, right? There's no secret sauce in our DirectFlash modules themselves. The enabler isn't how we solder chips to a board. The enabler is what we have put into the software to make that work. That's really key to the integration that's driving the hyperscaler business, which again, Bill will unpack further in a few minutes. If we step back, hopefully this helps illustrate how starting with a core enterprise business, where we focus on tightly integrating all these layers to meet the full needs of an enterprise across their entire set of applications, has led us down the path of building core differentiated IP blocks.
We can then go take those IP blocks and leverage them in different ways to meet the needs of other markets, whether that's folks that are running on cloud, whether it's cloud-native applications, whether it's the NeoClouds and ScaleAI, or whether it's the hyperscalers. I mentioned I wanted to do a double click into one area, which is the scalability of the metadata. Charlie mentioned this up front. I want to unpack this a little bit. We talk about metadata, we talk about data. What does that really mean? In its simplest terms, imagine you're trying to load a file. You're trying to load a file and transfer it somewhere.
There's the work of looking up the file by name, figuring out what directory it lives in, figuring out do you have the permission to open it, figuring out, once you've identified that file, where are all the bits of the file located on flash or on disk? Then there's the work of actually reading and transferring the file. There's the administrative work up front, the metadata, and then there's the actual data transfer work at the back. It turns out both are important, as you might imagine. It turns out that the metadata, the administrative work, scaling that, making that performance level super high is actually in many ways more challenging than scaling the data transfer performance.
It's more challenging, and one of the reasons it's more challenging is you've got to figure out a way to organize that metadata, keep track of all those bits and pieces in a way that's highly optimized for parallel access. You've got to keep all that information in a way that makes it easy for you to scale and add performance into the system. The second challenge you have is once you've organized your metadata in that way, you've got to optimize the coordination. You've got to allow that data to be accessed in parallel without, you know, the different processes stepping on each other, without creating wrong results, without creating corruptions, without creating inconsistencies. These are fundamental challenges. How have we solved these? It turns out that we have focused in this area since day one. I mentioned before, we were not saddled with hard disk drive-based software.
We focused on the parallelism of flash since day one. One of the key enablers that has allowed us to do that is how we organize metadata in the system, not just file metadata, but our internal metadata. We've organized it in what's called a key value store. Frankly, we've borrowed concepts from the database community, concepts such as key value stores and distributed transaction engines. If you open up our software and look under the hood, it actually looks a lot more like a high-frequency trading system or a distributed database than a traditional storage stack. What this means is that the scalability and performance of the metadata we can deliver, all of that administrative work paired with the data transfer work, is extremely high.
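Here is a minimal sketch of the key-value idea: filesystem metadata modeled as independent key-value lookups. All key formats and fields below are invented for illustration and are not Purity's actual schema. The point is that each step of opening a file decomposes into small, shardable lookups that can be distributed and parallelized the way a database is.

    # Illustrative sketch: metadata as a key-value store (invented schema).
    kv = {
        ("dirent", 1, "logs"): 2,        # root dir (inode 1) -> "logs" = inode 2
        ("dirent", 2, "app.log"): 7,     # "logs" dir -> "app.log" = inode 7
        ("inode", 7): {"size": 8192, "mode": 0o644},
        ("extent", 7, 0): ("die3", 0x1a00),     # file offset -> flash location
        ("extent", 7, 4096): ("die5", 0x2c00),
    }

    def lookup_path(path):
        """Resolve a path to an inode via per-component dirent lookups."""
        inode = 1                        # start at the root directory
        for part in path.strip("/").split("/"):
            inode = kv[("dirent", inode, part)]
        return inode

    def extents(inode, size, block=4096):
        """Map each file offset to its physical flash location."""
        return [kv[("extent", inode, off)] for off in range(0, size, block)]

    ino = lookup_path("/logs/app.log")
    meta = kv[("inode", ino)]
    print(ino, meta, extents(ino, meta["size"]))

Because each lookup is independent, the "administrative" half of a file access can be spread across many nodes and run in parallel, which is the scaling property being claimed here.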
We see these needs as super acute in technical computing, whether it's chip design, whether it's fluid dynamics, or whether it's AI. This has really been the driver of strength for the entire FlashBlade portfolio since really its inception. Whether it's FlashBlade S for the majority of the enterprise, high performance needs, or as Charlie mentioned with FlashBlade Exa, where we're keeping that strong kind of scalable high performance metadata core and then pairing it with an open hardware architecture to meet the needs of some of the world's largest training and high performance environments.
As Charlie mentioned up front, this now completes our portfolio of serving the needs of AI at the entire range of scales, from the smallest research groups to what the enterprise is doing, which is largely served with FlashBlade S, your NeoClouds, your tech titans, what we're calling the ScaleAI community with FlashBlade Exa, or what the hyperscalers are doing, which is really more of a horizontal high performance design, which we're serving with DirectFlash, which again Bill will go into greater detail on. With that, I'm out of time. I'm going to turn the stage over to Prakash, who's going to walk us through how we're extending the management and monitoring plane to deliver on the enterprise data cloud.
Thanks. As we think about our customers, our goal is to help them move towards this cloud operating model. Charlie unpacked what an intelligent control plane is, but if you think about Salesforce or any SaaS solution on the market, realistically, how they're able to get economies of scale and SaaS margins is by standardizing operations. When you run everything, when you control everything as a SaaS-managed thing, your environment is very standardized. If you go into a customer data center, it's not. Charlie mentioned it's very siloed. It's very fragmented. For anyone who's been in a data center with the number of solutions and the number of operating systems that have historically existed, it's a mess. If you want to start moving people to an enterprise data cloud and get SaaS economics, you need to standardize the components and create standardization at scale.
What that allows you to do is provide instantaneous value, where you can spin things up as your requirements demand. Our cloud operating model is driven to create that standardization. As I unpack this management and monitoring, historically, people thought about box-to-box storage management. It's a capability like: I need this type of data, I need to provision it, block, file systems, volumes, etc. Moving to a SaaS operating model requires a paradigm shift. You can't think about these low-level objects in management anymore. You have to think about how you move to policies and fleets. You need fewer components that are more standard, with simplified policies, where you can move beyond low-level management to higher-level management. This is fundamental to the shift we're driving in our customer and install base.
We're the only vendor driving this innovation to this policy-centric view. That's a step in the direction, but when you move beyond these policy and management objects, you then get into the world of what AI needs, which is: what are my workloads, what are my workflows. Managing data sets is the ability to assign these policies and management principles to workloads. You want all of your SQL workloads to operate this way. You want all of your tier one applications to operate that way. All of these policies create standardization in a data center. It's the reason a lot of hyperscalers who've tried to go on-premises have failed: they expected standardization in a world where there was none. You have to take people along this journey. Now, once you get there, there are two approaches. You could do it yourself.
We can provide the capabilities with our management and monitoring for customer teams to deliver customer SLAs. Secondarily, if you don't want to worry about it, why not trust us as the vendor to deliver vendor-managed SLAs? When we talk about our Evergreen//One offering, it is a vendor-managed SLA. It's a vendor-managed service with standardized building blocks. You can get whatever storage you need, block, file, or object, any type. You can get it deployed wherever you need it in your data center. When we run our service, think about it this way. We're paying for the power and rack space we use. We're just running our service and saying, oh, this is our service. It's a managed endpoint. We're running it in your data center. With our Cloud Block Store, you can get it in the cloud.
We have partnerships with MSPs where you can get it in the colo, and it's available in 32 countries around the world. When we talk about that standardized operating model at scale, what do you want to standardize? Charlie mentioned performance levels. Without a single operating system, you couldn't get unified performance and actually deliver this at any reasonable margins. If you want uptime and zero data loss: a lot of people are thinking about data center consolidations right now. We all know about the power crunch. We have a site relocation SLA where we can help customers consolidate their data centers, their physical locations, because our service can deploy new hardware and move the data seamlessly, transparent to the customer, as part of the site relocation service. With our Evergreen architecture, we have zero planned downtime.
We have guaranteed watts per terabyte with our energy efficiency guarantees, and cyber recovery SLAs in case of a ransomware attack, where we guarantee the time it takes for customers to get up and running based on their data sets. We were unique in approaching this problem set not just from a financial lens of cash or credit and how to move to recurring revenue, but by fundamentally changing the operating principles of how you standardize data centers with this cloud operating model. Now, it's built on the uniqueness that we talked about. You could not do this without Evergreen, because a fundamental principle of a SaaS solution is that, for the same unit economics, your offering gets better over time. If you're paying a subscription to Netflix, you expect more content.
If you're paying a subscription to a storage system, you need to ensure it gets better over time. The Evergreen architecture gives us always-upgradable hardware. This intelligent control plane is fundamentally changing the game by linking together two disparate spaces: observability, where you collect data, and automation, where you can actually take action on the data you collect. Let me drill into this paradigm shift, because I think this is very important for folks to understand. Historically, we've collected about 70 petabytes of data a year in Pure One. We know everything from how the systems operate down at the low-level NAND, and Bill will get into some of that, to how people use storage. Think about Tesla. Their advantage is in miles driven on the road for self-driving.
Our advantage over 15 years is we've collected more information on how people use flash than anyone on the planet. If you take that information and you combine it with Fusion, where our arrays are now networked, Charlie mentioned the network effect of arrays, you can go ahead and actually take action on different things, saying, hey, your environment is unbalanced. Let's dynamically rebalance your environment. You're completely creating standard building blocks and optimizing the landscape. This paradigm shift allows customers to manage fleets, manage sites, these logical sites, have policy-driven workload management, whether it be protection, tiering, etc. The ability to do it continuously is like a CI/CD pipeline for developers. It's not one-time, it's not manual, it's continuous. Our hardware has been getting better over time. Now your operations are getting better over time. That's the power of the intelligent control plane.
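As a conceptual illustration of observability feeding automation, here is a small sketch of a policy-driven rebalancing pass over fleet telemetry. The thresholds, array names, and utilization figures are all invented; this is not Fusion's actual logic or API, just the shape of the idea: the control plane watches the fleet and proposes moves whenever an array drifts past a utilization policy, rather than waiting for a human.

    # Illustrative sketch of a continuous rebalancing pass (invented policy).
    UTIL_HIGH = 0.80   # policy: rebalance arrays above 80% full
    UTIL_LOW = 0.50    # candidates to receive moved workloads

    def plan_moves(fleet):
        """fleet: {array_name: utilization}. Returns (src, dst) move pairs."""
        hot = sorted((a for a, u in fleet.items() if u > UTIL_HIGH),
                     key=fleet.get, reverse=True)
        cool = sorted((a for a, u in fleet.items() if u < UTIL_LOW),
                      key=fleet.get)
        return list(zip(hot, cool))

    telemetry = {"array-nyc-01": 0.91, "array-nyc-02": 0.42,
                 "array-sfo-01": 0.85, "array-sfo-02": 0.35}
    for src, dst in plan_moves(telemetry):
        print(f"move workload: {src} -> {dst}")

Run continuously, a loop like this is the "CI/CD pipeline for operations" analogy: the fleet converges toward policy instead of being tuned box by box.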
This Evergreen//One offering we talked about, you get block, file, object, on-premises, cloud, colo, and all these SLAs are built on this concept of this intelligent control plane. We need to continuously think, you know, you're running a service, you have SREs running the service. You need to be able to go ahead and have the capabilities to say, you know what, if you want the service to run, operate efficiently, how do I change things dynamically in the environment? You need this approach to say, let's keep the site rebalanced, let's ensure that we can offer these ransomware recovery SLAs through policy. As Rob talked about, the core bits of IP, our service offering to deliver vendor-managed SLAs is built on this management and monitoring paradigm with this differentiated intelligent control plane. Now let's think about this competitively. We invented this Evergreen architecture as a core fundamental building block.
Early in our company's history, this wasn't a bolt-on, it wasn't an afterthought. And our Evergreen//One offering now allows us, at the end of a service life, to keep the hardware upgradable and current. It gets better over time. We guarantee in our contract that you will never have a data migration once you move to Pure Storage. That's part of this offering. Most of our competitors, at the end of the service lifecycle contract, say, okay, now you have to pay for migrations, or they have to subsidize that in their unit economics to migrate from one generation of technology to another. Our service definition isn't starting with a box and a lease. We define these standard building blocks and SLAs, set up by site. Here's a service site; this is what I need at that site.
I need, you know, this performance tier and this capacity tier. We have a simple contract. All of our SLAs are transparent. It isn't like all those terrible warranty claims where you have to go and read the exclusions. Most of our competitors have exclusions for planned downtime because they don't have this Evergreen hardware. They have exclusions for data migrations. All of these exclusions exist; I think I said this to you last time, it's like all the fine print in a drug commercial. Our SLAs are transparent, and an SLA has real teeth when there are real financial penalties and customers can monitor and see it in the product. Our management and monitoring in the product shows you any SLA violations; they're triggered right from our product. Finally, because we're running a service, we pay for what we use.
We're renting space in your data center, so we pay for the rack space and cooling we use. These are some of the advantages that make Evergreen//One unique. While we see people on this journey to these SLAs, some customers prefer to do it themselves. With our intelligent control plane, customers with large service delivery capabilities are building customer-managed SLAs. Increasingly, though, we're seeing strong adoption where customers say: you know what, I'm not going to be able to standardize this myself, I've got too much variability in my environment, I'm going to trust you as a vendor to offer vendor-managed SLAs in my environment. We've now crossed over 1,000 customers on Evergreen//One in just about four years. This offering is now seeing broad adoption across a wide variety of sectors.
This enterprise data cloud creates a network effect because once you're on this platform and once you have these policies and once you have the standardization, it's a one-way street. This unified data plane gives you a single operating environment that you can program with our intelligent control plane right into the automation and build in your business. When you can provision things, you get built into customers' workloads and workflows, which creates a platform effect and stickiness for Pure Storage as a vendor. This allows us to make sure the next purchase that a customer thinks about isn't a jump ball. It's not subject to just NAND economics and dollar per terabyte because your value is critical to a customer's workflow. If you think about getting built into a customer landscape, how do you think about vendors who do that well? Salesforce did that with force.com.
It was a platform where you could build extensions right into the environment. Platform companies are unique in their ability to get embedded directly into a customer's core operating environment, into the core IP that's revenue generating or cost optimizing. You're driving integration directly into their businesses. This platform effect with the enterprise data cloud allows us to get built into real customer workflows and become core to standardizing their operating environment. I'm going to summarize and talk about this paradigm shift. Our competitors keep talking about this world of storage management. Some of them are even saying, I'm going to package up database management software with this stuff. They're fundamentally missing the paradigm shift around how you drive standard operating procedures at scale. That's the fundamental thing we need to bring into the data center environment.
Charlie Giancarlo talked about the horizontal nature of unlocking the value. If you don't have standardized components and you don't have standardized operating policies, you're not going to unlock the value of your data. Customers can deliver that themselves or they can trust us to do it. Increasingly, we're seeing customers trust us to do it. With that, I'll summarize and we'll bring up Bill, and he's going to talk to you about something I'm sure you're all excited to hear about, which is the hyperscaler business.
All right, thank you everybody. Thank you for your time coming out today for the last presentation. Hopefully, it'll be quick; I know we're getting fatigued. I'm going to talk about the technical advantage, the supply chain advantage, and the structural advantage of our hyperscale offering today. First off, I wanted to step back and talk about what the offering is, because sometimes there's a lot of confusion about that. I talked about it last year, but I just want to go over it one more time. It's not an enterprise storage array. We're not taking our product, putting it in the hyperscaler's environment, and just asking them to adapt to our solution, which is something that others have done, and you can make some money doing that. This is an offering that is much easier for them to consume.
We're taking our DirectFlash modules and the part of Purity that controls the DirectFlash modules, bundling that, and offering it in a way that's easy for hyperscalers to consume. It's not a point product. It doesn't offer one service. It can be used across their entire production environment. We take the parts of the DFM, which is like a next-generation SSD, that control data placement and management of the NAND, and we elevate them to software and offer that as the solution. We've scaled the solution so that it can be consumed in very high quantities, which is necessary in a hyperscale environment. This solution has been fully released to production. It's available today. I wanted to talk about the elephant. Charlie talked about the elephant, and the AI market is affecting this market in a big way.
With AI, there's a tremendous amount of CapEx spending going on, and it's created this problem in the industry. GPUs are on allocation. Everybody's fighting to get a GPU. You've heard about HBM memory: imagine all these companies spending money to attach HBM memory to their GPUs. It's created a run on HBM memory. The same thing has happened to the DRAM market. This tightness has now hit the storage media market. There's a tremendous amount of pressure on the hard drive market and on the flash market. The hard drive industry hasn't done much over the past decade or two to increase capacity. They're sold out. There are 52-week lead times, and they're under a tremendous amount of pressure.
This has now come to the flash market as well, as the hyperscalers look for any solution they can get in order to obtain storage media, increase the size of their estate, and address these AI needs. This has created a tailwind for us. There's a tactical tailwind now where hyperscalers are open to new solutions much more than they were in the past, because they need to increase capacity any way they can. That's a market condition we're taking advantage of. In addition to the short-term tactical tailwind, we have the long-term structural tailwind of NAND flash versus hard drives. If you think about a hard drive, it's a complicated electromechanical assembly, a technology first released in the 1950s that has become very mature.
The cost efficiencies in hard drives, what they're doing in the factories and how they're improving things, are raising capacity very slowly and lowering the cost per bit very slowly. NAND flash is a semiconductor commodity. Process and technology improvements in semiconductors lower the cost very quickly. Over the long term, these innovations mean we see NAND prices declining roughly twice as fast as hard drive prices. This is the tailwind behind the entire offering. But we don't need NAND prices to become cheaper than hard drives for this offering to work, because the offering has inherent value. Many parts of the offering are simply much better than any solution they can buy today. Versus hard drives, we're 10x the density.
We take 5x less space, use less power, have fewer failures, and last twice as long. What does this lead to? A total cost of ownership that's much, much lower for NAND flash. It also means the ancillary equipment around the solution is lower with flash. Imagine, if you buy a bunch of hard drives, how many controllers, network ports, racks, power supplies, and how much cooling you have to put around that archaic architecture, right? Flash is much more efficient. We have this other advantage. If you look at this graph with all its interesting and precise numbering, hard drives today are scaling to about 30 terabytes. This year, we will announce a 300 terabyte module. It wasn't that long ago that the average SSD or DFM you could buy was smaller than a hard drive.
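As a back-of-envelope illustration of why those multipliers compound into a much lower total cost of ownership, here is a rough sketch. Every number below is an assumption chosen only to mirror the ratios mentioned above (10x density, roughly twice the service life); the wattages and capacities are illustrative, not disclosed Pure or industry figures.

    # Back-of-envelope TCO sketch; all figures are illustrative assumptions.
    PETABYTES = 100

    hdd = {"tb_per_drive": 30,  "watts": 9.5, "life_years": 5}
    dfm = {"tb_per_drive": 300, "watts": 20,  "life_years": 10}

    def units(cfg):
        return PETABYTES * 1000 / cfg["tb_per_drive"]

    def power_kw(cfg):
        return units(cfg) * cfg["watts"] / 1000

    for name, cfg in [("HDD", hdd), ("DFM", dfm)]:
        print(f'{name}: {units(cfg):,.0f} drives, {power_kw(cfg):,.1f} kW, '
              f'replaced every {cfg["life_years"]} years')
    # Fewer drives also means fewer controllers, network ports, racks, and
    # power supplies around the media, compounding the savings.

Under these assumed numbers, the same 100 PB needs roughly a tenth as many devices, a fraction of the power, and half as many replacement cycles, which is the shape of the TCO argument being made.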
This year, we will be 10x larger per unit. We expect that to continue when we release our 600-terabyte module next year. The hard drive market is a very weird place: 74% of the entire global supply of hard drives is consumed by a small number of companies. Think fewer than six companies buying 74% of the hard drives produced worldwide. This is a market that's ripe for disruption. As we work with these hyperscalers and they change their architectures, this is going to have a major and immediate effect on the hard drive industry. Let's talk a little bit about where our solution goes. In typical hyperscale infrastructure, they tend to have the horizontal storage pools that Charlie Giancarlo talked about, addressing the data needs of cold, warm, and hot tiers.
A hot tier is something that's getting accessed all the time. You need very high performance, and maybe you need low latency. The warm tier tends to be a combination of SSDs and hard drives, and the cold tier tends to be hard drives, and sometimes it gets super cold and becomes tape, right? Our solution has a way to span all of these tiers. Because we've elevated the solution to software, we can tune it to meet the needs of any of these tiers: different sizes of DFMs, different software tuning, different ways to integrate with the hyperscaler. We are the only company with one solution that can move across all of these tiers. We're not just competing with hard drives, we're also competing with SSDs. As Rob was talking about, we don't really make an SSD. A DFM is not an SSD.
We've taken the functions that are complicated, that change a lot, that control the NAND flash, and moved them to software. Effectively, we've taken something that's found in hardware, that's hard to change, that's complicated, and we've elevated it to software. That's made it easier for us to do things like qualify different NAND vendors. One of the things that I didn't expect when we started this journey was how much hyperscalers value the agility. If you think about what's going on in hyperscale, it's a huge software development environment dominated by software engineers. They change their infrastructure, meaning their software and how they control storage, all the time. They really value our agility. SSD companies don't have that system expertise. They don't talk about the storage stack the way hyperscalers do. In our enterprise solutions, we've done the entire storage stack.
That's not what we're offering here, right? But because we've done the entire storage stack, our engineers think at the system level. They can talk about system ideas. Our agility, our ability to change the offering really quickly: offering different protocols, different performance levels, changing how we handle metadata, how we pass errors forward. It's really been a very strong plus for us versus using an SSD. On the technical advantages, I talked about higher capacity and density, the flexibility of the software, and our ability to work with different NAND vendors. On the structural side, I talked about this tactical win we have right now, where there's a lot of froth in the market and people are very open to new solutions, and that's giving us a tactical advantage. I haven't talked at all about our supply chain advantage.
When you're purchasing something at very, very high volume, the way the hyperscalers are, supply chain is super, super important. They're not interested in a solution that's hard to buy, that they can't get quickly, or that they can't get a lot of. We have signed definitive supply agreements with Kioxia, Micron, and Hynix. Those came out in press releases over the past year. In the total NAND market, there are only six or seven suppliers of NAND flash, and only five at high volume. We have three of them signed up. If you look at the total NAND market and how much NAND flash each one makes: 15%, 13%, 20%. All of these are in our solution. With one solution, we offer the hyperscalers access to almost half the NAND flash in the world. We're the only solution like this.
If they use an off-the-shelf SSD, they get access to like 15%-20% of the market. This is an environment that's really under a lot of pressure. We're the only solution in the world that offers access to almost 50% of the market. That's highly valued by our customers. Thank you. I'll bring Paul back up to wrap us up.
Thank you, Bill, my hero.
As they bring up the chairs for Q&A, Charlie Giancarlo is going to have just an initial comment about one particular topic.
Yeah, thanks, Paul. As everybody's coming up, I am going to make an announcement. Of course, we're online, so, you know, this is an open forum. As you might imagine, as we're working with our hyperscaler customer and hyperscaler prospects, they're very sensitive to their own intellectual property and their own confidential information. As we've talked more and more with them, they would like us to be as sensitive as they are to having that information out in the wild, if you will. From this point on, whether it's here, in our earnings calls, and so forth, we're no longer going to refer to any of our customers or prospects by name, and we're not going to be explicit about any specific numbers relative to those hyperscalers, that is, shipments, et cetera. We're working very closely with them.
They've made their interests and concerns known, and we're going to respect our customers' intellectual property. I want to put that out there because I know there'll be a lot of questions here. We've perhaps been a bit more explicit in the past, and we're going to be more cautious with their information as we go forward.
Okay, let's jump into Q&A. Let's start with Howard.
Great. Hello.
Yeah.
Great. Thanks, everyone. Howard Ma with Guggenheim Securities. Thank you for a very informative and tightly packed presentation. Charlie, when I think about the evolution of your product portfolio to what is enterprise data cloud today, I can't help but think about Jensen at Nvidia talking about a fundamental re-architecture of enterprise data centers. I believe he said it's something like a 10-to-15-year opportunity.
Yeah.
When I think about the advantages of EDC, and I wrote some of these down just to say them back to you: unified software management plane, removing data silos, different storage tiers from AI to archive, which is a phrase I really like, infinite scalability, multi-cloud, governance, cyber resilience. These make for a highly compelling and, I believe, truly differentiated approach. When your sales teams and your technical teams go into your enterprise customers today, which I believe one of you called the North Star, what is the level of resistance versus acceptance of this automated approach that you're delivering? Are we at a tipping point, do you think, where the use cases compel this automated approach? Finally, when you talk about the optimism in your fiscal year guide, when you raise your fiscal year guide, how much of that is due to EDC increasing pipelines?
Yeah, multi-parts are okay here.
Yeah.
Thanks, Bill.
It's a great question, and things are moving very rapidly in the market in terms of your question: what is the resistance? What's the openness? Data storage has traditionally been a very slow-moving market. Resistance to any modification of the way customers bought, or even sellers sold, data storage was very high. One of the reasons why I opened the way I did is that, regardless of the hype around AI and how much you're going to sell into it, the benefit of AI for us right now is that customers are rethinking their architectures. It's not about how much we sell directly into AI. It's about how open customers are to rethinking the overall architecture of their environment. The discussion around enterprise data cloud has really opened up.
It would not be correct to say that a majority of our customers are now, you know, flocking to this whole idea. What we are seeing is leading-edge customers really listening, and it's starting to affect the way they think about purchasing. This is at higher levels inside an IT organization, not at the lowest levels. I don't think it has yet fundamentally changed individual storage admins, although I think it will in the near future. Certainly, we are speaking to higher levels in the organization, and we are finally talking to them about franchise opportunities rather than individual use case opportunities. It's making a difference.
Aaron, please.
Actually, let me just finish.
Oh, sorry, multi-part.
That's what's giving us some optimism. You know, it's not just optimism. Obviously, we have pipeline analysis and all of the individual financial analysis that we go through when we forecast. We are seeing this turn into bigger opportunities.
Let's go to Aaron and then Jason.
Perfect, thanks. Aaron Rakers at Wells Fargo. I appreciate you doing this day. Paul, you should never have said that about multi-parts. The first part of my multi-part: I can appreciate that we don't want to talk about the hyperscaler customers. That's no different than what we've heard from a lot of other vendors that participate in this market. On the comment that you've now fully released the solution to production, can you just remind us again on the pace of deployment? I know last quarter you talked about pilot deployments, and importantly, how do you see the breadth of these opportunities evolving beyond maybe just that one customer, without naming any customers, as we look into next year? I'll throw my second question out there, just building on that earlier question about this Fusion layer. Tarek, you're on the stage.
How does that get priced? Is there a software element to what's evolving or deepening in the Pure Storage story that we should all be thinking about here as we move forward?
Let me touch upon that second part first, and then I think, between Kaz and Bill, et cetera, we'll touch upon the first part. We are not monetizing Fusion directly. Fusion is now part of the Purity software. We call it out because it's a unique new addition, but it's part of Purity. When we sell software that's embedded in the product, we don't charge extra for it. I can think of a number of analogies. It creates a network effect. There has never been a reason in storage why the next array a customer buys should be from the same vendor, unless it's economics or a TLA of some type, right? In fact, every use case, as Prakash mentioned, is another jump ball.
You know, it's feature versus price versus whatever. We think this creates a network effect, which is valuable for the customer because it makes their lives easier. It creates more consistency. It will create more efficiency for them. For us, if we reduce the amount of effort it takes to sell the next array, it'll give us a larger market share or a larger wallet share.
I made a comment in my presentation that we're released to production. That is a commentary on the maturity of our offering: we've gone through our testing and released our solution to production, which means it can be mass-produced. It's not a commentary on where our customer is in their journey. I can't talk about their status. We continue to have engagements with others. I can't go into too much detail about where we are with each one, but we are having conversations, and we continue to work with them.
We are more convinced than ever that this is the time. You know, the transition's starting, and over the next few years, there's no reason for any of the hyperscalers not to make this switch. You saw what it's going to do to the disk market. It's just going to wipe out disks, because flash is a better solution advancing at a faster rate.
I think another way to think about this is you only have to believe that we are a third alternative to what are two alternatives today: hard disks and SSDs. The question will be, what type of market share out of that do we get? Right now, it's all opportunity for us, right? I mean, two quarters ago it was literally zero. It's all opportunity.
Okay, let's go to Jason, then we'll come over here to [Asya], and then, okay, Erik, after that.
Yeah, thanks, Paul. Jason Ader with William Blair. Good to see everybody. My question is on the hyperscalers. Can you just walk us through, maybe historically, what the split between HDDs and SSDs was for the hyperscalers? Fast forward a few years, Kaz, to your point: how does that mix shift between HDDs and SSDs? What piece of the SSD market do you think you can get, and where do you fit into that mix? I imagine they're going to continue to use a mix of media.
Yeah, I can talk about that for a minute. Obviously, in the distant past, they were all hard drives. Recently, let's say within the last 10 years, they moved almost all of that hot tier at the top to SSDs, to TLC-based SSDs. TLC is a version of NAND flash; it's the fastest. That is the first layer that many of their accesses go through. Most of that in hyperscale has already moved. Then you get to the warm, hard-drive space. They're starting to integrate more and more technology, including ours, in the warm space and then growing down into the colder and colder spaces. They are in that transition today. Does that answer your question?
Yeah, I mean, I'd probably guess single digits for SSDs as a percent of exabytes.
Yeah, obviously it depends on which hyperscaler it is. Some are further along on their journey than others, but there are some hyperscalers that are still in the single digits on NAND flash.
So where do you guys fit? The TLC SSDs, they don't go away.
Right, but our solution can be used in that layer as well.
Okay, you are actually more in the warm layer, the QLC SSDs.
Warm is where there are a lot more exabytes, and that's sort of the best opportunity for us right now. Yeah, we want to take the hot layer, but the warm layer is much larger, and the economics are just fantastic for us. Then you move down into the cold layer. If you're a hyperscaler, you're probably going to start by looking at the warm layer, because you say, all right, I already have SSDs in the hot layer; I get less of a boost by switching that. You're probably going to start at the top of the warm layer and move down, and as we do a good job for them, move up. Charlie's right that we're never going to have 100% market share. None of these guys will ever bet on a single solution. They always keep their optionality.
Our job is to be the best supplier and get a huge portion of the market share, but we'll never be 100%. We'll never be 90%, right? They just would never, ever do that. We want to play for every bit that they have, and we want to capture as much of that as possible.
Jason, I was just going to add one more thing to that. Part of your question was, historically, how much of the deployment mix in the hyperscalers has been hard disk drives versus flash? I think this is an area where the hyperscalers have, almost paradoxically, lagged the enterprise and mainstream adoption of flash, right? They have been able, through R&D, through just extraordinary means, let's say, to squeeze the last bits of optimization out of hard disk drives. We're seeing that start to change, and when it changes, I believe it's going to happen in a very quick way, right? The hyperscalers get incredible economies of scale through standardization. It works against their operating principles and economics to shift gradually, 5% here, 5% there, over time. They design their data center architectures in generational horizontal designs for a reason.
Once they design that, that now gives them a template to go and replicate and operate with uniformity. To Bill's point earlier, the hard disk drive market is an interesting place right now. We clearly have our view on how that plays out over the next couple of years. I think it plays out in pretty discrete steps.
One other thing I'd add, on the question that opens all of this up. In talking with some of the top engineers at the hyperscalers, they will point out to us how much difficulty they have in using all the capacity on the hard drives, right? As the hard drive grows in capacity but doesn't grow in performance, they need the data to be colder. That's another advantage: if you remember Bill's chart, as our capacity goes up and up with flash, they can use all those bits. If the hard drive vendors came along tomorrow and said, "Hey, we've got a new drive that's 40% larger," the hyperscalers are like, "I can't use those bits because I'm not getting more performance," and they lose the economic benefit.
It also means they'll tell us things like, "We're overbuying the number of hard drives by 30%-40% to get extra performance. As we start to look at flash, we can shrink that number." That would dampen the growth of total bits, but given that there are something like a thousand exabytes to play for, we have a long way to go before we're ever worried about that part of it. You have to recognize that disk as a technology is coming to the end of its life. It is not improving the way it used to, and people have been twisting themselves in knots trying to use it better for a decade now. That's why this transition is going to happen and why it's going to flip.
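To make the overbuying point concrete, here is an illustrative back-of-the-envelope sketch. Every number in it is a hypothetical assumption, not a figure from the presentation; it only shows how a flat per-drive IOPS budget forces performance-driven overbuying that worsens as drive capacity grows.

```python
# Illustrative sketch (hypothetical numbers): hard drive IOPS stay roughly
# flat per drive, so capacity growth alone cannot reduce the drive count
# once the workload's performance need dominates the sizing.

def drives_needed(capacity_pb, drive_tb, workload_iops, drive_iops):
    """Drives required for capacity alone vs. for performance alone."""
    for_capacity = capacity_pb * 1000 / drive_tb  # 1 PB = 1000 TB
    for_iops = workload_iops / drive_iops         # performance-driven count
    return for_capacity, for_iops

# Hypothetical 100 PB warm pool needing 900,000 IOPS; assume ~200 IOPS per
# hard drive regardless of its capacity.
for drive_tb in (20, 30, 40):
    cap, perf = drives_needed(100, drive_tb, 900_000, 200)
    overbuy = max(0.0, perf / cap - 1) * 100
    print(f"{drive_tb} TB drives: {cap:,.0f} for capacity, "
          f"{perf:,.0f} for IOPS, overbuy {overbuy:.0f}%")
```

With these assumed numbers, 20-terabyte drives need no overbuy, 30-terabyte drives land in the 30%-40% overbuy range quoted above, and 40-terabyte drives would sit near 80%, which is why a bigger hard drive alone doesn't fix the economics.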
One quick last one, Paul. On TLC, the hot layer, versus QLC: do the TLC SSDs have higher performance than what you guys are offering with QLC?
Bill?
TLC offers lower latency access than QLC. But one of the things that we're seeing is that you can address the very high performance workloads with QLC. In our AI offerings, we talk about Exa and FlashBlade. FlashBlade is one of the highest performance storage arrays in the world, and it's all QLC. We're confident that we can address a large portion of the market with QLC. In some ways, it doesn't matter. We have expertise in TLC; we've been shipping it on our DirectFlash modules since 2017. If that's what it takes, we'll do that.
Thanks, Jason. Good multi-part. [Asya]?
Yeah, sure. If I can just dig into the NeoClouds: that opportunity doesn't really get a lot of attention. I mean, I heard a lot about your ScaleAI. Just help us understand. There are some competitors there who are obviously dominating a little bit in that space, at least in all the media; I've heard their presentation. How does Pure Storage play in that space? Can you maybe frame that opportunity and how big it could be for you guys as you scale this out? Thanks.
It's an interesting new market. You know, in the past, and also on my earnings calls, I've tried to put it in perspective in terms of overall size. In a $50 billion storage market, not even counting hyperscalers, it's still just a few billion dollars, right? It's an interesting new area. GenAI evolved from the HPC space, right? The high-performance computing space has always been an interesting niche in a number of different businesses. On the compute side, you perhaps remember Cray, right? That, you know, became part of HPE. In storage, it was a niche space as well that a couple of private storage companies played in. One of the things about HPC was that it was historically sort of like Hotel California.
An organization would go in and could never extract itself, because the values in the HPC space, which was largely focused on scientists, were very different from enterprise values. You would get caught up in the benchmark competition taking place in that environment, which really didn't mesh well with an enterprise market. We chose early on to stay focused on the enterprise market. Then GenAI takes off, and all the practitioners come out of the HPC market. We were behind. I'll be very direct: you know, we were behind in that space. We have always, I think, exhibited great discipline in making sure that we did not build bespoke software for every new segment that came along. We looked at what capabilities we had in the company, and we were able to rally around our metadata engine.
Using that as a way of entering the market, we introduced FlashBlade Exa earlier this year. We believe FlashBlade Exa is now benchmarked as the fastest AI storage engine in the world, by several factors. You know, we still have work to do to really get to the head of the pack on features and so forth. Rob, maybe you want to take it from here.
Yeah, absolutely. Just to add a few comments to what Charlie Giancarlo said and reinforce the point: we've intentionally, through the company's history, not started with a focus on HPC, but rather, as I mentioned in my presentation, on the needs of the broader enterprise. What we have seen through now multiple new markets is the opportunity to leverage those investments, the IP we've built, in different ways to address the needs of different markets. Said more simply, companies that start in the HPC space have a lot of trouble moving into the broader enterprise. We've been able to go the other way and say, hey, let's start in the enterprise and now take some of those fundamental IP blocks and address the needs of the NeoClouds in this case.
A driving principle behind how we built FlashBlade Exa from the IP in FlashBlade is that we stepped back and looked at FlashBlade's core strengths. It's the software. It's the scalability it delivers in the metadata. For the enterprise, right, like any design decision, you design for a certain space of possibilities. We deliver really, really good performance, really, really good economics, and really, really good efficiency that covers 99.5% of the needs out there. What we have now done is to say, hey, if we take the software that's in FlashBlade, that scalable metadata engine, and pair it with an open hardware architecture, we have a very compelling offer that supercharges the performance to go after that last half a percent.
That's really the evolution we've gone through in terms of addressing the needs of the NeoClouds. As Charlie Giancarlo said, we just went generally available with that solution in Q2, and we're in early discussions with a lot of the top names, with really good initial signs of interest.
Did you have a follow-up, [Asya]?
Yeah, I mean, how should we think about the ramp of that as it goes through? I know you've put a lot into R&D and investments this year. Typically, when you've had a strong year of R&D investment, that's then led to strong margin improvements the following year. Just give us a sense of the investment you're putting in this year and how we should think about the ramp of that in terms of profitability into next year.
We are in conversations with roughly a dozen NeoClouds on Exa already. I expect that we'll win some of those, and I think we'll increase our win rate over the course of the next 12 months.
I might also step back and look at the broader context of our R&D investments as a whole, right? I spent a good part of my discussion unpacking where we add differentiated value across the stack and how we package that for the needs of different markets. If we step back and look at where we are deploying R&D, where we are deploying engineers, it's into layers like Fusion, like DirectFlash, like our Purity software. Yes, a lot of those benefits flow directly through to the ScaleAI players, the NeoClouds. A lot of those benefits are driving the adoption of enterprise data cloud, as we discussed with Howard. A lot of those benefits are directly transposable to the needs of the hyperscalers.
It's a little bit hard to draw a direct line from R&D investments to the growth of a specific product. As Charlie mentioned before, we've been very, very intentional about how we direct R&D investments to get maximum leverage out of them across the multiple market segments we're addressing.
I think we look at the enterprise data cloud as a gigantic opportunity for all of our established business. We have several years of investment we're going to put into that: new features, new capabilities coming out, evolving it and making it better year after year. The biggest focuses for R&D for us are hyperscale and enterprise data cloud. AI, which shows up in Exa but is really broader than just Exa, is probably the third.
Right. Yeah. I'll mention one other thing that may not be so obvious. Our focus on supporting the larger breadth of enterprise and enterprise features is, in fact, starting to affect the way that purchasers specifically for AI, like the GPU clouds, are thinking about things. What I mean by that is, remember, one of the things we're well known for is Evergreen, the ability for customers to upgrade or downgrade relatively non-disruptively, without data disruption to their environment. We're able to do that, or we're promising that we'll be able to do that, across regular FlashBlade, FlashBlade S, and Exa.
This is allowing us to have a much broader and more consistent range of AI capability for smaller clouds, larger clouds, and clouds that are growing and are not quite certain what their needs are going to be tomorrow. It's boosted interest in our overall product line, not just in Exa.
Yeah, I'll comment on that. I was recently speaking with one of these research labs. If you think about how things get funded in university research labs, there is grant money tied to projects; there are funds allocated to specific projects. You buy it, you sweat it, and five years later it's old, right? Then it's about raising grant funds again. This concept that Charlie Giancarlo mentioned of bringing Evergreen into that environment, where the system they've sweated gets better and stays current over time as part of that same grant proposal and funding, is something they've never seen before once they get a taste of it. No one in the supercomputing market has really approached them with that value proposition. I'm starting to see that resonate with the grant funding that goes into some of these AI labs.
Let's go to Erik and then Wamsi and then Tim.
Awesome. Thank you guys for hosting this today. Super helpful. Erik Woodring from Morgan Stanley. I wanted to readdress the hyperscale question. Kaz, Bill, Charlie: when we think about price and performance, the performance case is very clear with DFMs and Pure Storage. When I look at the HDD market, if performance were the factor that could disrupt HDDs, so to speak, maybe you would see something like greater adoption of dual actuators in HDDs. You're not necessarily seeing that. It still seems like it comes down to price. I would love it if you could elaborate on how you are increasingly able to compete on price as we look at this opportunity in the warmer and colder layers of the hyperscale tiers, because clearly, if you could bring price with performance, that seems like a no-brainer to me.
Let me start, Bill, and then I'll let you go at it. I'd like to change the conversation, because what price are we talking about, right? If you're talking about the raw price of the raw bit on a hard disk versus the raw price of the raw bit on flash, there's no competition; that's consistently what a lot of the hard disk manufacturers will keep arguing. But it's not that price. It's the total cost of ownership of the system after you've put the whole system together. When the total cost of a hard drive environment is dominated by everything around the hard disk, and not the hard disk itself, we're not actually competing with the hard disk. We're competing with all the equipment that goes around the hard disk. Bill, what's your take?
Yeah, so if you pay attention to the AI market right now, you'll notice that when they talk about data centers, they're not so much talking about how many GPUs go in there. They talk about how many watts. How many gigawatts? I'm going to do a 10-gigawatt factory or a 10-megawatt this or that, you know. The reason the conversation has changed is that power is highly constrained. The cost of power is very high, right? Because it's so constrained across the country and the world. There's also the opportunity cost if you overrun the envelope, right? If your data center uses more power than is available to it, now you have an exponentially high cost that is very debilitating for the people setting up data centers. The power part of the cost equation is huge, right? And our solution is so much more power efficient.
Fitting within that power envelope, lowering the amount you spend on cooling, and shifting that power resource to something else is very valuable to our customers today.
The flash modules are roughly 10x the size of a hard drive now. The $20,000 or $30,000 server in which you house hard drives, you need one-tenth as many of those. The top-of-rack switches, you need one-tenth as many. The power supplies and power distribution units, one-tenth as many. The human effort to replace broken hard drives, and the disruption that causes, you need less of that. And because hard drives break more often, you carry extra inefficiency in the erasure coding you use to make the data resilient; you gain efficiency there too. All those efficiencies add up to where the cost and the flexibility are much better. That's with what we're shipping today. Then you go forward and say: today we're 10x the size, next year we'll be 15x the size.
That's assuming they grow 33% and that people can use the extra space. The year after, we'll get to 20x or 25x the size. You just completely wipe out that cost difference. And if you overflow the power budget, the cost is gigantic. If you think about an enterprise data center, they all have power budgets coming in. When you have to contact ConEd or PG&E or whoever your utility is and say, I need another transformer and I need you to dig up the street and upgrade the power, forget it; your costs are in the millions. Storage is, generally speaking, the number two consumer of power in data centers. We're taking that 25%-30% of data center power down 80%, 90%, or more. That's a gigantic advantage when everything is budgeted in watts.
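As a rough illustration of that system-level argument, here is a minimal sketch. All of the prices, wattages, and ratios in it are hypothetical assumptions, chosen only to show the shape of the math: once servers, switches, and power scale with device count, a 10x-denser module can win on total cost even when its raw dollars per terabyte are higher.

```python
# Minimal sketch with hypothetical numbers (not Pure's figures): the total
# cost of a storage pool includes the servers and power around the drives,
# and those line items scale with the number of devices.

def pool_cost_musd(capacity_pb, device_tb, devices_per_server,
                   server_cost, device_cost, device_watts,
                   dollars_per_watt_year, years):
    devices = capacity_pb * 1000 / device_tb  # 1 PB = 1000 TB
    servers = devices / devices_per_server    # enclosures, NICs, switch ports
    capex = devices * device_cost + servers * server_cost
    power_opex = devices * device_watts * dollars_per_watt_year * years
    return (capex + power_opex) / 1e6

# 100 PB pool over six years: 30 TB hard drives vs. 300 TB flash modules.
hdd = pool_cost_musd(100, 30, 60, 25_000, 500, 10, 2.0, 6)
dfm = pool_cost_musd(100, 300, 60, 25_000, 9_000, 15, 2.0, 6)
print(f"HDD pool:   ${hdd:.1f}M")   # ~$3.5M with ~3,333 drives
print(f"Flash pool: ${dfm:.1f}M")   # ~$3.2M with ~333 modules
```

Even with the flash module priced here at nearly twice the raw dollars per terabyte, the one-tenth device count collapses the server, networking, and power line items; longer module life and the reduced performance overbuy discussed earlier, which this sketch ignores, would widen the gap further.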
Eric, let me just put things in perspective for you, because we have to think about the hyperscaler industry as a very capital-intensive industry, right? You know that most of them spend about $7 billion to $10 billion per year on CapEx, and the vast majority of that spend goes into data centers. When you are managing that level of investment, you want it to last for a very long time. All those costs that Kaz spoke about create a limit on how long your data center investment will last. We're pushing that limit out, because we are a lot more efficient in the technology we provide, and it addresses the power constraints data centers face today. We have to think about it in those terms. From a capital intensity standpoint, we improve the capital intensity of the hyperscalers.
Our technology has a much longer lifetime. We've been shipping DirectFlash modules for over nine years now, and the ones we shipped nine years ago are still amazingly reliable compared to hard drives. Whenever we talk to people about our annual failure rates, we're including those as well as the brand-new ones. That extra lifetime gives them a lot of options most people aren't even taking advantage of yet, which will further extend the gap.
Okay. That's super, super helpful. I'd love to put price and performance to the side and ask: where do the bottlenecks exist when we talk about go-to-market and disrupting all of this? This has been an established market for years. You guys are the new entrant; you're the disruptor. When you put those factors to the side, not to say they're not important, how do you make your entrance? What is the selling path into these hyperscalers, and how long does that take? I'm not asking about any specific customers, just: what is the bottleneck? Maybe if we touch on that.
Okay, thank you. First off, I want to start with: we've been doing this for years. It feels like we started last month, but we've been doing this for years. We set up a separate hyperscale line of business two years ago. We completely separated it because we had had a bunch of success and were well into this. Getting into a hyperscaler involves showing value and improvement to the engineering community; that's really how they think about things. You have to go to their engineers and their engineering leaders and get them bought into the design and the approach. As far as disruption goes, most of the hyperscalers are very into disruption. They want to see a new technology that transforms and gives them a competitive advantage. They're very open to talks of, hey, what if we changed everything?
They do like that. The nice thing is our technology doesn't require them to change everything, right? It's easy to slot in. We have an interface that they're used to seeing. If they want it to be a little different, we can change it for them. We can disrupt the cost and the performance and the reliability without being hard to ingest.
I will say that was the most optimistic I've heard Bill talk about this space. We do battle, as we would in any enterprise organization, with not-invented-here syndrome, or a "well, we've always done it this way." This is going to change a lot, a lot of things, and we run into that in almost every situation. Of course, they have their own pace. When they design a next-generation data center, it's a product for them. It's like Apple developing the next-generation iPhone: everything is engineered for iPhone 16, iPhone 17, right? Everything from the camera to the software to the marketing. That's how the hyperscalers design their data centers. We also have to get on their cadence, right?
You either get at the front end of their cadence and therefore can be part of that design process, or you've missed it and then you have to wait for the next time that comes around.
Remember, a hyperscaler never changes their data center. They plan a data center. They design every bit of equipment that's going to go into it, every watt, every bit of cooling. They design it, they build it, and it stays the same. Five, six, whatever years later, they might keep the shell of the building, but every system in it changes: all the compute, the cooling, everything. That's very different from enterprise, where they have a data center and are rolling in new equipment all the time. It really is about intercepting that process of designing the data center, which, as Charlie Giancarlo said, is like a product for them. That does also take time.
Great. Let's go to Wamsi.
Yeah, thank you so much. Wamsi Mohan, Bank of America. Thanks, everyone, for doing this. A few questions, I guess. One is just following up on the go-to-market question. If we think about product disruption, I think Pure Storage has shown product disruption now for many years, but when we think about market share, the incumbency that some of the legacy players have in the storage market has been hard to overcome in some ways. Do you think that now is the time where this re-architecting of the data center has reached a tipping point, where you're able to really go make a change and drive incremental share? Do you think that your go-to-market actually has to change based on the product portfolio improvements? How do you go about capturing incremental share today?
You know, briefly, I think, and then I'll let some of the other executives speak: yes and yes. One is, I think AI has been a boon in a sense, because it's forcing customers to rethink their overall architecture at a higher level. The one thing about storage was that decisions were pushed down to the storage admin. No one at the higher levels of IT really wanted to discuss it, partly because it was so specialized, right? Now they have to rethink their overall data architecture. That's a senior-level strategy discussion and planning exercise. Those conversations are opening up this opportunity. I think we're fortunate that we were going down the path of Fusion and Enterprise Data Cloud many years ago, because it's a better way for enterprises to manage their environment.
AI has come along at a point in time that's also getting them to consider these things at a higher level. I think that's been very positive for us. At the same time, yes, the go-to-market now is very different. I'll go back to storage being bought and sold the same way for decades. Your best sellers are used to speaking about it in a certain way; they know for a fact that that's the best way to sell storage. What we're saying is: no, that's no longer the best way. We now have to speak the language of data, data management, and all of that. The training, the retraining, the focus on a franchise sale rather than an individual use case sale, this is all very different. It's a lot of investment. We've talked about investment in technology.
There's a lot of investment in the go-to-market that's going on.
Yeah, I want to elaborate on what Charlie said, because this is a beautiful question. The world is changing because AI provides the impetus to standardize storage, right? Therefore, in the way we go to market and sell, we have to have a conversation that enables standardization on the Pure Storage platform. That means if you are going to sell to a large organization, I'm picking your bank, for example, then the conversation also has to deal with the legacy environment that exists, right? Part of bringing together franchise deals is helping customers migrate away from the legacy environment and dealing with potential write-offs that may ensue should they standardize on the Pure Storage platform. It's a very different sale. You no longer sell deep down in the organization to the IT department.
You have to involve the CFO because there's a lot of financial engineering required. There are a lot of financial implications attached to the sale. We are on that journey.
I'll double-click again on how we project it changing over time, right? As you think about this platform effect, think about what SaaS companies have gone through: yes, there's the landing and the change disruption cost. Given Evergreen and our long-lived hardware, people have asset depreciation cycles, five years, et cetera. For some of our existing customers, we've already seen them adopt longer asset depreciation cycles. When you get to seven or ten years, it fundamentally changes the unit economics on cash value and asset depreciation. Then think about asset utilization: with SaaS companies, you end up with customer success driving expansion and utilization.
As you start thinking about the platform effect, the go-to-market will evolve into more of this land, expand, customer success motion. The capital asset efficiency you get on some of our long-lived components with Evergreen is tremendous. You have to get customers into that world, but it creates this nice platform effect.
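To put rough numbers on the depreciation point, here is a minimal sketch with a made-up purchase price; the five-, seven-, and ten-year lifetimes are the ones mentioned above.

```python
# Minimal sketch, hypothetical capex: straight-line depreciation over the
# asset lifetimes discussed above.

def annual_depreciation(capex, useful_life_years):
    """Straight-line depreciation expense per year."""
    return capex / useful_life_years

capex = 1_000_000  # hypothetical storage purchase
for years in (5, 7, 10):
    print(f"{years}-year life: ${annual_depreciation(capex, years):,.0f} per year")
# 5 -> $200,000; 7 -> ~$142,857; 10 -> $100,000. Stretching the depreciable
# life from five to ten years halves the annual expense on the same asset.
```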
I'm going to follow up on the SaaS company comment. A lot of software names have announced zero copy data sharing. Is that something that you think impacts the footprint that you have at these companies at all?
It's a great question. We announced something similar to that. Two things. One is, as I mentioned, with Data Cloud we can allow customers' production data to be available as their data warehouse rather than copying the data. We also have zero-move tiering, which is a similar kind of concept: customers can have a balance of performance and low cost for storing information and leverage that across the board. I guess our point of view is that however we're able to help our customers economically make better and more efficient use of their data, it's going to inure to our benefit. By the way, we've been doing this for years. Every time we come out with a major new software release, we generally improve our data reduction, our data compression.
Effectively, we've taken a customer that might have bought 100 terabytes five years ago, and now they're storing 200 terabytes. We didn't sell them another 100 terabytes. We gave them free software upgrades that doubled their overall capacity. We're happy to do that because it gets us great scores, and we grow nonetheless.
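Here is a minimal sketch of that effective-capacity math. The transcript gives only the 100-to-200-terabyte outcome; the raw capacity and reduction ratios below are assumptions used to illustrate it.

```python
# Hedged illustration: the same physical flash stores more logical data as
# the software's data reduction ratio improves. All figures are assumptions.

def effective_tb(raw_tb, reduction_ratio):
    """Logical capacity delivered by raw flash at a given reduction ratio."""
    return raw_tb * reduction_ratio

raw = 50.0  # hypothetical physical terabytes behind the original purchase
print(effective_tb(raw, 2.0))  # 100.0 TB logical at an assumed 2:1 ratio
print(effective_tb(raw, 4.0))  # 200.0 TB logical if reduction improves to 4:1
```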
Wamsi, would you please pass the microphone over there to Tim?
Hi, thank you. Tim Long at Barclays. Two as well, if I could. First, I wanted to touch on Evergreen//One and the SaaS offerings relative to hardware. It's been a good transition, but I'd say it's probably been a little unpredictable. Can you just talk about that transition in the overall model you're talking about today? As AI gets to the edge, does that change? I'm guessing the decision now is CapEx versus OpEx, not necessarily technology. When we move forward into this new era, do you think that SaaS versus CapEx sale changes? Separately, I did want to touch on the fact that Pure Storage traditionally does very well with higher-end enterprise SaaS companies and banks and things like that.
Just curious how those companies are looking now, because some of those SaaS companies could start to look like NeoClouds or, you know, much larger in the future, and what path those companies may be on.
Thank you. Let me start, and Prakash can maybe pick it up. It's a very dynamic market, for sure, in many ways. You're right. Our ability in the past to predict what percentage of our sales would go down the route of Evergreen//One versus a traditional product purchase has not been great; we've not been able to be highly accurate. Part of the issue is that it's driven a lot by macroeconomics: everything from interest rates to uncertainty in the market and uncertainty of growth. The more the uncertainty, the more they like Evergreen//One. The more certainty, or at least the more the sense that they have certainty, the more they like a standard product sale.
We're still in the early phases, I think, of enterprises appreciating a SaaS model in a private data center environment. That understanding, and when I say that understanding, I mean up through the financial organization inside the enterprise, is still very immature. We're also seeing the immaturity, if you will, of a market, and that tends to make prediction or forecasting a little bit more challenging. I would say all those things have played into it. It does continue to grow, but the specific growth rate, quarter in, quarter out, even annually, is buffeted by all those things, which makes it difficult even for us to predict.
Yeah, if I break it out into the two parts of your question, right? We've had broad adoption across a wide variety of verticals, but some people have been adopting for the financial value and other people for the operational value, right? On the financial side, as economic uncertainty increases, the offering is much more compelling, right? I do see this enterprise data cloud approach and this entrance of AI starting to elevate the operational value more. This AI insertion point is creating some clarity on the operational insertion point. The place where the operational insertion point was most pronounced is where you mentioned: verticals from high-tech SaaS to some of the more forward-leaning technology guys that came for the standardization.
If you're coming from a SaaS world, standardization at scale is everything, right? We didn't need to sell them on that. That is the way they operate. We were fit for purpose in their operating environment. They get the idea of reducing variability. Couple that with the messages around going to cloud and cloud repatriation that you've seen in the press, et cetera. As companies get to certain economies of scale, we've also seen repatriation workloads be a driver of some of that standardization in high-tech SaaS.
I might add one more thing to that, Tim. We talked about go-to-market before. There's another element of introducing the full as-a-service model that we need to consider, which is that it's a new motion for people. It's a new motion for buyers, for sellers, and for partners. We've seen really nice progression over the last several years as we've focused on educating our sellers, really all of the stakeholders, on the benefits of this model. Put more simply, several years ago, I would go out on a sales call and our salespeople, very well meaning, might present an offer to a customer and say, hey, this is the traditional product sale, or if you want, you can buy it OpEx, and it looks like this.
When you present the offer in that manner, to Prakash's point, you're missing out on a ton of the value, right? Somewhat surprising to me, anyway, is that although the enterprise is well used to consuming in a service-oriented model in the cloud, it's still a relatively new motion when that equipment sits on their own floor tiles. We've been making steady progress with our sellers and our partners, and I think now, to Prakash's point, some of the technology and market factors are illuminating for buyers that, hey, there are additional benefits here with the full as-a-service model.
We're getting close to the top of the hour, so Nehal, you'll have the last question.
Thank you. Yeah, Nehal from Northland Capital Markets. Charlie, in your discussion of the evolution of Pure Storage's entry into the high-performance market, you talked about there still being things you want to do to get to the head of what I'll call the AI-native data management market players. Can you talk about what some of those features are that you perceive you still need to bring to market?
Yeah. I'll speak about some of them. They're all quite straightforward. You know, we provide one type of high availability, for example, and there's a market for two or three different types of high availability. We're going to be developing more and more of that capability. Coming by the end of this year is multi-tenancy. That's an important part.
In any market, there are sets of features that different customers need for different purposes. My point of view is that we probably have about a year's work to get to the head of the line on those different capabilities, the capabilities we're currently missing.

Is being able to access the metadata of storage pools that are not owned by Pure Storage part of that roadmap?

Not for the specific use case you're talking about, which is the AI data pools, the large-scale AI environments. Right now, what we're calling our enterprise data cloud is very much focused on Pure's products.
I do think over time, customers will want the ability to be able to also pull into that management framework storage that sits in other areas, not just maybe competitive products, but areas like SaaS as well. That's certainly an area we're going to be focused on over the next couple of years.
We do reach storage sitting outside, for container workloads, through our Portworx. In that heterogeneous storage space, across both cloud and on-premises, we see demand in the container space to be higher, and we're leaning into Portworx as part of that.
Great, thank you.
Thank you very much. Before we break, Charlie, did you want to do a sum up?
No, I just want to thank you all again. Of course, some of us are going to have to run right away because we have the larger customer event, but I think some of us will be able to stick around for a little bit of informal Q&A. I want to thank you all for your time, and I want to thank everybody online for your time. There's a lot going on, and I know there's been some confusion; I hope we've clarified a lot of it. Certainly, we'll be referring to this in our future meetings and earnings calls. Thank you again for your time. Take care.