All right. Welcome, everybody. We're delighted to have Rob Lee with us today, the CTO of Pure Storage, to explain a little more about Flash and the power advantages you can get from Flash in a data center. So Rob, thanks so much for being here today. Maybe you could give us a brief introduction on yourself and your history at Pure Storage, for those who may not know.
Sure. Absolutely. And thanks for having me, Meta, today. So hi, everybody. Thanks for tuning in. My name's Rob Lee. I serve as CTO here at Pure Storage. I've been with the firm for 10.5 years, since really the early days. I joined as part of the founding team behind what's become our second major product line, the one we see being put into use most frequently within AI environments.
Got it. Perfect. Given this is a generalist audience who may not be familiar with the storage market, can you give us a sense of the approximate percentage of data center power drain that comes from storage, and how much that's expected to grow over the next few years? It helps give a sense of the scale of the problem being faced today.
Yeah. Absolutely. There are lots of estimates, and it obviously depends on geography and so on. But if we step back and look in totality, the best estimates are that data center power, on a global basis, accounts for about 2%-3% of the global power grid. Now, within that data center power usage, it's estimated that about 20%-25% is being consumed to power storage.
So if you figure that, you know, you're really on the order of almost 1% of the global power grid is, you know, going to power data center storage applications, which is, you know, just a tremendous amount of energy that's being spent.
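To make that multiplication concrete, here is a back-of-the-envelope sketch using only the ranges quoted above (all figures are the interview's estimates, not independent data):

```python
# Back-of-the-envelope: storage's share of the global power grid,
# using the ranges quoted in the interview.
dc_share = (0.02, 0.03)        # data centers: ~2%-3% of the global grid
storage_share = (0.20, 0.25)   # storage: ~20%-25% of data center power

low = dc_share[0] * storage_share[0]    # 0.4% of the global grid
high = dc_share[1] * storage_share[1]   # 0.75% of the global grid
print(f"storage: {low:.2%} to {high:.2%} of the global grid")
```

Multiplying the two ranges puts storage at roughly half a percent to three-quarters of a percent of the grid, consistent with the "on the order of almost 1%" framing.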
Got it. And as we think about this, why would that get worse or scale up over time? Just helping people contextualize how much bigger that problem could get from a storage standpoint.
Yeah. Absolutely. I mean, I think the biggest driver for that getting worse over time really is the rate of data collection, and data growth. And, you know, we've all seen the analyst charts projecting, and well, looking at what's happened over the last 10 years and then projecting forward, the rate of data growth. Then you combine that with newer technologies such as AI that are giving users, customers more value out of the data they're storing. That just drives customers to store more and more data. Now, you know, the challenge is that the legacy or kind of, you know, the typical methods of data storage aren't getting, you know, that much more power efficient. And so you've got, you know, this major driver of data collection, data growth, which is just driving that energy consumption higher and higher.
Now, the other part of this is the compute that goes into these AI environments as well, right? As we all know, the GPUs powering these AI applications are incredibly power hungry. We're obviously in a phase of large buildout of those environments; people are deploying more and more of these GPUs. And so in totality, data center power consumption is growing and is projected to continue to grow. I did see one projection, which I think is probably a bit far out there, coming out of MIT Lincoln Laboratory, that says by 2030 the data center share of the world's power grid is projected to be north of 20%.
Now, I think that's probably a bit far, but it goes to show you how extreme the drivers are in terms of this wave of data collection and data use.
Got it. And just to give the audience a sense, how much can moving off of disk to Flash change that equation, the power drain of storage within the data center?
Yeah, well, so there's really two elements of that. One is making the actual data storage way more power efficient. And just to, you know, kind of ground the audience, we're talking about an order of 10x compared to spinning disk. You know, the second element of this is, you know, by moving to Flash from spinning magnetic hard disk, we can provide way more performance to the data that's being stored, which ultimately means that whatever job is being run, whatever, you know, AI application is, you know, being run, can be done a lot faster. So the GPU time involved, you know, is going to go down, thereby also reducing the power consumed by the GPUs or allowing customers to deploy fewer of them, again, having the same power savings effect on the GPU side.
So you really have the two sides of it: making the storage a lot more efficient, but then also, by way of faster performance, making the GPU power consumption way more efficient. Now, I often get the question of, okay, how is it that Pure's Flash is able to reduce the power consumption of hard disk-based solutions by 5x to 10x? It really comes down to two things, right? One is, if you look at hard disks, it's inherently spinning media, right? This is a typical hard disk. There are motors in here; this thing is extremely heavy; it takes a lot of power to run.
If you look at Pure's Flash solutions, component by component, call it drive by drive, ours take about half the power to run. Now, that's only half the equation. The Flash module I'm holding in my hand here provides about 3x to 4x as much storage as this hard disk. And so when you multiply those together, right, our Flash takes about half the power to run as a hard disk and provides about 3x to 4x as much usable storage, you're now into that 8x to 10x more power-efficient storage solution on the whole.
And again, that's not even accounting for all of the ancillary power savings you get by just being a lot more performant, you know, moving data more quickly to the GPUs, thereby reducing the amount of time you have to run those.
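As a quick check on the drive-level math quoted above: multiplying the two factors (about half the power, 3x to 4x the capacity) gives a 6x to 8x gain per unit of storage; the 8x to 10x figure cited presumably also counts system-level savings such as fewer enclosures and controllers, which is my reading, not a claim from the interview.

```python
# Drive-level power efficiency: watts per unit of storage, flash vs. disk.
# Inputs are the interview's figures; the multiplier is their product.
power_ratio = 0.5          # flash drive draws ~half the power of a hard disk
capacity_ratio = (3, 4)    # flash module holds ~3x-4x as much data

# Efficiency gain = (disk watts per TB) / (flash watts per TB)
gains = [c / power_ratio for c in capacity_ratio]   # 6x and 8x at the drive level
print(f"drive-level efficiency gain: {gains[0]:.0f}x to {gains[1]:.0f}x")
```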
Got it. And I didn't say it up front, but if anybody has questions, just put them into the Q&A portal and we'll get to them at the end. So, Rob, what you just described sounds amazing, these 8x to 10x savings on power. So why have clouds primarily stayed with disk for upwards of 90% of their storage needs?
One word: cost. You know, if you look historically, you know, there's really nobody out there that says, "Hey, you know, cost aside, there's some reason why I prefer to be on disk." Historically, it's really just been about where is Flash, you know, cost effective to deploy, and where is it not yet cost effective to deploy. And so that's why you've seen Flash really over the last 10 years start to proliferate in the data center, whether it's in the enterprise data center or the hyperscaler data centers, beginning with the highest performance applications because that's where, you know, the benefits justified the additional cost. Now that equation is changing. You know, we like to think that we at Pure are driving, you know, that equation to change.
Because of our technology, we're now able to offer not just the enterprise but, we believe, the hyperscalers as well a cost-effective, Flash-based alternative to hard disk that gives them all of the power benefits we just spoke about, plus significant space savings. I neglected to mention that: not only are our solutions 5x to 10x less power-consuming, they're also 5x to 10x less space-consuming. And so when you think about this from the hyperscaler lens, as they're building out data centers, building out footprint to deploy GPUs and additional compute, they're running into physical limitations, right? Physical limitations in terms of how much they can fit into the buildings and how much power they can get into the buildings.
Now, all of a sudden, the significant power savings that they're able to get on what they've, you know, have historically been spending on the disk storage start becoming very meaningful. So, that's a large part of the driver, you know, behind our current conversations that we're having with many of these hyperscaler firms, really to modernize and refresh what has historically been, to your point, 80%-90% of their storage deployment, which is sitting on nearline disk today.
Got it. As you alluded to, for probably the last five to 10 years it's very much been a cost equation, and that equation just didn't make sense for moving to Flash except for some high-performance needs. Are you seeing that conversation with the clouds shift to be around power savings, or is it still primarily a cost conversation, with the power and performance equations folded in?
The conversation is definitely shifting to incorporate power savings, for sure. But I would say it's still primarily focused around the TCO, the total cost of ownership. Power obviously is a big component of that, especially in places where you're just up against logistical limitations: there's a certain amount of power you can get from the utility. But at the end of the day, with the hyperscalers, it's very much driven and focused around the total cost of ownership. And when we look at it, there's a significant savings on the infrastructure surrounding the storage media, whether it's hard disks or Pure's Flash. The fact that we can reduce that considerably reduces their costs.
It's the reduction of power and the cost associated with power, which, as you'd imagine at their scale, is quite significant. It's the reduction in costs associated with the significantly lower reliability of disk compared to Pure's Flash solutions, which are about 20x to 30x more reliable than disk. And so when you think about that from the operational cost of a hyperscaler, needing to have somebody go out and wheel around a cart of hard disks every day and change components, that's a significant cost savings, as well as overall longevity. And so the conversation very much is around TCO.
I would say the big change is this: total cost of ownership breaks into the cost to acquire and the cost to run and operate, right? The cost to acquire is typically dominated by hardware costs, things that nominally sit on a deflationary curve: bit by bit, hardware, whether it's Flash or disk, generally deflates in cost over the long run. The cost to run and operate, whether it's power, space, or labor, generally sits on an inflationary curve.
And so I think one of the things that's kind of tipped over, if you will, is that we've reached a breaking point where the significant operational cost savings that our solutions can deliver to customers, especially the hyperscalers and the way they operate, are shifting that TCO equation significantly.
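The deflationary-versus-inflationary point can be sketched with a toy TCO model. Every number below is hypothetical, chosen only to show the shape of the crossover, not to reflect real Pure or disk pricing:

```python
# Illustrative only: why deflating hardware costs plus inflating operating
# costs shift the TCO balance toward the more power/space-efficient option.
def tco(acquire, opex_year0, opex_inflation, years):
    """Total cost of ownership: one-time acquisition plus inflating opex."""
    return acquire + sum(opex_year0 * (1 + opex_inflation) ** y for y in range(years))

# Hypothetical: disk is cheaper to buy, flash is far cheaper to run.
disk_tco  = tco(acquire=100, opex_year0=40, opex_inflation=0.05, years=10)
flash_tco = tco(acquire=160, opex_year0=8,  opex_inflation=0.05, years=10)
print(f"disk: {disk_tco:.0f}  flash: {flash_tco:.0f}")
```

With these placeholder numbers, the option with the higher sticker price ends up far cheaper over ten years, which is the shape of the argument being made.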
Okay. Perfect. And so there are competitors of yours that have Flash solutions. Can you give some background on Pure Storage and what gap you saw in the market that informed the formation of the company?
Yeah, absolutely. So, if we go back to the beginning of Pure, we started in 2009 and started shipping product in 2012. We weren't the only Flash storage provider attempting to break into the enterprise; we weren't even the first. What I would say is we were unique in the industry, and still are, in that we recognized that Flash, as a media, had tremendous promise: performance, high reliability, a number of attributes which we deliver through our solutions. But we realized that in order to exploit and deliver the best of those properties, you really needed to build a system entirely designed for Flash. You had to build software designed to treat Flash as it was meant to be treated, because it behaves very differently than disk. You can't just pretend that it's disk.
And so we embarked on a very unique and, I would say, long engineering road to build our entire software stack, and now our hardware stack, designed to work with Flash at the most native level. We did this because we recognized that if Flash was going to succeed long term, if Pure was going to succeed long term, it was going to be largely driven by the consumer market, right? If you look at what's driving the semiconductor market for Flash chips, it largely has been, and continues to be, driven by consumer demand. Well, it stands to reason that you want to be on that roadmap and on those economics. You want to make the consumer stuff work at enterprise levels.
In order to do that, you've got to do the hard work of building software and hardware to make consumer Flash work. Every other competitor in the market, even today, relies on a packaged SSD, or solid-state drive, the same type you might have in your laptop or desktop. And if you look at what an SSD does, essentially, an SSD is the industry's coping mechanism for the fact that Flash behaves very differently. The industry didn't want to retool all the software it had to work with Flash in the most native way; there was a tremendous amount of software that knows how to talk to a spinning disk. An SSD is essentially a translation layer that makes Flash behave, look, and feel like a spinning disk.
Well, as you can imagine, when you take a semiconductor that is potentially capable of great things in performance, efficiency, and reliability and make it behave like a hard disk, you lose a lot of those benefits. And so that's really the distinction between Pure's approach and the rest of the industry. We've done the hard work to work with Flash at the most native level, which allows us to get far superior efficiencies, reliability, performance, and densities out of the Flash modules we ship, versus the competitive set, and really the rest of the industry, which doesn't have that IP and is reliant on very inefficient SSDs in order to deliver their Flash. And when we look at the hyperscalers, they are largely in that same position.
Got it. We've talked about the TCO and power advantages of moving from disk to Flash. But as we think about your solutions versus competitor Flash solutions, is there a way to contextualize what additional TCO or additional savings you get out of Pure's solutions?
Absolutely. We've done a number of comparative studies, but really, third parties have been looking at our Flash solutions versus the competitive set's all-Flash solutions: a customer is considering Pure, they're considering an alternative from a competitor, and what are the power envelopes and power footprints of solution A versus solution B? What we typically find is that our solutions are about 3x to 5x more power efficient than competitors' all-Flash solutions, again, because theirs are based on SSDs. The SSDs really limit them in a number of dimensions that all lead to worse power efficiency.
Got it. And we always get the question of how permanent those advantages are. As we move into different drive capacities, does it get harder? Does your lead expand? How do you think about how your advantages persist as we move down the cost curve?
Well, our advantages are going to widen, at least for the next several years, for sure. As you mentioned, a large part of what's driving our advantages is our drive densities. And for the audience who may be less familiar: the more Flash we can put on these modules, the better, because I put a certain number of modules into an array, and that array generally consumes the same amount of power. If I can double the Flash I put into a system, I've pretty much halved the power footprint per amount of storage I'm delivering.
And so drive densities, the more Flash I can deliver at the requisite performance levels, improve our power footprint considerably. And when you look at our drive density roadmap, we're really at the beginning of a rapid pace of improvement. As we've spoken about, we shipped 48 TB modules last year. We shipped the 75 TB modules in November last year. We're on track to ship 150 TB. So again, doubling year-over-year, well, actually 1.5x, or no, two and a... almost 3x. Jeez. A little early.
It's early in California. Yes. Yeah.
It is. It is. And then we're looking at 300 TB beyond that. So we've got a couple of doublings ahead of us, each of which will drive commensurate power savings. Beyond that, there are a number of other techniques we have access to to further improve the power footprint, again, because we're able to work with Flash at the most native levels. We can be very intelligent in a large system about what parts of the system we're powering and when we can power them down, far more than competitors who, again, don't have that visibility because they're relying on SSDs. So I would expect that over the next several years, our gap in terms of power advantages will increase.
And then, you know, we'll have to see where we go, beyond that.
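The density roadmap discussed above maps to power per terabyte roughly as follows. The module sizes are the ones quoted in the interview (48, 75, 150, and a projected 300 TB); the per-module wattage is a hypothetical placeholder, not a Pure specification:

```python
# If a module draws roughly constant power, watts per TB fall in proportion
# to density. Module sizes are those quoted in the interview; the 20 W
# per-module draw is a hypothetical placeholder.
module_watts = 20.0                  # hypothetical constant draw per module
densities_tb = [48, 75, 150, 300]    # shipped 48 and 75; 150 on track; 300 projected

watts_per_tb = [module_watts / d for d in densities_tb]
for d, w in zip(densities_tb, watts_per_tb):
    print(f"{d:>3} TB module -> {w:.3f} W/TB")
# Each doubling of density roughly halves the power footprint per TB stored.
```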
Great. Given the theme of the day, we've spent a lot of time talking about power and the advantages that Flash and Pure Storage can provide. But can you give a sense of the other reasons it makes sense to move to Flash for AI workloads, the performance piece of the equation?
Well, if we look at AI as a whole, I think it's going to change, and we are starting to see this with our enterprise clients, how customers look at their entire data footprint. Historically, customers have definitely looked at their more operational, more online data with the lens of: I've got to have a certain amount of performance here. It's my transactional database; this is where customer transactions are happening, or customer support interactions are happening. Okay, there's a sense that that has to be a performant environment.
But if you look at the typical enterprise, not dissimilar to the hyperscaler, there is another, whole pool of data that is being kept, you know, cold or barely lukewarm, where people haven't historically been thinking about performance considerations. Now, all of a sudden, in the world of AI, where there's value in those data sets, either to drive training or, to provide context for inference applications, well, now all of a sudden, you know, if that data that may in some cases go back decades and decades, if that's locked away on slow, inefficient spinning disk, well, A, you know, we've talked about the power and space considerations. It's getting more expensive to store it. But B, if you can't provide the performance, to connect that data to, you know, these AI models, well, that's going to be a big limiter as well.
So I think, you know, we're starting to see customers, you know, incorporate their forward plans and thinking around AI into how they're keeping these environments up to date, how they're modernizing them. I think it's going to be a big tailwind for shifting a lot of this cold disk footprint to warmer tiers powered by Flash.
Got it. And then just how do we think of the changes in kind of storage needs as we move through training, refining, inference use cases?
That's a great question. I'll preface this by saying we're still, I think, in the early days of the cycle. I don't know if you were at GTC last week, but look at the tremendous rate of change and technique improvements; a lot of these environments and techniques are still evolving. Now, that said, I think there are some foundational principles. When we look at training, which perhaps has had the most focus, there are a couple of data storage needs there. One is certainly to provide data very quickly into the GPU servers, so that data can be consumed by the GPUs to build and train those models.
But what people often miss is that there's a tremendous amount of data preparation, a data workflow that happens before that data is made available to GPUs. You get raw data in; it's got to be indexed, looked at, oftentimes transformed, and so on and so forth. Well, that all drives storage consumption and storage needs. As we look at refining and inference, I think that's where the world gets a lot more dynamic, right? And a large part of that is that we're going to start to see those environments connecting models to both operational real-time data and historical contextual data. All right. So think about it.
If you're deploying a model, to do fraud detection on credit card transactions, as an example, right, well, you want to make that decision in line with somebody swiping their credit card. So you've got a real-time constraint. You don't want somebody standing at Target for five minutes while you're, you know, trying to make that decision. But at the same time, you know, in order to make that decision well, you probably want to provide the model that's driving that decision with the context, the historical context, you know, hey, what was this individual consumer's purchasing history? What were the purchasing histories of like cohorts, so on and so forth? And so I think you're going to start to see, you know, a wide variety of data demands that are all, you know, pulling in different directions that are all going to converge in these inferencing environments.
I think that's something that, certainly, we see as a significant opportunity for us at Pure, because when we look at the Pure storage platform, you know, we hit on all of those elements, right? Certainly the low latency element, certainly what we're doing to modernize, you know, these, you know, historically cold pools of data, bringing them into warmer tiers of Flash, and then providing the bandwidth and really the fast data movement, to connect it all together.
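The inference pattern described above, an in-line decision enriched with historical context pulled from storage, can be sketched minimally. All names, data, and the scoring rule below are hypothetical illustrations, not anything from Pure or a real fraud system:

```python
# Minimal sketch: a real-time decision (credit-card swipe) enriched with
# historical context fetched from a data store. Hypothetical throughout.
from dataclasses import dataclass

@dataclass
class Transaction:
    customer_id: str
    amount: float

# Stand-in for a historical-context store; in practice this lookup is the
# storage-latency-sensitive step the discussion focuses on.
PURCHASE_HISTORY = {"cust-1": [12.50, 30.00, 18.75]}

def score_fraud(txn: Transaction) -> bool:
    """Flag a transaction as suspicious if it dwarfs the customer's history."""
    history = PURCHASE_HISTORY.get(txn.customer_id, [])
    if not history:
        return False                      # no context: pass it through
    typical = sum(history) / len(history)
    return txn.amount > 10 * typical      # hypothetical threshold

print(score_fraud(Transaction("cust-1", 500.0)))   # True: far above typical spend
print(score_fraud(Transaction("cust-1", 25.0)))    # False: in line with history
```

The point of the sketch is that the model's quality depends on how quickly `PURCHASE_HISTORY` (the historical context) can be served, which is the storage-performance argument being made.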
Perfect. And obviously you're pretty innovative on the R&D front. How do you think about the steps you can take to refine your software, or further extend your lead, as we move into these AI use cases, expanding beyond the SSD-versus-native-Flash advantages you talked about into a software advantage as well?
Yeah, it's a great question. You know, I think this is part and parcel with a lot of the, you know, the integration work and partnership we have with NVIDIA, where, you know, we're working with them on both the training and the inference side to, you know, really optimize those environments. And whether that's adopting high-speed networking protocols, RDMA, you know, GPUDirect Storage, essentially making the data transfer even faster than it is today, whether it's really digging into the application sets that sit behind these inferencing environments and really optimizing all the way down and through the application, the data storage and what we're providing, and then, you know, ultimately, the GPUs.
You know, I think that's the biggest thing that we can do is really work with industry-leading partners such as NVIDIA, really understand where the space is going from a toolchain and toolset and workflow point of view, and then tailor our products' performance and really the best attributes that customers are looking for from the data storage to those tools.
Got it. And maybe circling back, you did a great job of laying out those three phases. Do they have different power intensities, or does power just track the compute intensity of the different phases of AI?
It's a great question. I'll go back to: a lot of this is still evolving, early in the cycle. What I would say today is that the dominant consumer of power as it relates to AI really is the GPUs. Storage is significant, but it is dwarfed by GPUs. I think that will probably change over time. Right now, again being early in the cycle, the focus really has been on time to market, training larger and larger models, and results, more so than on optimization. Well, that will correct itself in time. And the dominant consumption of power, being in the GPUs today, really sits much more on the training side.
Now, I think as inference applications get deployed, that balance will shift, mostly because, per unit of work or per job, if you will, training is way more power-intense, but you're going to do inferencing a lot more. You're going to train a model once, and you're going to apply it millions of times. And so that balance will shift. And I think that, as inference really takes off and people are deploying it in larger volumes, it really is going to put more of a focus and a spotlight on optimizing those environments, for a couple of reasons. One is logistical power limitations, which are significant. But then number two is also cost, right?
You're starting to see a lot of companies in the AI space really digging in and trying to understand: what's my marginal cost model? What's my marginal business model? It's a little bit... well, I'll leave it there.
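The train-once, infer-many shift described above can be put in rough numbers. All values below are hypothetical placeholders for the shape of the argument, not measurements:

```python
# Back-of-the-envelope for the train-once / infer-many energy balance:
# per job, training is far more power-intense, but inference volume
# eventually dominates total energy. All numbers are hypothetical.
train_energy = 1_000_000.0   # hypothetical energy units for one training run
infer_energy = 0.5           # hypothetical energy units per inference call

inference_calls = 10_000_000
total_inference = infer_energy * inference_calls   # 5,000,000 units
print(total_inference > train_energy)              # True: inference dominates at volume

# Break-even call count where cumulative inference energy matches training:
break_even = train_energy / infer_energy
print(f"break-even at {break_even:,.0f} inference calls")
```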
Got it. So the clouds have traditionally built a lot of their own solutions for storage, as well as a lot of other pieces of data center componentry. Why would that decision process change as we move towards AI, or why do you think that equation could change?
Well, I think that equation could change regardless of AI. And a large part of it is what we spoke about before: the vast majority of data storage deployed, and still being deployed, by hyperscalers today, the large clouds, is sitting on nearline disk. They absolutely recognize that it's not the most efficient media to deploy, and they know they need to get to Flash; that's where they want to get to. The issue is that they are also sophisticated enough to realize that SSDs are not the answer. For all the reasons we spoke about, they're not going to get the efficiencies they need, they're not going to get the reliability, a whole number of factors.
And so they're looking to make that jump, but without the direct-to-Flash software and hardware IP that we have, that's a big jump for them, right? And so I think they're really looking at: how do I get over that hump? At the same time, and you can look at public statements by pretty much all the hyperscalers, they're tightening their belts in terms of additional hiring and headcount, and shifting more of their resources over to AI. It's clear that storage is an important factor, but at the same time, it's not really something they're likely to dedicate a bunch of engineers to go build in-house.
We look at that as a significant opportunity for Pure: to go and work with these hyperscalers, to provide our technology to help them make that jump from disk to Flash, get the best properties out of it, and do it in a very cost-effective manner.
Got it. I know you highlighted in the last quarter a big win with a cloud AI service provider. As we translate the last answer to that win, were any of the reasons or rationale different behind why you were able to win that customer, or what their objectives were?
Yeah. I would say there are similarities, but I would put that in a different bucket, right? When we talk about the hyperscaler opportunity en masse, that would be inclusive of AI environments, but it's actually largely focused on the cooler, almost data-archive environments, the general pool of storage that's 80%-90% served by hard disk today. Most of the AI environments, whether in the cloud or on-prem in the enterprise, are going to be on Flash today. That's just, again, work that sits in that high-performance bucket. With the GPU cloud provider we spoke about last quarter, I would say there were a couple of things.
One was that, unlike our broader opportunity set with the hyperscalers, which is really more about incorporating our IP around Flash management into the hyperscalers' designs, the win we spoke about last quarter was really a sale of existing products, FlashBlade in particular, to serve their AI customers' needs. And if I look at what drove that win, I would say it was a couple of things. One was the flexibility our offerings gave them to deliver high levels of performance across a number of phases of the data workflows associated with AI training. We spoke about this a little before: everybody focuses on AI training as how fast you can move data into GPUs and provide it for the models to crunch on.
What they often miss is all of the preparatory work that happens to transform the data, to pre-index it, to label it. Well, this cloud provider saw demand from their customers to capture all phases of that workflow, and they found that FlashBlade was an ideal platform to provide high performance to all of those phases. Secondly, I'll remind you, and the viewers perhaps, that this was an Evergreen One deal, right?
The customer chose not just the FlashBlade platform but really the subscription SLAs backed by the FlashBlade product, to give them flexibility: flexibility in terms of how fast their customers ramp, being able to deploy without capital expenses getting way ahead of revenues, but also the flexibility to shift between SLAs as their customers' workload mix might shift in the future. And so when we look at the large GPU cloud win, those are really the driving factors: high performance across a wide set of the data flow steps associated with AI training, wrapped in the flexibility and optionality that Evergreen One gives them, both from a consumption perspective and in terms of workload mix changes.
Got it. Okay. That's super helpful. Maybe just a few final questions on how to contextualize this. We've spent a lot of time talking about the cloud opportunity, and clearly, given some of the power constraints, that's a big concern. But how should we think about the enterprise opportunity as different from the cloud opportunity, and where are you seeing customers most interested today?
Yeah, absolutely. So, similarities, but also big differences. We spoke about how the hyperscalers are laser-focused on TCO, and enterprises are focused on cost as well. But I would say enterprises are focused on total risk reduction, with cost being one element of risk, reliability and operational considerations being another important element, and then also optionality and future-proofing. As enterprises modernize their environments, especially as they prepare for a rapid pace of new technology, they want to move to a data storage platform that gives them a lot of optionality, so that if the way AI inferencing environments work changes six months from now, they haven't locked themselves into a rigid infrastructure that's hard to get out of.
That type of optionality and risk reduction is absolutely top of mind in the enterprise. Power considerations vary geo to geo and situation to situation. Certainly in Europe and parts of Asia, power is a top consideration for a variety of reasons, whether regulatory or simply the ability to get power at all. In the U.S., there's definitely focus on it from a cost perspective, but there are places where it becomes an absolute inhibitor. We spoke with a client in Midtown Manhattan that was out of power. Just imagine what's involved in trying to add 50% or 100% more power to a data center in Midtown Manhattan.
You can't just snap your fingers and make that happen, so all of a sudden power becomes a huge issue. It's a little more binary, we find, in the U.S. But overall, I would say the considerations are much more about overall risk reduction, with cost being one of them. And I think that makes sense, because the hyperscalers are looking for the best technology. They're highly technically sophisticated, they're designing their own environments, and it's really about finding the best technology pieces for them to go consume in those environments.
Our enterprise clients, on the other hand, are looking to us to provide a much more complete platform and a much more complete solution, because designing data centers and data center technology isn't generally their core competency or their core business. They're really looking to partners such as Pure to go and solve those needs for them.
Got it. Okay, well, that was perfect. Rob, I appreciate you coming on today to give everybody an overview, breaking it down in simple terms so we could all understand, and bringing physical demos, which makes it even better. So, if anybody has any follow-up, please reach out to me or the Pure Storage IR team, and we'd be happy to answer any of your questions. Thanks so much.
Thanks for having me.