Research disclosures, otherwise talk to your sales representative. I'm now going to read Pure's Safe Harbor statement: statements made in these discussions that are not statements of historical fact are forward-looking statements based upon current expectations. Actual results could differ materially from those projected due to a number of factors, including those referenced in Pure Storage's most recent SEC filings on Forms 10-Q and 8-K. With that, the very important stuff is out of the way.
Yes.
All right. So delighted to have Pure Storage with us here today, Charlie Giancarlo, CEO, Kevan Krysler, CFO, and Rob Lee, CTO. Perfect. This is a very exciting time for you guys. It's been a very interesting year. And at this time, a year ago, I think we were sitting here kind of talking about macro and cloud customer delays. However, you were also at the cusp of introducing the E series. Your subscription products were doing quite well, and you were just starting to kind of talk about AI. So as you look today at how you were looking at the opportunity a year ago versus today, how has that opportunity set looked or changed, or have just everybody woken up to it?
Yeah. Well, I do think there was a wake-up call earlier this year. First of all, we've overperformed in our subscription business all year long. We started signaling that right in Q1, that there was a great overperformance in Q1. And then it was repeated in Q2, it was repeated again in Q3, and then finally in Q4. But by Q3, it became clear that it was significantly impacting the print of our P&L. Overall, a very good thing, but certainly ahead of our expectations. And I think that really caused everyone to sit up and take notice, saying, well, if subscription now is affecting their P&L, perhaps they really are on the march to drive a greater subscription business.
To be clear, by the time the year ended, we were 42% subscription in the company overall. We grew the portion of our subscription that is pure consumption-based purchases by our customers by over 100% year-over-year, whereas we had only projected 50% at the beginning of the year. And it's now a large enough fraction that it really does affect the print on a quarterly basis. So we're very pleased with it, and we're going to continue to focus on driving subscription this year. To be clear, we don't expect it to be 100% of our business. There are still customers that will prefer to buy in the typical way, that is, an actual purchase of the product. But we do expect subscription to continue to become an increasing part of our business.
To speak to the other things we said were going to happen, such as our E Family launching and doing well: it did. It launched, and we've now had about three quarters of sales of the E product. It's the fastest-growing product ever introduced by Pure, so that's been strong. As perhaps our longer-term shareholders know, we've been in the AI business for about five or six years, increasing over that time. We have several hundred AI customers, although, and I have to smile when I say this, the majority of them are in what is now called traditional AI, because of course all the focus today is on Gen AI and ChatGPT, et cetera. But we're getting our fair share of that as well. Certainly, even years ago, we won the Facebook environment for the RSC.
Now, we were able to announce this past quarter, as we did, by the way, two quarters earlier, an eight-figure win in a GPU cloud. All of the things that we talked about are coming true, both because the data storage environment is going in the direction that we expected, and because of our ability to deliver what we said we were going to do in the year. The last thing I'll mention is we had indicated that we thought we would start to be able to displace hard disks, not just in enterprises, but eventually in the hyperscalers. We now think that we will be able to win a design win, to be very clear, a design win, sometime this year in one of the first hyperscale environments. So that work continues as well.
So we're all just catching up to it. That's the end of the story. OK. All right. So you talked on the last earnings call about three different use cases for flash as you train, refine, and use models. Can you refresh the audience on that? How much of the opportunity comes at those various stages, and what are the performance differences there?
So yeah, if I could split it up into three, there's the training environment. Of course, a lot of the market right now is focused on training. That's where NVIDIA is spending a lot of their time and where a lot of their growth is coming from: AI training, SuperPODs, BasePODs. That's where a lot of the focus is now, and I think appropriately so. If you're generally working on text, it's not a lot of storage, to be very clear, so storage is not particularly affected by that. If you're getting into images and video, there's quite a bit more storage involved. So if you have a pyramid, think of that as the tip of the pyramid. And it requires, to be very clear, very high-performance storage systems, which we supply.
They're very expensive on a per-gigabyte basis, however you might want to measure it. But it is the tip of the pyramid, so in terms of overall volume, one of the smaller parts. Right below that, we think, is the so-called inference market. Very few enterprises will actually stand up their own training environment for LLMs. They'll use the GPU clouds, they'll buy LLMs, and then they'll do what's called inference inside their own environment. That is, update the models a bit, but also run their data across those models to improve them for their own environment. That's, let's say, the next layer in that pyramid. But below all of that, think about when general enterprises want to start using Gen AI LLMs against their own data. Well, where is their data? Their data exists in data storage arrays today.
Those data storage arrays were purchased to fit the application stack they're connected to. It could be a database, a backup environment, an analytics environment, or just a large data lake. Because buyers are price sensitive, those environments were purchased to have the performance level necessary for their primary workload and no more. Well, if you want to make that data available for AI, you have two choices. You can buy a brand-new array and copy the data into it, which opens up opportunity there. Or, and this is our thinking, if you're going to buy a brand-new array anyway, why not replace that array with a Pure Storage array at roughly the same price, but with a lot more performance?
Either way, what I'm talking about is going to raise the overall water level, the overall opportunity. There's going to be an opportunity to replace existing storage with higher-performance systems. And I see that as the broad base, if you will, of that pyramid, where the bulk of the opportunity is going to sit.
OK. And just how are you guys seeing that timing play out across those different buckets? I mean, I know it's TBD, but.
It starts from the top and goes down.
OK. The eight-figure Evergreen//One deal that you announced on earnings with a GPU cloud provider for AI use cases with your FlashBlade products, that is encouraging for your cloud and AI market opportunity. However, I think investors, and myself included, were surprised it was an evergreen deal. So kind of what was the rationale there? And then how does it inform how we should be thinking about kind of how some of these other bigger opportunities would evolve?
Well, Rob was right in the middle of that deal. So I might ask Rob.
Yeah. I mean, I would say it was a little bit of a surprise to us as well as we were engaging with this customer. There's a couple of things to call out. At the end of the day, it's a great validation point that this customer saw the benefits of flexibility that Evergreen//One was going to be able to offer them, really along two dimensions. Number one, if you're a service provider, a cloud provider type of business, as you're building out infrastructure and deploying capital, it's nice to have flexibility in terms of how quickly you're going to be able to land customers and how much time you have to build out the infrastructure to support them.
Evergreen//One, because of the consumption flexibility we offer, fits that model very, very nicely and consistent with demand we see in Evergreen//One across other service provider type of businesses. But I think the other part of it, which I touched on a little bit in discussion last week, is that this environment that's being stood up by the GPU cloud provider is intended to be used for a number of different parts of the AI workflow. So certainly, part of it is that high-performance training that Charlie spoke about, as well as some of the data preparation tasks that go into that workflow, both pre and post-processing of the data, as well as some of the inference workloads. And what Evergreen//One offers them is flexibility as potentially those user workloads shift and mix and move around a bit.
Evergreen//One gives them the flexibility to deploy the right SLAs needed to match those workload mixes. So if we step back from it, I think it's a great point of validation, not just for our product fit in this AI space, but really for the flexibility that Evergreen//One is offering customers in terms of both the technology and the consumption model.
Do you think that that will kind of become more popular? Or is that kind of an isolated instance where it just makes sense for this customer?
Well, I think the recognition of the flexibility, absolutely. We saw the strength, the overperformance, of Evergreen//One throughout the last year, and we've discussed our forward-looking momentum on that. I think the recognition of the flexibility overall and, frankly, the de-risking of customer environments that Evergreen//One offers is a big driver of that.
OK. I mean, the counterpart to that is you're de-risking it for the customer. But have you de-risked it for yourself? What protections do you have in there just to make sure that?
Well, I think there is a sort of dichotomy between the risk protections that either the customer or we can take. What I mean by that is, when customers buy storage, they generally want to buy for a period of time and not have to worry about it, so they're buying with a view of three to five years ahead. If they're buying it as a service, then for us, it's our responsibility to deploy the amount of capability that they need for the SLA. And we don't have to do that on day one; we can do that over time. So we're able to reduce, in that sense, the risk not just for them, but for ourselves as well.
The other thing I'd add is that we build it and run it like a true SaaS service. Just like a SaaS service that you might use as a consumer, the service provider understands your behaviors and is able to predict them. When you turn on Netflix, you don't just get access to programming; you get recommendations, because it starts to understand your preferences. Well, just like that model, when we work with our Evergreen//One customers, we have a deep knowledge of their environments, an understanding of what they're being used for and how they're likely to grow. And as a result, we can deliver not just a higher level of service, but we have opportunities to optimize it as well.
OK. So the European crime drama recommendation I'm getting is OK. All right. So a big question that everybody has here is just, how do you think about that AI opportunity for you guys in terms of the cloud opportunity versus enterprise? And just what you're seeing, particularly there's multiple different stages of the opportunity.
Well, I would say the cloud opportunity is, again, at the high end of it. Remember, we still have hundreds of AI customers, a lot of them standard enterprises. It's financial services, it's pharma, it's protein folding, a whole bunch of different things that are not LLM-based. That business is a strong business for us, and we continue down that path. When we start talking about LLMs and Gen AI, I think it actually is starting mostly in the clouds and in some very specialized enterprises, the ones that are much more technologically capable.
I think the vast majority of enterprises are going to go to the clouds. If I were an enterprise, and not necessarily one of the largest, I'd probably go to one of the GPU clouds for a variety of reasons, including CapEx, and including the fact that I'm experimenting more than I'm running a 24/7 production AI environment. So there's a lot of reasons to go to the clouds. I think it starts there, and then it'll roll down into inference. I do know from having spoken with customers, leading-edge customers, I would say, as well as the leading-edge system integrators that work with them, that there's already a recognition of this issue of data fragmentation in their own environments that they're going to have to deal with. So they're already starting to look at that, and I think that's going to build over the next year or two.
OK. You alluded to the fact that you have increasing confidence that there could be other cloud customers, or at least a major cloud customer contribution. What is finally making them make that move to flash? They've long been laggards, using 90% disk, I think. Is it performance? Is it power? Is it space? What is finally getting to be that trigger point?
I want to just clarify for the listeners that we're now referring to true hyperscalers, household names that we all know. 90% of what they have, roughly speaking, is hard disk-based. They're able to make that performant because they have so much of it; they spread the data out so that they get very high parallel performance. But even there, it's about 20% of their power and probably more than 20% of the space in their data centers, and power and space are becoming more important if you want to use them for GPUs. But also, in our view, we're now getting to the point where the system-level pricing that we can provide because of our DirectFlash technology is competing with hard disk. So we're now at the point, from an overall pricing standpoint, where they need to start thinking about replacing that hard disk environment with flash.
Because of our DirectFlash, we're at the forefront of that price-performance curve. It's not as simple as, OK, take out a disk, put in one of our DirectFlash modules. It really affects the entire design of their data center environment. We're now in the process with a number of them of doing what you might call co-engineering. I think it will lead to a design win sometime this year. If it does, we're probably looking at future years before it turns into substantial revenue of any type. But we're on that roadmap now.
OK. I mean.
Yeah. And just to answer the very first part of the question, it's absolutely yes to all those things. It's price efficacy and competitiveness. It's performance. It's space and power, as well as reliability. At hyperscale, you literally have people whose job it is to show up every day and replace disks. And so by improving the reliability on the order of 10x, 20x and beyond, by improving the lifetime of the media that's deployed, that's another vector where we can dramatically improve their TCO.
For those in the room who haven't seen it, Morgan Stanley published a note with the technology team and the power teams looking at the power drain that AI data centers would have, and it highlighted Pure as one of the names that could help as transformational technology. Just staying on the clouds for a second, given the scale of the purchases that clouds make of really any equipment they buy, should we be thinking that this deal, or any potential deal, looks different than what a traditional deal has looked like?
Well, yeah. First of all, it's not going to be our standard product for a variety of reasons. The standard product is designed specifically for enterprises that have a certain set of requirements. The way that hyperscalers structure their data storage is entirely different. We need to fit within that structured environment. So first of all, yes, it's going to be a completely different construct than what we actually deliver today. Secondly, we need to fit within their overall performance envelope as well. Now, flash has much higher performance. It's going to allow us actually to change that overall data center design, which is going to be, at the end of the day, a benefit to both them as well as to their customers overall.
OK. All right. Perfect. Another trend during the year we talked about upfront was the success of Evergreen at the expense of CapEx deals. You've mentioned, hey, the mix is never going to get to 100%. But how do we think about the influence of macro versus the flexibility that customers are looking for?
Yeah. Why don't I start, Kevan, and then OK.
Yeah. I'll take it away, Charlie.
Yeah. Well, first of all, it is a very different sale. Obviously, if we have $100 of CapEx sale, about 70% of that is the actual system that we sell. About 30% of that is what we call our Evergreen//Forever, which is also a subscription, which ensures that the product is upgradable non-disruptively to the application forever. That's why we call it Evergreen//Forever. What it means is that 10 years later, without the customer spending an extra dollar of capital, that product will look like a product we sold in the last year, and without any disruption to the customer. So after the depreciation cycle, it is a pure subscription going forward. They're able to continue operating with increasing performance levels because we're upgrading it, again, without additional capital being spent.
So you could think of it this way: Evergreen//One starts off on day one as consumption-based subscription, while Evergreen//Forever starts off with the CapEx but then becomes subscription over time. Kevan, maybe you'll talk about the substitution effects.
Yeah. Let me hit your question first. The first thing is the macro piece, and has that played into the momentum we saw with the Evergreen//One offering? I think yes, and I'd answer that in a couple of different ways. You've got sensitivity on your capital budgets and sensitivity on your operating budgets, and our customers, with the agility and flexibility of the Evergreen//One model, can adjust to that sensitivity. The other thing on the macro, in terms of impact, is the fact that customers have been constrained on labor and resources. They can leverage a cloud operating model through Evergreen//One and still be able to respond to the fact that they're short of labor resources. So those are some of the reasons why the macro, I think, has helped boost our Evergreen//One momentum.
Then you've got the flexibility concept with Evergreen//One, and it really is compelling in terms of how agile and flexible that model is. Think about the benefits and business value of a cloud operating model in and of itself; now you're taking that infrastructure to wherever the customer wants it to be while still enabling that cloud operating model, and that's another compelling factor in terms of what drives it. But I think the other important thing is that when selling IT infrastructure, our direct sellers, the channel ecosystem, and even our customers have to become aware and comfortable that they can purchase that technology as a service, not just as a solution. And I think that's new muscle. We've been working on that for years with our sellers, and I think we've gotten some good momentum.
And there's still a lot of work to do with our channel ecosystem, getting them comfortable with selling this as a service, as well as with our customers, getting them more and more confident that they can consume this technology as a service versus buying the infrastructure.
And so a big question we get a lot from investors is, OK, that's great, but how is it not super cash-draining for you? Can you just walk through the mechanics of how that's not a cash drain?
Absolutely. A couple of things here. First of all, we do provide flexibility in terms of payment terms on Evergreen//One, so annual payment terms, quarterly payment terms. So you have that effect. And you saw a little bit of that effect on the headwind on our free cash flows this year. Obviously, that becomes a tailwind as more and more time passes. So you have that piece of it. But when you think about the infrastructure that we support Evergreen//One with, that's our infrastructure, incredibly optimized. But in addition to that, we've been running the Evergreen model for over a decade. And so as we bring that equipment back, we're refurbishing that equipment and then redeploying that equipment to service our Evergreen//One offerings, in part with new equipment as well. So if you think about the optimization and value associated with that, it's pretty significant.
Obviously, the infrastructure we're deploying out to serve those Evergreen//One customers, we depreciate that over time. So in terms of the P&L hit, that's going to be happening over time. And then in terms of commissions, because we're still building muscle with our sellers, we're still paying commissions on TCV, whether it's an Evergreen//One or whether it's a purchase via CapEx. But the P&L treatment, the expense treatment on an Evergreen//One sale for our commissions is recognized over time in our P&L, whereas if it's a sale, you've got a large piece of those commissions being recognized when we ship the product.
OK. Perfect. So let's turn back. You've been vocal that the cost curves just work in your direction. And the E series represents a transformational opportunity to accelerate the demise of disk. HDD vendors, many of whom are here, like to say that this isn't true.
This is a gun-free environment. So I just want to be clear about that before I get the question.
Yeah, yeah, yeah. There's security downstairs. What has been the receptivity of customers to the E series? And just how are customers thinking about the pacing of getting rid of disk?
Yeah. Well, it far exceeded our early expectations. We introduced it in late April, so we basically have three quarters of sales. When we introduced it, we were asked about it, as you might recall, on the conference call. Of course, we had built in a certain assumption for sales, and we blew right through that. So it's exceeded our early expectations, and it is the fastest-growing new product that we've introduced to date. Now, any new product takes customers time in the enterprise space to get a sense of it. The view that lower-priced storage is disk is so widespread that it takes time to change that perception.
I did something on that first-quarter call, as you may recall, that I had never done before in my entire life, which is announce a price on an earnings call, because you've just created your ceiling by doing that. The reason we did that is we felt that customers didn't believe we could get flash to the same price as disk. And so we felt we really needed to shout from the mountaintop, if you will, that in fact we now have flash at the price of disk. So it's growing very well. It's now at the level where it can be meaningful for us this year at a top-line level, a needle mover there. So we feel quite confident that the E series is going to become a more and more significant portion of our overall sales.
One last thing I'll mention is that it's now on both of our products, both the FlashBlade product and now the FlashArray product, which brings its ASP down even further, making it even more affordable for more customers.
So you think in that flash to disk transition and kind of accelerating that, what are the big?
The other way around, probably.
Yes. Sorry. Just what are the inhibitors to getting there? Is it just getting over the mindset, you said? Or is it getting to a better macro environment where they're spending more on their environments? Is it just kind of waiting out price? What do you see as those?
It's a little bit of those first two things. And by the way, as we go into this new year, we now have these new 75 TB modules, so denser yet again. That'll allow us to bring the price down a little bit further. But a slower macro slows down customers' willingness to try new things; there are fewer new opportunities. A slower macro also means they tend to sweat their assets longer before they go to replace them. So a better macro helps us in the market transition. But I'd also say it's about changing the mindset of a lot of buyers who have not yet heard the message.
Yeah. I mean, I think if you look at the very, very short term, you have small effects, like customers who may have set budgets for the full year ahead; as Charlie mentioned, we've only had a partial year of sales, so that opens up. You also have, as you know, a lot of the disk environments that we're going to go replace being typically purchased and then run and depreciated over several years, so it'll take time for that to roll out. But if we step just slightly back from it, we view this as a very significant opportunity. We're at the point in time now where there's just literally no good reason to look at a disk environment and say, I've got to go refresh this, and I'm going to go do it with a disk-based system.
There's no argument on any of the attributes of performance, efficiency, reliability, all the things we've talked about. Previously, really, the only argument was cost efficacy. We've completely neutralized that.
OK. Perfect. Kevan, you guided to 10.5% year-over-year growth at the midpoint for fiscal year 2025 on earnings last week. We've outlined a lot of things: macro, subscription, some larger customer sales. Just what is kind of built into that guidance? And then just how much visibility do you kind of have in the current environment?
Yeah. So at a high level, a couple of things. I think the biggest change in providing guidance for this year was that we provided a guide on TCV sales for our Evergreen//One and Evergreen//Flex, so obviously a lot of work went into coming up with that annual guide. When I think about it from a macro standpoint versus a year ago, there is a sentiment improvement. A year ago, the sentiment was about a hard landing, a recession; I think that's eased to, hey, is this going to be a bit softer, and if there's a recession at all, what does that look like? And I do think that improved sentiment has been helpful, because I would also say that visibility has improved in terms of what we're looking at for next year.
Really, the biggest thing is the judgment on, when we're selling to customers, where do we continue to sell an Evergreen//One service option versus CapEx? It's really that judgment in terms of how that's going to convert between a sale of our solutions versus a sale of the services.
OK. Perfect. NAND pricing has most certainly been a tailwind to your gross margins over the last year. You've long indicated that longer term, you think your gross margins will be below what they're running at today. Just how much of that is influenced by, or how should investors be thinking about, the impact of current NAND pricing?
You want me to touch on that first? And you can come in. Look, NAND pricing, and Charlie hit this in his prepared remarks as well, is correlated with macro. So when we think about last year, it was heavily correlated, in my view, to macro. And what did that allow us to do? Obviously, the macro was a headwind for us, and you saw that. But it also allowed us to accelerate our roadmap for the E family to take out disk. Frankly, I think we were about a year ahead of our roadmap as a result of what we saw with NAND pricing. So that's a plus in terms of our strategy to take out disk.
When I think about improving NAND pricing, look, I think that one is correlated with an improving macro, which lifts all ships. I think that that's helpful. Then also, for competitors who compete on cost plus, it's a benefit for us as well. So I think there's some attributes that are beneficial while still appreciating that the long-term vision is that NAND pricing will come down.
OK. I mean, and just in terms of a lot's been made by competitors of forward purchases, just how do you think of kind of cash utilization or just balancing kind of forward investments in flash?
Well, look, we've got a fantastic direct procurement team. We have the benefit of leveraging our relationships with the flash vendors themselves because we're buying the raw flash. And I think that gives us a distinct advantage. And we've just done a really nice job managing that environment and have confidence that we'll continue to do that.
Yeah. It's not something we talk about on a quarter-by-quarter basis. But not only do we have an excellent procurement team; because we're buying raw flash, we have the ability to buy a broader, different set of flash. That is to say, we can switch from consumer flash to enterprise flash, from TLC to QLC, all depending on the market pricing of each, and put it into our product to take advantage of spot pricing, because we make it all appear high performance and high quality to our customers at the system level. So it gives us a huge advantage.
Are there any questions from the audience? We have one up here.
Thanks. This last earnings call was really eye-opening, and we saw it across the board in terms of how AI is really accelerating the adoption of flash. I was hoping I could get a better understanding of how broad that opportunity is. What data sets need to be on flash versus cold storage? And does Retrieval-Augmented Generation impact the use of flash as well?
Yeah. So I think we would argue, in fact we are arguing, that over time everything will be on all flash. Rob identified the benefit of it: even if we go to the lowest, let's say, price level of disk, flash is going to produce four to five times more performance at roughly the same price. Whether it's RAG, or whether it's just making the data available on existing systems today, you need more performance to make it available to an AI environment. Now, when you talk about LLM creation or even inference, that's all flash. To be clear, the hyperscalers can do it with disk, but that's because of the way they architect it, and it takes massive amounts of disk to be able to do that.
But even they, for the reasons we gave, we think are going to be going to flash. One example: when they do it with all disk and we substitute our flash, we can do it at about 1/10 the space, power, and cooling. If 20% of your data center power is going to storage right now, saving most of that 20% is a very big deal. You have to remember, these data centers, which are very sophisticated, have teams of people whose full-time job is to figure out how to improve power efficiency by 0.5% a year. By replacing disk with flash, we can do close to 20%. So it's a very big deal, and it's only part of what's going to drive the hyperscalers to flash over time.
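For readers who want to sanity-check the power claim, here is a minimal back-of-envelope sketch. It simply combines the two rough figures the speakers cite, storage at about 20% of total data center power and flash drawing about 1/10 the power of the disk it replaces; these are illustrative inputs from the conversation, not measured values.

```python
# Back-of-envelope: total data center power saved by swapping disk for flash.
# Inputs are the speakers' rough figures, not measured data.
storage_share = 0.20   # fraction of total DC power consumed by disk storage
flash_vs_disk = 0.10   # flash power as a fraction of the disk it replaces

# Savings expressed as a fraction of TOTAL data center power:
savings = storage_share * (1 - flash_vs_disk)
print(f"Total-power savings: {savings:.0%}")  # prints "Total-power savings: 18%"
```

So the saving is roughly 18% of total facility power, which is why the speakers round it to "close to 20%": the replacement flash still draws about a tenth of the old storage budget.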
But we really see that the benefit we're seeing from AI is not solely in the fact that you need to produce the highest-performance systems to feed the GPUs, nor solely in that you also want to back that up with warm data to be able to replace those models in those high-performance systems. It's that we think the entire data storage environment needs to be upgraded to be able to make use of that data. Yeah. And I think the first effect you'll see is people recognizing that there's just no place for cold data anymore, that entire cold tier. Why is it called cold data? Well, historically, people have kept it.
You have to defrost it.
Yeah. You have to defrost it. The reason was, people kept it around solely to satisfy regulatory or retention requirements. The only consideration was, how can I do this and minimize my cost? Well, now, whether it's Retrieval-Augmented Generation or other techniques that are yet to be rolled out, all of a sudden there's value in all of those bits of data, and it can't be cold anymore. I think that is really the major effect. We discussed it on the call. I think what we're going to see is an acceleration as people recognize that, an acceleration of modernizing these environments to unlock and defrost the frozen cold data.
All right. Well, that is a perfect place to end. Charlie, Rob, Kevan, thanks so much for being here.
Thank you. I appreciate it.