Great. I want to welcome everyone to the fireside chat that we're hosting this morning with the team from Pure Storage. I have Rob Lee and the IR team. Rob has been with the company since pre-IPO and is very knowledgeable about the product technology roadmap and the company's operations. But before we get started, I do need to read the disclosure statement. The statements made in these discussions, which are not statements of historical fact, are forward-looking statements based upon current expectations. Actual results could differ materially from those projected due to a number of factors, including those referenced in Pure Storage's most recent SEC filings on Forms 10-K, 10-Q, and 8-K. So to that end, I want to thank again the Pure team. And it's a very timely fireside, because we just had earnings from Pure and the peer group.
Rob, maybe you just quickly give us an overview of the outlook, and then I'll come back with more specific pointed questions.
Yeah, absolutely, Mehdi. And thanks for having us back. Look, as you mentioned, we just announced our Q4 and full fiscal 2025 earnings on Wednesday, capping off a strong year and a return to double-digit growth. And as Kevan remarked in his prepared commentary, we passed an important milestone of $3 billion in revenue for the first time. As we look forward to next year, we're looking for continued strength across the portfolio, driven by the strength that we saw in our E Family, which we discussed quite a bit on the call, as well as what Charlie spent a fair bit of time discussing in his prepared remarks, which was Pure Fusion.
This is really our capability and set of functionality that we're delivering on top of our arrays and platform to help customers implement that cloud operating model on their premises and start to build that network effect that has been missing from traditional storage for so long. So yeah, excited for the year ahead. And certainly, which also got quite a bit of discussion on the call, certainly looking forward to further developing our hyperscaler relationships, our lead customer engagement, helping them move closer to scaled ramp, as well as advancing other discussions with potential hyperscaler customers. So yeah, very excited for the year ahead. And I'm sure you've got some questions.
Yes. I do have some specific questions, but I think it's great to just quickly cover some basic items that relate to the storage market. I'm sure everyone is familiar with block, file, object, and cloud native, and there are also the capacity and performance types of storage products. But one of the most confusing areas, where I still think there is a disconnect between the investment community and the industry, is how these different storage types and products are organized, and how we should think about the metrics. You have the cloud, which most likely requires capacity; then you have the enterprise; and then you have AI. You have a slide in your earnings deck that lays out the various storage types, and then you have the market split of capacity versus performance.
Is there any way you can help us frame the picture: where we are in the industry, the mix, and how AI would change this?
Yeah, absolutely. Maybe let me help you kind of break it down a little bit. We typically think about storage markets as being driven by different workload sets and different operating environments. And so as you mentioned, certainly you've got the set of workloads for which customers are looking to run on the public clouds or build in a cloud-native fashion. So you've got one set there. When we look at the majority of our core business, that's serving workload sets that would traditionally be run and operated in an on-premise environment. And then when you unpack that a little bit further, what you tend to find is different levels of, call it price performance requirements, right? Certain types of workloads that are sitting on-premise that do require a higher degree, a higher level of performance, as an example, online databases, transaction processing systems, virtualization systems, et cetera.
All the way down to lower price-performance tiers, if you will, of storage that are also sitting on-prem. So as an example, think of long-term retention, archive, large content stores. And you've got everything in between. And so if you look at our portfolio, we generally break this down into three areas, right? We've got our higher-performance products, our FlashArray//X, FlashArray//XL, and our FlashBlade//S, that tend to be competing at the highest performance levels, whether that's driven by enterprise applications or AI. And those would typically be competing against all-flash offerings.
We've got a set of offerings in that mid-range, such as the FlashArray//C, which would typically be competing against hybrid disk-and-SSD, or disk-and-flash, systems at that medium performance level. And then what's a newer market for us with the E Family is really taking QLC flash, using our technology to drive a ton of efficiency and other benefits from that flash, and going to compete for what has traditionally been nearline disk-based systems. We're really the only ones on the market capable of doing that. So that's, at a high level, how we'd think about segmenting the different areas of the storage market and how that maps to our portfolio.
And then when it comes to AI, if inferencing is done in the cloud, would that be the performance X series, or would that be more like a mid-range product?
I think a couple of things. One is we're still in very early days of how AI application deployments play out and what inferencing looks like. That changes very, very rapidly. What we do see, if I think about the basic fundamental principles, is that with the development of AI a couple of things become very important. One is access to all of an enterprise's data. If you're going to build some new model to do better customer intent prediction or fraud detection, you're going to want to use context from all of your historical data on that customer to drive the decision. You've got to have access to all that historical data. Having access to large pools of data is very important. Number two, inferencing tends to be real-time, right? You're trying to make a decision in real time.
If you're deploying, again, a fraud detection model, you've got to make that decision when a customer is standing at the point of sale, swiping a credit card, whether you're going to authorize that or not. That has to be connected to your online systems where your production systems are running. And so what we think and what we see is that inferencing is going to drive a need for both higher-performance storage as well as connectivity to bulk long-term storage, that historical content that's so important. And then secondly, if you look at the inferencing applications, they very much tend to be aligned to the container ecosystem, built on cloud-native architectures, highly driven by open-source software delivered on containers. And that's why we see so much interest in the Portworx side of the portfolio in supporting those environments.
Got it. Okay. And by the way, for investors on the call, if you have any questions, feel free to send me an email. Okay, Rob, back to our conversation. Look, over the past couple of quarters, we've all been trying to understand what your increased activity with hyperscalers, especially the one you announced three months ago, would entail. And last quarter, the company said, "Don't expect any hardware sale." And as much as we kept articulating the same question in different ways, trying to understand how it works, you weren't ready to discuss it. This is why I was thinking about this set of parameters that drive the demand for storage. So as you engage with a hyperscaler, this could be for inferencing in the cloud or inferencing on-prem. Are you starting from scratch?
What is the starting point so that we could better understand how you've been able to penetrate? You may not be able to discuss the specifics of value add, but at least we could start to have some thought process. Sorry for the long question.
Yeah, yeah, absolutely, so let me unpack that. I think there are a couple of different elements to that question. Number one is better understanding the structure of what these engagements might look like from a business model point of view. Number two is understanding the entry point: why now? What's driving the increased interest and momentum? And then number three, how do we think about this in relation to inferencing, AI environments, et cetera? So let me start by just saying, and I think we've been trying to really get crisp in articulating and repeating this: when we think about the hyperscaler opportunity, we really think about it as somewhat orthogonal to what we're seeing in AI. Meaning the hyperscaler opportunity really is being driven by the hyperscalers as a whole, the entire cohort.
Still today, they are deploying a ton of hard disk drives, and they realize that there's a lot of inefficiency in that, a lot of power and space, and really operating costs that are associated with that that are not getting any better. The disks may be getting larger, but they're not getting any faster, and those environments are getting more and more difficult to run. They recognize that to move to a more modern architecture that has to be driven by flash. They also realize that SSDs, commercial SSDs, for a variety of reasons, aren't going to get them the efficiency, the reliability, and ultimately the TCO and power savings they need, and that really opens the opportunity for Pure and our DirectFlash technology to go meet that need. The hyperscalers very much are looking at storage as a general purpose horizontal architecture.
They don't think about individual workloads and standing up individual environments and infrastructure for each application. There are just simply too many applications, and so they tend to design, if you will, a data center generation design, a template. They tend to design that very horizontally for all of their applications, and so that really is driving the interest in the technology. And our lead customer, in particular, is very interested in the ability to apply DirectFlash across a wide range of price-performance tiers. They're very interested in having a consistent architecture that they could go deploy and tune for their highest performance needs, as well as their most cost-sensitive and coldest disk-based environments, and so we really see that as orthogonal to the AI particular application. Where AI is accelerating the interest here is it's creating a much more acute demand for power.
The build-out of AI environments, as you've seen in the news, is driving demand for power, power that the hyperscalers are having difficulty bringing online. And so the ability to go and save a ton of power by modernizing disk-based systems to flash is quite attractive for them and has accelerated our conversations. And then the first part of your question was, hey, help us understand why there's no hardware as part of the sale, et cetera. So let me unpack this a little bit. First of all, the interaction and engagement with the hyperscalers is very much around technology integration. This is not going to be a sale of existing products and existing systems we have. We've been working with the engineering teams at these hyperscalers to find the best integration of our DirectFlash technology with their designs, their servers, their hardware, their software.
As part of that design, we would not be contemplating a sale of hardware to the hyperscalers. Best way to think about this would be more of a software and technology licensing arrangement, allowing the hyperscalers to utilize their own supply chain capabilities to procure the hardware. Hopefully that kind of gives you some color and way to think about it, maybe.
Yeah, and this is actually a good segue to bring back Fusion. Is Fusion, as an ability to look at the entire array, part of that software that you were referencing? Or are they two completely different things?
So yeah, if we segue to Fusion: Fusion is really delivering to enterprise customers, on our enterprise arrays, a lot of the tools that allow them to operate much in the same way that hyperscalers operate their own environments. Fusion was very much driven and inspired by the learnings that we, Pure, and our engineering teams gained through working with the hyperscalers, building a deeper understanding of how hyperscalers operate their environments, and building those same sets of tools and capabilities into our enterprise products. And what Fusion allows customers to do is move away from the legacy model of, hey, I have an extra workload.
I'm going to go deploy an isolated, siloed environment for that workload, giving that customer one more thing to manage, one more thing they have to configure and keep in sync, one more thing that could potentially strain resources, one more thing that has to be upgraded, maintained, and modernized in the future. Fusion shifts that to a model very similar to how the hyperscalers themselves operate: if a customer needs more capacity or more performance, or they have a new workload, they can just add an array to the Fusion fleet, and Fusion will automatically go and configure it, help the customer automatically provision the storage capabilities needed for the additional workload, and centralize policy management, resource optimization, compliance, security, and all of the other things that are a huge challenge for enterprise customers today.
And so we released Fusion v2 about a quarter ago, and the initial adoption has been tremendous. Dozens of customers have deployed it through a software upgrade and are now managing their fleets in this much more cloud-like way. We expect that adoption will continue to grow at the very high pace it has, and we expect that it will start to generate more of a network effect that we haven't really seen in storage before. Which is to say: if a customer already has a fleet they're managing under Fusion, it becomes that much easier, and that much more valuable, to add additional arrays to that fleet than to manage a siloed, disparate array separately. And so that's maybe the way to think about Pure Fusion.
Got it. Got it. I was trying to use Fusion to get a better understanding of what exactly you're doing with the hyperscaler. But I guess Fusion right now helps optimize the existing installed base among enterprise customers?
Yeah. Think of Fusion as taking some of the learnings about how hyperscalers themselves operate and bringing those capabilities into the enterprise customers' hands.
Right. Okay. So let me go back to my question list. Compared to pre-COVID, six or eight years ago, you now have multiple products. There's definitely been product diversification. You're looking at various segments of the market within the enterprise and within cloud infrastructure. And now you're turning the model into a more predictable subscription base, right? So far, am I looking at the model correctly before I ask a question?
Yeah. No, I think that's a good way to think about it. What I'd say is that we've always had a very strong subscription base driven by our Evergreen//Forever. Where we've really expanded that is with Evergreen//One, which is the full as-a-service offering. But yeah.
Yeah. Now, the challenge is, as all of these vectors come together and you try to add more predictability to the model, we have to go through this transition. The quarterly report and the near-term guide may not really be a reflection of all of these things that you're doing. When I look at, let's say, ARR, you reported $1.7 billion, up 21% year-over-year. If you could just double-click here: how should we think about this level of recurring revenue broken into services and software versus hardware? And I have a follow-up here.
Yeah. So if we look at our subscription revenue base, there are really a couple of key components that layer in there. The largest component would be our Evergreen//Forever subscriptions, which would typically be attached to our traditional product sales. Those would be the largest component simply because they've been operating at scale for, geez, probably over 12 years. Second, you've got your full as-a-service offerings with Evergreen//One and Evergreen//Flex, on which we saw tremendous growth through FY24, and we've had a number of discussions on Evergreen//One pickup within FY25. That would be your second component. And then we have some of our other subscription services, such as our Cloud Block Store offering and other solutions that would be a part of that.
I see. Okay. Let me ask the question a different way. If I look at FY24, you averaged roughly 22% year-over-year growth in ARR. And it's good traction, but I want this to be a bigger part of your total revenue. So if you did about $3 billion of revenue in FY25, how should we think about ARR reaching $2 billion? I'm assuming that your revenue is going to grow, but your ARR is probably going to grow much faster, right? And I'm just trying to figure out: is it going to take a couple of years to get to that $2 billion handle for ARR? Would it require the hyperscalers to come in and turn what you're doing on the software side into multi-year agreements? What are the different parts of it?
How should we think about scaling the ARR so that it becomes a bigger part of your annual revenue?
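Purely as a back-of-envelope illustration of the arithmetic behind this question (the roughly $1.7 billion ARR and ~21% year-over-year growth come from the discussion above; the $2 billion milestone and the constant growth rate are assumptions for illustration, not guidance):

```python
import math

# Back-of-envelope: how long for ARR to compound from $1.7B (reported)
# to a hypothetical $2.0B milestone at a constant ~21% YoY growth rate.
# Illustrative assumptions only, not company guidance.
arr_now = 1.7       # ARR today, in $B (from the discussion)
arr_target = 2.0    # hypothetical milestone, in $B
growth = 0.21       # ~21% YoY ARR growth mentioned above

years = math.log(arr_target / arr_now) / math.log(1.0 + growth)
print(f"~{years:.1f} years at {growth:.0%} growth")  # -> ~0.9 years at 21% growth
```

On those assumptions, the $2 billion handle arrives in under a year of compounding, which is why the mix of the ARR, rather than the timing, is the more interesting part of the question.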
Yeah. Okay. I understand the question now, Mehdi. So again, I'd go back to the current mix of our subscription revenues. The key components there would be Forever and Foundation, the largest component, and then Evergreen//One. Forever and Foundation, because they're at scale, we would expect to continue to grow more or less tracking our traditional product sales. So as far as levers, if you will, to drive ARR growth, we'd really be looking at the growth of sales in Evergreen//One and Evergreen//Flex, our full as-a-service offerings. As those continue to grow, they would obviously contribute to ARR growth in a larger way. We saw really strong growth of those offerings in FY24, and we've discussed some of the dynamics that we saw as FY25 progressed.
We would expect Evergreen//One sales to continue to grow as we look into FY26, and that's been contemplated in the guidance that we put out this week.
Yeah. Got it. Okay. I was trying to get some color on the mix, but you're...
So I hear you. We're not going to break out the mix quantitatively. But like I said, the largest components there really would be Forever and Foundation.
Yeah. And that's a reflection of the model changing. The predictability is coming. It's just that we're trying to model it, and you're not ready yet to break it out.
Yeah. That's right. I think as we exited FY24 and throughout FY25, we gave you a sense of Evergreen//One TCV sales just to give a sense of where that business is going. As we go into FY26, as I said, we do expect continued growth there. We're not providing a particular guide on it, but we would be planning to give you quarterly progress on those Evergreen//One TCV sales.
Okay. Got it. Now, just one thing regarding your FY26 guide. On the call, you talked about gross margin rebounding to 65%. I think Kevan mentioned getting back on track to hit 65%-70%. And if I also look at your OpEx growth, then the FY26 operating margin guide of 17%, to me, is conservative. Even if I plug in 65% for product gross margin, 17% operating margin seems conservative. Am I missing something here, or do you just want to regain momentum on product gross margin improvement before you talk about the prospect of upside to the 17% operating margin?
Yeah. I wouldn't necessarily think of it that way, Mehdi. Our long-term model really hasn't changed. And let me unpack that a little bit. When we look at the gross margin performance we saw across the whole of FY25 and what we're expecting for FY26, we do expect product gross margins in particular to moderate back to our mid- to high-60s long-term range as we transition through FY26. And really, if we look at the dynamics we saw in product gross margins, particularly in the back half of FY25, that was an effect of the strength of our E Family.
We discussed this on the call, but just very quickly to recap this, we saw tremendous growth and strength in the E Family relative to the rest of the portfolio. And as you know, that's where we're competing with disk-based systems. At the same time, with that elevated level of mix, we did experience an elevated level of QLC NAND costs, which in the rest of the portfolio really had very de minimis effects on product gross margin. In the E Family, where we're competing with disk systems that are on a much more stable price basis, we've made the conscious decision to go and invest in taking that market. And so that did show itself in downward pressure on the product gross margins because of the larger mix of E Family in the year. As we go into FY26, we do expect that to moderate.
We do expect further strength in E, but we expect QLC NAND costs to moderate somewhat, and so that's really what's driving our confidence in a return to our historic longer-term range of 65-70 points of product gross margin. And then as this translates to operating margin, there's really no change in philosophy there. As we've discussed, all things being equal, we would be planning to operate the business to create, call it, a point to two points of operating margin expansion a year. However, with the hyperscaler opportunity, we're taking the opportunity to forward-invest in supporting the hyperscalers, both the current customer that we have as well as the prospects that we're speaking with.
And so we're really looking at taking a year or so to invest in that opportunity: the R&D associated with working with the hyperscalers, doing that product integration, accelerating our DirectFlash roadmap to support those hyperscalers, as well as building out more supply chain and operations capabilities to support that level of scale. With that additional investment, you see it play out as we plan for an essentially flat operating margin profile in FY26.
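A purely illustrative sketch of the margin arithmetic under discussion here (the 17% operating margin guide is from the conversation; the blended gross margin and OpEx percentages below are hypothetical inputs chosen only to show the relationship, not company figures):

```python
# Illustrative only: operating margin is gross margin minus OpEx as a
# share of revenue. The 17% FY26 operating margin guide is from the
# discussion; the inputs below are hypothetical, chosen to show how
# such a guide could fall out of the two levers.
def operating_margin(gross_margin: float, opex_pct_of_revenue: float) -> float:
    """Operating margin = gross margin - OpEx intensity (both as fractions of revenue)."""
    return gross_margin - opex_pct_of_revenue

# A hypothetical ~70% blended gross margin with OpEx at 53% of revenue
# lands on the ~17% operating margin in the guide.
print(f"{operating_margin(0.70, 0.53):.0%}")  # -> 17%
```

The takeaway is simply that holding operating margin flat while product gross margin recovers implies the incremental gross profit is being redirected into OpEx, which is the forward investment described above.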
Yeah. Got it. My only concern here is, and correct me if I'm wrong, the type of QLC NAND that you procure: there may not be too many manufacturers. There may only be one or two, with additional capacity coming online in the second half. Would you agree or disagree with that hypothesis as it relates to the supply of the QLC that you need?
Right. And I think your question is specific to supporting the hyperscalers, right, versus the core business.
Yeah, absolutely. Right, the QLC NAND for the hyperscalers.
Yeah. Right. So, specific to supporting the hyperscalers, a couple of things I'll bring your attention back to. Number one: when we announced the design win and discussed QLC, we gave you some sense of the scale we were expecting, which was about an exabyte or so of deployment this year in terms of advanced testing and the beginning of the build-out, with the larger capacity deployments starting to roll in in our fiscal 2027 (calendar 2026) timeframe, which we expect to be in the double-digit exabytes. So when we think about timescales, first of all, we're working with our NAND suppliers, our supply chain, and the hyperscalers to plan for the calendar 2026, calendar 2027, and beyond time range. So you have that time factor in there.
The second is, as you've seen, we have expanded our partnerships and collaboration with key NAND suppliers, be it Kioxia, be it Micron. And we're looking to further expand those partnerships as those suppliers recognize the significant opportunity in the hyperscalers to displace disk. Through those partnerships, they're investing in building additional capacity to meet those needs. And then lastly, I'll also point out the technology improvement, right? When we look at wafer density, essentially how many bits a particular wafer can store, and at the bit growth within a NAND chip, we expect that to continue to increase. We have really good line of sight into the NAND roadmaps. And so even the existing capacity will be able to produce more bits in the future.
So when you net all that out, we feel confident working hand in hand with our hyperscaler customer prospects as well as the supplier community that that capacity will be there to meet the demand.
Yeah. And when you say that you're going to add more to the list of NAND suppliers, I guess that would include the Korean suppliers. Is that a fair statement?
We haven't identified specific NAND suppliers outside of the two that we announced expanded partnerships with, but I think it's a fair statement to say that we're speaking with all of them.
Yeah. Okay. That's fair. And I just want to go back to Fusion. That's included in the subscription services revenue, correct?
So today, Fusion is not broken out as a separate line item; it's not monetized separately. Fusion is a capability delivered as part of our Purity software, which is included as part of the Evergreen subscription and the software that we're delivering on our arrays today. We do see it driving additional stickiness, and we would expect that to be a key pillar of growth in core sales. As we add additional capabilities into Fusion, we do see potential future monetization opportunities. But that's really over the horizon, if you will.
Gotcha. Okay. So on subscription services, what I want to better understand is, when we go back to the systems installed more than six or eight years ago, is there an opportunity for replacing those? And when you recognize services here, some of that has to do with upgrading the installed base, correct?
Let me start with the first part of the question. Unlike the competitive set, unlike the rest of the industry, we don't replace our previous sales. What I mean by that is, in our traditional product sales, a customer may buy an array, buy a product, on day one, but they never rebuy that product from a CapEx, product-sale point of view. Because Evergreen//Forever is attached to that array, through that subscription we will continue to modernize the array over time. And we have customers that have been with us, modernizing and expanding arrays under that Evergreen//Forever subscription, for over 10 or 12, probably even more, years.
And so the other way to think about it from a P&L point of view: what is a traditional product sale on day one transitions, by year three or four, to a subscription recognition, right? That customer is still with us and is continuing to get the benefits of the technology and modernization, but instead of rebuying that original array three years later, they're just continuing on with us through that Evergreen//Forever subscription. So I wouldn't think of it as an opportunity to resell or upgrade that array. In fact, the benefit we have is that we don't have to go and replace our installed base; we don't have to go and make that additional sale. And you see that in the stickiness, and you see that in the customer retention.
Sure. Okay. Maybe it's just me, but I'm a little bit confused here. So if the array and Fusion and Evergreen are all part of the same value proposition, is that in the product revenue? And in that context, what is in the subscription services?
Okay. So again, with a traditional product sale, the customer is buying the array, the hardware and software, and we'll recognize that product sale upfront. But their value over time is also attached to the Evergreen//Forever subscription that's attached to that product sale. We recognize that Evergreen//Forever subscription as subscription revenue, ratably, over time. Now, in the traditional model within the industry, that original product a customer bought, they're going to have to rebuy another array in three or four years to replace the original one. In our model, we don't have that replacement sale. That customer comes to year three, year four, year five through the Evergreen//Forever subscription, which shows itself on the subscription revenue line. We're able to keep that original array modernized. If necessary, we'll bring in faster controllers, new CPUs, new hardware, new software.
But that original product has now been modernized completely under that Evergreen Forever subscription, which is showing itself on the subscription revenue line. And so that's kind of the way to think about it. And as I said before, Fusion is not separately monetized today. That would just be part of the features of the software, if you will.
Yeah. Okay. All right. And then let me move to one last item. I think there was a reference on the call to an upcoming announcement at GTC. You have talked about conversations with hyperscalers and increased engagement, and you referenced this event coming up. I imagine this would be part of the whole evolution of AI, especially as we go through different generations of AI products. So can you share with us any insight as to how Pure could capitalize on GTC and better illustrate the value that you're providing to hyperscalers? Kind of an open-ended question.
Yeah. So GTC, as you know, is the NVIDIA event really focused on AI, and our announcement really is focused in the AI space. When I step back and think about AI training environments today, we actually serve a lot of enterprise AI environments at, call it, the small to moderate end of the performance scale. We also do really well serving the hyperscalers, as we've talked about with you all, in the world's largest environments. There's an area in the middle, right, of moderately high-performance environments that has long been dominated by traditional HPC, high-performance computing, providers, where we haven't historically been super strong. But we'll be announcing some product introductions and extensions at GTC really aimed at extending our core technology and capabilities to go after that traditional HPC AI segment.
I would encourage you to tune in and look forward to talking to you guys about the product details perhaps next quarter.
Okay. Rob, we have reached the end of our time. Is there anything else you want to share? Any final thoughts before we wrap it up?
No, I think this is a pretty wide-ranging conversation. So thank you again for having us, Mehdi.
Okay. My pleasure. I wish you a great weekend, and thanks everyone for joining us.