Great. Well, thanks everyone for joining us again. Day one of BofA Tech Conference here in 2024. Appreciate you all coming. I see some familiar faces in the room. Welcome. Great to have you all back here again. Delighted to welcome Pure Storage to our afternoon session today. We have CFO Kevan Krysler, and we've got founder Coz, John Colgrove. And we're delighted to have you. I know, Coz, you don't do a ton of these, but I also heard that you brought some goodies with you, so I'm really excited about this discussion today. And Paul Ziots from IR is also here if you want to reach out with any questions after. Paul's handed me some forward-looking stuff that he wants me to read.
So statements made in these discussions which are not statements of historical fact are forward-looking statements based on current expectations. Actual results could differ materially from those projected due to a number of factors, including those referred to in Pure Storage's most recent SEC filings on Forms 10-Q, 10-K, and 8-K.
Very nice.
Okay.
Very nice.
Sorry to get that done as well.
You know, you don't have a future in radio. You've got to be able to do that in half the time.
I know. I know. I'm too slow. We're going to get like an AI automated thing to hit the button and go super fast, right? Like how I watch my YouTube videos. All right. Well, welcome. Thanks for joining. Pleasure to have you. I know there's a lot to talk about, and it's hard to condense all this into 30 minutes, but maybe to kick it off, quickly first to Kevan. I know, Coz, you're going to do a lot more of the talking, but from your perspective, Kevan, you reported results last week. What would you say are the top few takeaways from the call? And then I want to jump into some of the tech discussion.
Perfect. Yeah, we'll tee it up for Coz. But when we think about the Q1 earnings, it was a great start to the year. Good strength in terms of double-digit revenue growth, and we outperformed both on the top line as well as on our operating profit. When we think about our top-line performance in Q1, I would say there's outperformance on product revenue itself, really driven by Global Enterprise. We had a nice quarter with Global Enterprise, as well as our FlashBlade portfolio. Across the board, our FlashBlade portfolio did very well. Now, that also includes FlashBlade//E, which is our price-performant solution, where we're looking to take out disk as well as lower storage tiers. So then you've got the profitability. I think from a constructive perspective, Evergreen//One came in a little bit lighter in Q1, and we can talk about that.
Obviously, progress on the hyperscaler opportunities that we alluded to as well in the earnings, and Coz can spend some time walking through that. Then updates in terms of the AI opportunities in front of us as well.
Okay. Great. Yeah. No, thank you, Kevan, for that quick recap. So Coz, you founded the company in 2009, and you had a vision about the data center of the future: an all-flash data center. So do you want to talk about what progress has been made toward that vision, and where we are today? Where are we going?
Well, look, I think when we started the company, we wanted to lead the transition to all-flash, and we started that with the highest performance use cases because what was killing disk was performance. And I like the way you're teeing that up for me. And so where people were paying the most and flash was the least economical, it made sense. But we always viewed it as eventually disk would be completely gone. And if you go and talk to somebody, "Well, will disk be the technology in 100 years?" Of course not. "Oh, what about 50 years?" People would say, "No." And you start talking 20 years, and for some reason, people would still be big on disk. And by the way, disk is actually 70 years old, so it has been around a long, long time.
But the thing that was killing it was performance. And so that's where we started. We've now reached a point where today we're looking at TCO numbers, and we're saying, "Even if the disks were completely free, all-flash is better." And that's quite a statement. And probably most of you are like, "Wait, that can't be true." You have to recognize what you're getting when you buy a disk, and I actually have one here. You buy one of these things. This one weighs 1,613 grams, which is roughly the same weight as a modern one. And it's big, it's heavy. I could drop it on this table, and it would break the table. It weighs as much as your laptop, unless you have a very heavy Mac on you. I don't see a lot of Macs here that are that heavy. So this is archaic.
Now, today, one of these things, what do you do? Well, okay, you put it in a server. All right. You've got sheet metal around it. You have power supplies. You have cables. It only lasts for a few years before it breaks. Roughly speaking, 1.5% of them break every year. And in fact, when they get to be older, 4%, 5%, 10% of them break in a year. And so all of that has cost. And then the power has cost. You add up all those costs, the people, the power, all the equipment you put around it, that adds up to more than building things out of flash these days, or at least building things out of our DirectFlash. And one of the things that drives that is the density, right? So this is a solid-state drive that we would have sold 10 years ago.
In fact, we actually did sell this one about 10 years ago. And inside this, this is flash pretending to be a disk. Flash isn't a disk. So flash has big erase blocks. Think of it as you can't overwrite data in place on flash, so you write a block. There's a mapping table in here that says, "Oh, this block's over here." And then you go to rewrite that block, and the mapping table gets updated to say it's now over here. And that is complicated firmware. It requires memory to run at decent speed. So inside an SSD, you see a bunch of memory. And for those of you that follow it, DRAM is not getting cheaper that fast anymore, and it is not getting larger that often.
But that memory and that firmware in here that does all the remapping makes this bit of flash pretend to be an SSD. And we started doing DirectFlash. Now, this is a 75 TB DirectFlash Module. So for starters, you think about it, one of these today replaces like three of these. It has way more performance. It uses way less power. It is 10 times as reliable, and it has twice the lifetime. That means many fewer human visits to the data center. And every time you have a human visit to the data center, you have a bunch of costs, and you have the potential for the human to make mistakes and outages and things. But with this DirectFlash, the key, and if you took a look at this, it's like standard flash, a standard flash controller under an extremely attractive heat sink.
Not much memory because what we're not doing in here is we're not remapping all those blocks. We do that up in our controllers. It means these things are 10 times as reliable because the only thing to fail in them is the firmware. I mean, eventually, some of the electronic components will fail. Eventually, the flash will wear out. But if you actually do the math, for most use cases, the flash will last 70 years, and we know something will corrode and wear out long before that. So these things, the DirectFlash, and this DirectFlash software that we have as a layer above is what lets us get that efficiency. And that's what's going to drive the hyperscalers because we can go to them, and we can make the case that just replace your disk with DirectFlash is more economical.
And that's before they start to factor in all of the layers of caching and other things that they have to make the disks perform better. Disks, the performance is awful. So when the hyperscalers talk to us, we get stories like, "Oh, gee, yeah, we could buy new 32 TB hard drives soon, but we can't use the space because it doesn't have enough performance." We had one hyperscaler we were talking to a few years ago, and we did a model with them of, "Okay, what would an exabyte look like?" And it had 18 TB hard drives in it because that's what they were using then. And we talked to them again this year, and we're like, "Well, we should update your model." How about we go to 26 TB hard drives? They're, "No, leave it at 18 TB.
We can't use the extra space because there isn't enough performance." And that's why flash is going to completely replace hard drives. And flash is just continuing on. This is something we're going to ship later this year: a 150 TB DirectFlash Module. Same power per module, so all of a sudden you're using half the power per terabyte. And when you're replacing disks, this has way more performance than the now roughly six disks it will replace. When we ship a 300 TB module at the end of next year, again, same performance, same power. So it has way more performance than the disks it's replacing, and the story gets better and better. And on the people side, roughly speaking, when you talk about all the human effort involved, one of these costs about as many people to manage as one of these.
And so when there are 40 TB hard drives in, let's say, 1.5 to 2 years, and there are 300 TB flash modules, that's 7.5 times fewer devices, and a lot of people would like to cut their human cost to 1/7.5 of what it is today. So that's why we're so convicted that we're still at the start of replacing all of the hard drives with flash.
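Coz's device-count arithmetic can be sketched as a back-of-envelope calculation. This is an illustrative sketch using the capacities quoted in the discussion, not Pure's actual TCO model:

```python
# Back-of-envelope: raw device count needed to hold 1 exabyte
# (1,000,000 TB), ignoring RAID/erasure-coding overhead, at the
# per-device capacities quoted in the discussion.
EXABYTE_TB = 1_000_000

def devices_per_exabyte(capacity_tb: int) -> int:
    """Ceiling of EXABYTE_TB / capacity_tb (whole devices only)."""
    return -(-EXABYTE_TB // capacity_tb)

hdds = devices_per_exabyte(40)    # future 40 TB hard drives
dfms = devices_per_exabyte(300)   # future 300 TB DirectFlash Modules

print(hdds)         # hard drives per exabyte
print(dfms)         # DFMs per exabyte
print(hdds / dfms)  # roughly 7.5x fewer devices to manage
```

If management effort scales roughly with device count, that 7.5x ratio is the human-cost reduction Coz is pointing at.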
No, that's a very compelling argument you make there. So what are some of the reasons why this adoption has not happened even faster? It feels like some of these positive trends have been around for multiple years. You just went through a period where flash pricing came down very dramatically, and now we're in an era where it's bouncing off of these super low levels. But when we think about it from a broader adoption standpoint, what's been the gating factor at these hyperscalers? Is it a process of education that takes time? What is it that's been going on?
Well, I think there's a combination of a few things. Where they've needed more performance, they've bought SSDs. And they've tried several initiatives to do what DirectFlash does: flexible data placement, open-firmware SSDs, zoned drives. And they haven't had success because you have to do it down at the flash-mapping firmware level. And the flash vendors are no good at the system software layers above, while the hyperscalers have focused a lot of people at those higher layers and aren't good down at the base. And so they haven't succeeded in those efforts. And you look at it, and it's not necessarily the most obvious thing in the world until somebody comes and talks to you about it, because an exabyte is a billion gigabytes. And the hyperscalers are thinking, call it, 100 exabytes a year.
That means a penny of difference per gigabyte is 100 billion pennies, or $1 billion. And they look at a disk, and they say, "I can buy a disk for a penny." They say, "I can buy flash for $0.03, $0.05." And they think that means it's billions of dollars more expensive. Now, they're all trying to expand their data centers with GPUs, and they're all realizing, "I need to get a nuclear power plant built next to my data center." Those are expensive. And so, again, one of the hyperscalers we're talking to was talking to me a couple of weeks ago. The engineer there was saying, "Well, we're going to need about 125 MW for the storage for this one application, and that's not for one deployment. You're going to cut it to 17 MW." Their budget is in watts, not in dollars.
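The "penny of difference" arithmetic works out as follows. This is a minimal sketch; the 100-exabytes-per-year volume is the rough figure quoted above, not a published number:

```python
# An exabyte is a billion gigabytes, so at hyperscaler volumes a
# one-cent-per-gigabyte price difference compounds into real money.
GB_PER_EXABYTE = 1_000_000_000
annual_exabytes = 100        # rough annual volume quoted in the talk
delta_per_gb = 0.01          # one penny of price difference per GB

annual_delta = annual_exabytes * GB_PER_EXABYTE * delta_per_gb
print(f"${annual_delta / 1e9:.0f}B per year")  # $1B per year
```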
The power is driving it more than anything else now. The GPUs are very power-hungry. They're all trying to deploy hundreds of thousands, millions of GPUs. That's where the storage just gets that much more compelling. They kind of needed that event, I think, to get them to look at it differently, to get them to look at the actual costs.
Sure. So when you look at the world of storage, there is a subsection of it that obviously requires high performance. Some of it, very clearly in the AI world, does. And then there is a subsection that probably doesn't require very high performance. So if you cut the industry by that metric, how would you think about the attach rate of flash within the high-performance category? And what is happening in the world of AI? Are you seeing the tier-two CSPs start to say, "Gosh, why are we not building this on a Pure architecture?"
So high performance is all-flash, and it has been for a while. So when they're looking at building new, they're looking at, in effect, building it either with SSDs or with Pure. And these dense DFMs are all about replacing disk, okay? So if I need gigabytes a second of throughput per petabyte for my GPUs, I can use the same DFM architecture, but I need a much smaller module. And that's where you still see SSDs in that size range. And so then it's building on SSDs or building on Pure. We focused our products and the company from the beginning much more on the enterprise market. And so a lot of the resiliency and reliability and things like that we put in aren't as valuable to somebody who's building a GPU cloud.
So again, we're doing some engineering work to come more at that side of the business. Some of the stuff we're doing with the hyperscalers, the co-engineering we're talking to them about, and with some other large companies, flows more in that direction. And so it's a bit of a new product direction for us. That said, the GPU training, which requires all that throughput, is a tiny amount of data. Everybody remembers ChatGPT as like the seminal AI moment. ChatGPT was trained on the same amount of data as my cell phone holds. It was GPUs crunching for months, thousands of them, on a tiny bit of data. The inference market is where the real dollars are going to be, because I take the results of my training, and I apply it to all the data I own.
Now, all the data I own, I need that to be reliable, resilient, highly available, all the things that Pure has built into our products. So that's where we're getting a set of deployments. We have a couple of GPU cloud wins. We have some in hyperscaler AI things where they did not do the homegrown. But the big storage wave is going to come with the inference. I think it's going to be a slightly slower wave than people think. And by that, I mean it's going to take five years, seven years for people to do it because it's not one or two people deciding suddenly, "I'm going to do everything with GPUs." It's every major organization out there saying, "I'm going to start deploying this," and they're going to deploy it application after application after application.
And so it'll be a several-year process for businesses to deploy it, but that's where the big wave in storage will come.
We've sort of heard Jensen talk about maybe 40%+ of the GPU deployments out there already doing inferencing. But I think it's very centric to some very specific use cases that have to do with social media recommendation engines and things like that. So when you think about inferencing, or at least when I think about inferencing, I don't think of super high performance. Correct me if I'm wrong there. Do you think that you require that high performance, which is where it seems like your tech intersection is in some ways?
You need high performance. You don't need ultra performance. So don't think of feeding a GPU gigabytes a second, but think of it as the data infrastructure that, pick your favorite General Motors, Boeing, Coca-Cola, Safeway, whoever, deploys. It'll suddenly want twice the performance that it has today, let's say. So performance needs will go up. Data volumes will go up. But they're going to get the value because Safeway is going to be really good at analyzing, as soon as I walk in the store, where am I going, and what am I doing, and how do I buy more, right?
The same way I guarantee when I finish today and I go and get into my car, my watch will tell me, "Oh, it's going to be 1 hour and 17 minutes to drive home, and you should take 101, and there's traffic here." And I won't ask it. It'll just tell me that. And so the inferencing that you're talking about, yeah, the hyperscalers are doing that kind of inferencing, and a couple of others. Our path to that is in through the DirectFlash, replacing all their hard drives, because they have so much data that they're inferencing over, using hard drive data for that. But all the rest of the enterprises, it's already high-performance storage. It's going to continue to be high-performance storage. It's just going to get a little more high-performance and a decent amount more volume.
Okay. Okay. That's helpful. So as we think about the evolution here for Pure, how much should we think that the hyperscalers are going to be competitive in trying to develop something by themselves versus coming to Pure for a solution? Why has that not happened maybe?
Well, as I alluded to, they've tried several times. They tried to partner with the flash vendors around open-firmware SSDs so they could basically write their own firmware to do it. They've been doing stuff with zoned drives and flexible data placement. But the flash is very quirky to deal with. You have to get in there and understand each generation in detail. So you might not have been observant enough to notice, but you'll see the flash chips on these two devices are actually a slightly different physical size. That's because one comes from one manufacturer, and one comes from a different manufacturer. For every generation from each manufacturer, we have to go and characterize the flash and use our flash management software to get the best out of it: the most efficiency, the most consistency, the longest life, the most reliability, and so on. And it literally changes every generation.
And we've been doing that for a decade. We've built up a lot of expertise in that. And it isn't easy for them to go do that. So for them, at this point, it's a simple equation. Should I go have my engineers do that, or should I go have my engineers do something potentially more valuable to me and get it from Pure? You and I both know there's some that have enough of a bias that there'll be a couple that are like, "Oh, we're just going to build it ourselves." And maybe if they fail another time or two, then they'll come to us. There's some that will just say, "I'd rather have my engineers do something more valuable, and I'll happily get it from Pure." And you'll see over the next few years how that is.
I mean, Charlie's said several times he feels confident in a design win. I feel very confident we're going to have a design win. But until the day we get the deal, we don't have it yet. And we have to keep operating like that. That's why, when we talk about our financial results, we've never put anything in for that, because only when it happens will we know it. And look, even then, it's going to be a ramp. Once we get a design win, it's like nine months to qualify it, and then they start rolling it out. And they'll test the rollout in a few places. It takes time.
Sure. Kevan, maybe one for you. Just to address this opportunity, what kind of go-to-market changes do you think you need to make, if any?
Well, the go-to-market changes dramatically for the hyperscaler opportunity specifically. I mean, the go-to-market there is really the solution itself and the design win and the rollout. So you really don't have a significant go-to-market initiative. You've got business development activities that would be at play, but you wouldn't have the traditional go-to-market motion that you otherwise would have with enterprise or commercial.
When you think about the ramp of hyperscaler, Coz, you just said it's multi-year. It takes time. How do we dimension the total size of the opportunity at hyperscalers for Pure?
It's a great question, because you'd want to size it a couple of different ways. I mean, to Coz's point, the ramp is going to take some time. I do think, and we have a view, that once we get some hyperscaler wins under our belt, there will be a domino effect of sorts. And then the real question for us: we're confident that the model will expand operating profit, be accretive to our operating profit in terms of dollars. The commercial construct really depends on whether or not we want to take some trade-off on top line versus the gross margin profile. I think at this point, we probably have a bias towards trading off top line for improved gross margins. I think that's better for us in the long term.
As we negotiate the different commercial constructs, one of the other areas would be warranty, right? We're talking about a 10-year warranty on this solution, which, again, when you think about the TCO comparison today, disk being a 5-year life versus 10-year, you can just only imagine that benefit from the TCO lens. But you've got the warranty piece. You've got something new with hyperscalers, which is software support. And what does that look like over a 10-year period associated with the DFMs and Purity that goes along with it?
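Kevan's warranty point can be illustrated with a simple refresh-cycle comparison. The device lifetimes are the figures mentioned above; the purchase-count model is a hypothetical sketch, not Pure pricing:

```python
import math

def purchases_over(horizon_years: int, device_life_years: int) -> int:
    """Number of buy/replace cycles needed to cover the horizon."""
    return math.ceil(horizon_years / device_life_years)

# Over a 10-year horizon, a 5-year-life disk is bought twice,
# while a device carrying a 10-year warranty is bought once.
print(purchases_over(10, 5))   # 2
print(purchases_over(10, 10))  # 1
```

Halving the number of refresh cycles is a large part of the TCO benefit from the 10-year warranty lens, before counting power or management savings.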
Okay. Okay. That's great. I just want to go back to one of the points you opened up with, which was a little bit of deceleration on the Evergreen//One and Flex side, which was a little bit of a different trajectory relative to what you saw in the last few quarters prior to that. What do you think has changed, if anything, in terms of why that shift happened? Is it that subscription was more attractive in what was perceived to be a weaker macro, and now it's not quite as weak, or are budgets improving? What's your perspective on that?
Well, at this point, I don't view that there's been a change in the environment that gave rise to our TCV sales for Evergreen//One being a little bit lighter in Q1 than we were expecting. I think there are a few things at play. We were killing it on Evergreen//One throughout last year. Q4 was a fantastic quarter for us. So I think there's a bit of a take-a-breath: we cleaned out everything we could, converted it, and now we're developing new opportunities. And I say that because what we saw in demand gen and pipeline build for the Evergreen opportunities was quite robust, which is the reason why we were able to reiterate our guide and expectations of $600 million of TCV sales for this year. I also think that the AI opportunity will lend itself very nicely to the Evergreen//One solution.
When you think about the magnitude of CapEx spend associated with the GPUs, I do think customers are going to be looking for different alternatives to optimize their spend elsewhere. Evergreen//One for AI solutions would be terrific. We've got some announcements at our upcoming Accelerate event associated with that as well.
That's fantastic. Can you just talk a little bit about just maybe either of you on where can AI be as a percent of revenue for Pure in a few years down the line, right? I mean, inferencing, Coz, you said, could be a very big market. So at steady state, let's say, given all the potential you have from disk replacement and sort of AI attached, how would you peg what your steady state mix would be like?
Yeah, I'm going to answer that for Coz.
Okay.
I want Coz's answer too. Look, there isn't a steady state yet, and I mean that with all sincerity. Once we've got these design wins with the hyperscalers, you have to imagine that the profile of our financials is going to look very different. And what does that mix look like on the bulk storage opportunity versus where AI comes into play? Again, those dynamics are going to be very fluid. I do know we've got great traction and positioning from an AI perspective, but to have a view of the proportional magnitude over time, I think, is almost impossible. But with that, I'll turn it over to Coz.
Okay. Well, all right. So I'll give you somewhat the same answer. Just think about Evergreen//One, for example. That's one of the best things about Evergreen//One: when a company is buying into it, they don't need to know the future. Because you go to the average person and you say, "Gee, what are your storage needs going to be in three years?" They get it wrong all the time. They hardly ever have a clue. Sometimes it's way higher. A lot of times it's way lower, because they have over-enthusiastic expectations. What Evergreen//One does, what storage-as-a-service does, is allow them to get it right. You get what you need now, and you flex up and down as you need. Look, yeah, somewhat the same thing. They're both huge opportunities.
But as to whether it'll take three years or five years or 10 years to ramp up fully, and exactly how fully, I would argue that the hyperscaler opportunity is probably a little more discrete, because it's a handful of decisions. But AI, it could take two years. It could take eight to become huge. And there's no doubt that in 10 or 15 years, there's no such thing as AI that's separate; it's everything, right? But exactly how you get from here to there, my job is just to come up with the right products to build, and build them. And then we go out and sell them as fast as we can and figure out how fast we can get the market to ramp.
Yeah. Yeah. Well, we're almost out of time, so maybe just to wrap it up, any final thoughts that either of you want to share with the audience about maybe why this is an opportune time for Pure as an investment case?
I think one of the things that doesn't get talked about enough is that we have a platform that reduces risk for the enterprise in so many ways. And people don't understand the simplicity of our products, the ability to buy with Evergreen and flex up and down, right, because the down's the hard part; others do not have a storage service where they can do that. With us, you can buy the smallest box today, go to our largest box five years from now, and go back down to a mid-size box. You can do all that non-disruptively, eliminating downtime, making things simpler, with the agility to not have to buy in advance. It reduces so much risk. And it's something we need to do a better job of communicating to everybody.
But I think all the people running larger IT organizations need to really internalize this. And I don't think they have yet.
Okay. Great. Well, stay tuned for our Accelerate event.
Yeah, looking forward to that. Yep. We'll see you there.
Looking forward to it.
Thank you very much.
Thank you for participating in the BofA conference. Thank you.