
The 44th Annual William Blair Growth Stock Conference

Jun 6, 2024

Speaker 2

How's everybody doing? Yeah? Good? Surviving? I heard you had 50 lambs born yesterday. Is that true? All right. I've got to hear more about that later. So very happy to have Pure Storage here with us today. Kind of a fixture at this conference, really, since the IPO. This is Rob Lee. He's Chief Technology Officer. You have Paul Ziots, who's head of IR right there. And I'm Jason, so I think I know most of you. I've been covering the stock pretty much since the IPO. It's been a fun company to cover. Lots of changes in the industry over these last, whatever, eight years you've been public. Is that right? Eight years?

Rob Lee
CTO, Pure Storage

Yeah. Almost 9.

Speaker 2

Almost nine years. Okay. Great. Well, we're just going to do a kind of fireside chat format. Before I begin, I have some disclosures. I'm required to inform you that a complete list of research disclosures and potential conflicts of interest is available at our website at williamblair.com. And then from Pure's perspective, statements made in this discussion, other than statements of historical fact, may contain forward-looking statements based on current expectations. Actual results could differ materially from those projected due to a number of factors, including those referenced in Pure Storage's most recent SEC filings on Forms 10-K, 10-Q, and 8-K. Is that fast enough?

Rob Lee
CTO, Pure Storage

Jason, I think you've got a future with a radio gig.

Speaker 2

Great. Well, let's get rolling. As I was telling Paul upfront, Pure Storage is kind of like a house name at William Blair. So a lot of the folks here probably are pretty familiar with the company. But maybe just for the small number of folks that aren't familiar, maybe a little history on the company and just sort of let's kind of frame the opportunity that you guys have in front of you.

Rob Lee
CTO, Pure Storage

Yeah, absolutely. So as you mentioned, we've been public for 8, coming on 9 years. We started 15 years ago, in 2009, really on the premise and the observation that flash technology, which was very nascent at the time, camera cards, USB sticks, things of that nature, had the potential to fundamentally disrupt the enterprise storage industry. But doing that would require an entirely different approach, and it would open an entirely different set of opportunities in terms of how enterprises consume enterprise technology. We really built the company on that basis. We invested heavily in the software IP, and in hardware over time, to get the best benefits out of the NAND flash components and deliver those into the enterprise.

We started by really going after enterprises' most valuable business applications, the traditional IT environment, if you will, and have expanded significantly since then to where now we can go into our largest enterprise accounts and say we have the complete and unified platform and portfolio to completely replace their data storage environment. I would say really two things set us apart in terms of our product set, our technology, and the service that we offer to customers. One is obviously everything we do with flash and driving those benefits from a price performance, efficiency, and power perspective. But the second is changing the enterprise IT buying mentality.

Everything else in enterprise tech up until we started, and even most things today, is still in the model of: you buy something day one, and as a customer, you start planning for obsolescence and how you sunset off of this thing and replace it day two. We saw the potential with flash and the right product set and architecture to fundamentally change that and move enterprise tech, and storage in particular, from a depreciation mindset to an appreciation mindset. The potential to sell products and services day one that actually get better over time, as opposed to falling further and further behind. That's really the basis of what we call our Evergreen architecture.

What I'd say as a high-level takeaway is that those two things really set us apart in, I would say, not just the data storage segment of enterprise tech, but really the enterprise technology industry as a whole.

Speaker 2

Great. Let's double-click on some of the design decisions you guys have made. Because I remember, I don't know if it was 2017, 2018, you guys announced DirectFlash Modules and all the industry press freaked out. It's like, what are you guys doing? Everyone else is going this way towards off-the-shelf SSDs. Why are you guys kind of building your own? Can you talk about that design decision and kind of how it's evolved in terms of the business?

Rob Lee
CTO, Pure Storage

Absolutely. What I'd say is we announced the DirectFlash Modules, which is the hardware component. Actually, I have one with me today. These are our custom-built flash modules, if you will. We announced these in 2017, 2018, somewhere around there. What actually makes this tick, what the secret sauce is, really the thing that sets us apart, is what's not in my hand. It's all of the software that we've pulled out of what typically resides in an SSD. That software we've been working on since the beginning of the company. It's the realization that to get the best benefits out of the NAND chips, which are the commodity component, it's not about how you solder the chips to the board. It's not about the packaging of the SSD.

It's about changing the way that software interacts with the flash to get the best benefits out of it. And so we started that journey with software day one. And over time, we've evolved our hardware roadmap to support and, I would say, really amplify the benefits that our software creates. What are those benefits? Well, certainly performance is one. If you look at an SSD... I'm not sure if I have one here. But what we're primarily replacing is hard disk technology, and I think I do have one of those in here. Let me see. Here we go. This is what most of the legacy industry is built on: a hard disk. It's pretty heavy. It consumes a lot of power. Sorry, hotel. Very slow, very inefficient, fails a lot.

When NAND flash came on the market, it was a commodity that required a very different approach to interact with than hard disk. And so what you had was an entire industry, an entire technology set, software that was built up to work with hard disks. You have NAND flash come on the market and a realization: hey, there's a lot of benefit here, but the software isn't tuned for it. It's not designed to work with the flash natively. So what does the industry do? Well, it comes up with a coping mechanism, which is an SSD: something in the middle that looks and feels and kind of tastes and smells like a hard disk, but does all the hard work of doing that translation to work with flash. Like most translation-type retrofit technologies, there's a limit to that benefit.
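
To make the "coping mechanism" concrete: NAND pages can't be overwritten in place, so an SSD's flash translation layer (FTL) redirects every logical write to a fresh page and garbage-collects stale pages later, copying live data around as it goes. Here is a toy sketch of that mechanism; all names and sizes are invented for illustration and have nothing to do with Pure's actual software:

```python
# Toy FTL: flash can't overwrite in place, so every rewrite goes to a new
# page and the old one goes stale until garbage collection reclaims it.

PAGE_COUNT = 8  # a tiny "flash device" for illustration

class ToyFTL:
    def __init__(self):
        self.pages = [None] * PAGE_COUNT  # physical pages
        self.l2p = {}                     # logical address -> physical page
        self.next_free = 0

    def write(self, lba, data):
        if self.next_free == PAGE_COUNT:
            self._garbage_collect()       # erase-before-write forces this
        if lba in self.l2p:
            # The old page can't be rewritten; it just goes stale. Copying
            # live data during GC is write amplification a disk never pays.
            self.pages[self.l2p[lba]] = ("stale", None)
        self.pages[self.next_free] = (lba, data)
        self.l2p[lba] = self.next_free
        self.next_free += 1

    def _garbage_collect(self):
        live = [p for p in self.pages if p is not None and p[0] != "stale"]
        self.pages = [None] * PAGE_COUNT
        self.l2p = {}
        self.next_free = 0
        for lba, data in live:
            self.write(lba, data)         # extra internal writes

    def read(self, lba):
        return self.pages[self.l2p[lba]][1]

ftl = ToyFTL()
for i in range(20):                       # keep overwriting 3 addresses
    ftl.write(i % 3, f"v{i}")
print([ftl.read(lba) for lba in range(3)])  # latest values survive GC
```

Pure's argument, in these terms, is that doing this remapping and garbage collection per-drive, behind a disk-shaped interface, duplicates work the array software could do globally with far more context.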

What we've seen over the years is that those SSD limitations have really reached a tipping point. As we have progressed in our roadmaps, as we've taken our core software technology from the earliest days of going after the high-performance workloads and expanded it to go after the largest pools of disks that still sit out there today, exabytes and exabytes of these things, what we've realized is that the efficiencies we can drive out of that software, coupling and pairing it with denser and denser drive modules, are the key to unlocking the cost efficiencies, the power efficiencies, and the space efficiencies that the industry needs in the enterprise. And we're seeing that with our E family of products.

Really, the early strength that we've seen over the first year of that product is recognition of that. But more excitingly, the hyperscalers. The top 5-10 hyperscalers out there consume about 70%-80% of these things in the world. We're talking about firms that deploy hundreds of exabytes of nearline disk on an annual basis. We've got the potential with this technology, the software and the denser modules that software can drive, to go replace that disk in the hyperscaler footprint. And we can do it simply because we've got the software, and the SSDs can't scale to get to that level.

Speaker 2

Yeah. So let's talk about the hyperscalers and let's talk about, I guess, historically, they have not worked with third parties for a lot of this stuff. They just kind of buy the off-the-shelf components and write their own software and so forth. There's a few exceptions to that, but for the most part, that's their MO. What gives you the confidence that you guys can kind of break through there and start to, as Charlie said, get a design win there before the end of the year?

Rob Lee
CTO, Pure Storage

Yeah, absolutely. And look, as you know, each of these firms is different. They all have different priorities. They all have different cultures. And certainly, there's a proclivity in these firms to want to build it yourself. That said, I think there are a couple of things that work in our favor. Number one is that the hurdle rate to developing the software is high. We've been at this for 10, 12 years. I mean, we are the industry experts at working with flash at this level. We work with all of the memory manufacturers. It's not atypical that we'll ship a memory manufacturer's NAND into the enterprise faster than that same company will get an SSD out to market. We work very closely with all of them. They come to us for test data. So we have that level of expertise, driven by the enterprise business.

If a hyperscaler, or anybody else, wanted to go replicate that, well, number one, you've got to replicate the software IP that we've been developing since really the formation of the company. But then, number two, you've got to not just do it once, you've got to do it multiple times. Every chip, every generation of chip from the same manufacturer, every chip from a different manufacturer behaves differently. They all have idiosyncrasies. The software has to be tuned and optimized and re-characterized for each different chip. We've been doing this across multiple vendors for, again, a decade. And so for a hyperscaler to repeat that: one, you've got to develop that software; two, you've got to get on that continuous treadmill, if you will; and three, I would say it's also just getting harder.

As the memory manufacturers, and I'm sure you all follow them, they all have very clear roadmaps to building denser and denser chips. They're going to stack more layers on the chips, and they're going to make the chips denser and more cost-effective, driven by the consumer market. Well, as they make those changes, the flash becomes more and more difficult to work with, if you will. It requires more and more sophistication. So that hurdle rate keeps getting higher. And then the last point I'll make is that I think the calculus and opportunity cost for the hyperscalers is perhaps a bit different today. Number one, with the rise of AI, you've got power constraints that are creating a lot of urgency, and we've definitely seen that in terms of accelerating our engagements.

Number two, again, the rise of AI and the focus there, I think, is diverting a lot of their attention and resources. If you're sitting at one of these firms, do you want to put a ton of R&D into optimizing your storage, or do you want to go compete in AI? And then thirdly, in the zero-interest-rate environment of several years ago, you could deploy thousands of engineers and not worry too much about it. We've clearly seen the pressures that everybody's under since then. And so for all those reasons combined, we're pretty bullish on the opportunity. And again, not just blindly bullish: we've characterized our discussions and engagements with multiple hyperscaler firms at this point as progressing very nicely.

Moving from kind of technology, architecture, TCO discussions to engineering engagements to, as we've said this past quarter, discussions that are really centered around testing plans and commercial discussions as well.

Speaker 2

Would you say that Arista is a good analogy of how you guys might be able to work with the hyperscalers just as a third-party expert effectively?

Rob Lee
CTO, Pure Storage

Yeah, I think it is a good analogy in several respects. Number one, as you think about the potential for penetration and just the leverage in the model, you've got one there. But then number two, if you think about how Arista, as an example, works with the hyperscalers: very much as a co-design, co-engineering partner, as opposed to, hey, we've got this product over here we designed for the enterprise, let's go put it into your footprint. That model doesn't work with the hyperscalers. And we very much recognize that. That's why we've been trying to characterize our pursuit of the hyperscaler footprint as really a co-engineering, co-design exercise, very much the same way that Arista pursues it.

Speaker 2

Is it right to think that there's going to be some creative structures in terms of if you do land a design win, how you're monetizing your technology?

Rob Lee
CTO, Pure Storage

Yeah, I think it's definitely fair to say that we've got a lot of flexibility. If Kevan were here, he would say he likes optionality. And look, we've got a lot of flexibility thanks to a couple of things, namely our architecture and our core IP. We've got the ability to work with them across the entire spectrum, whether that's modified existing systems, custom systems based on DFMs, DFMs themselves, or just the pure software technology. We've got the flexibility to work across that entire spectrum. And then we've certainly got flexibility in terms of commercial structures; clearly, you could see trade-offs between top line and margin, all that kind of thing, and we've got the flexibility there. We are looking at it primarily through an operating margin dollar perspective.

Speaker 2

Okay. And how does this compare to an SSD on density, power efficiency, performance? I mean, do you have any specific metrics you can share with us?

Rob Lee
CTO, Pure Storage

Better. So yeah, let me share some metrics, but let me just step back and provide a little bit of context first. We talk a lot about density. Some of you may have seen these devices before. The reason we talk about density is that it's actually a very good proxy for efficiency as measured in dollar terms and efficiency as measured in power terms. Quite simply, the denser the drives or modules that we put into a system, the less surrounding infrastructure we need to run that system, which means less surrounding cost on an acquisition cost basis, as well as less cost to run and power the system. If you actually do all the math, it's a fairly good proxy for the overall system.
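
A back-of-the-envelope sketch shows why density works as that proxy: fewer drives means fewer enclosures, controllers, fans, and switch ports around them. Every per-unit number below is an invented assumption for illustration, not a Pure Storage or industry figure:

```python
# Fixed capacity target; vary drive density and see how the surrounding
# infrastructure (and its power draw) shrinks. All constants are assumed.

TARGET_TB = 100_000         # build out 100 PB of raw capacity
SLOTS_PER_ENCLOSURE = 24    # assumed drive bays per enclosure
ENCLOSURE_OVERHEAD_W = 500  # assumed controllers/fans/power per enclosure

def footprint(drive_tb, drive_watts):
    drives = -(-TARGET_TB // drive_tb)              # ceiling division
    enclosures = -(-drives // SLOTS_PER_ENCLOSURE)
    watts = drives * drive_watts + enclosures * ENCLOSURE_OVERHEAD_W
    return drives, enclosures, watts

for name, tb, w in [("18 TB nearline HDD", 18, 10),
                    ("75 TB flash module", 75, 20),
                    ("150 TB flash module", 150, 25)]:
    drives, encl, watts = footprint(tb, w)
    print(f"{name:>19}: {drives:6d} drives, {encl:5d} enclosures, "
          f"{watts/1000:6.1f} kW")
```

Under these assumptions, the 100 PB buildout drops from roughly 5,600 drives and 230 enclosures on 18 TB disk to under 700 drives and 30 enclosures on 150 TB modules, with the enclosure overhead, not the media, driving most of the power delta.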

So to put some numbers on it, if you look at hard drives, this one is from 10 years ago, but the ones you buy today are largely the same. The largest ones that you can find are probably about 22 TB, maybe 24 TB. The largest ones that we see actually being used in the hyperscalers tend to top out around 18 TB. We've had conversations with one of the firms that we've been engaging with. When we initially worked with them to build a TCO model, we built it around 18 TB drives, which were the majority of their footprint. We had a little bit of a pause, and we came back and re-engaged a couple of quarters later. They wanted to take a deeper look. We said, okay, well, let's update the model. The last time we engaged, you were running 18 TB.

You probably want to look at the 24 TB drives that are on the roadmap. We'll go update the model. The senior engineering leader in the room cut us off and said, nope, I want you to keep it at 18. And we said, well, why would you do that? We want to have a fair comparison. He said, we've tested the larger drives. We can't use them. They're too slow. We're topped out on performance. So they can build them denser, but we can't use them because they're too slow. So the largest drives you'll find out there from a hard disk perspective are really, really slow 20 TB drives. And if I had an SSD with me, I would say the largest ones we see in enterprise production are about 16 TB.

There are folks in very specialized environments looking at 30 TB, but that's about where it sits, and we see that really tailing off. We're shipping these today. This in my hand is a 75-TB module. So again, compared to what we see typically deployed on the floor, it's 5x denser. That's 5x more efficient: the resulting system is going to consume 5x less power and 5x less space. And I think I've got a prototype with me. No pictures, please. Paul is going to yell at me. We'll be shipping these later this year. This is our 150-TB module, a doubling over what we shipped last year.

And so when you think about that in absolute terms, when you do a comparison from SSDs to what we can do in flash today to what we'll be able to do by the end of the year, you're on the order of 5x to 10x denser. And again, that pretty much translates directly into efficiencies. But it's not just the absolute terms. It's the improvement curve. And I think that's one of the things that gives us the most optimism and the most bullishness. Hard disks stopped getting more efficient and better. That improvement curve, from a technology perspective, stopped 15, 20 years ago. And when that stops, and you've all seen this happen over and over again, you get lots of long-tail effects. If you look at the improvement curve that we're driving NAND on, we're still on a yearly doubling.

Now, that's not going to continue ad infinitum, but that's where we sit in terms of just driving the technology deeper and deeper into not just the enterprise, but also the hyperscale environments. So yes, 5x today, 10x really by the end of the year.
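
The improvement-curve point compounds quickly, as a quick sketch shows. The starting points are the figures quoted above; the indefinite yearly doubling is an assumption for illustration, which, as Rob notes, won't hold forever:

```python
# If flash module density doubles yearly while nearline HDD capacity
# stays roughly flat, the density gap widens fast.

hdd_tb = 20      # "really, really slow 20 TB drives"
flash_tb = 75    # module shipping today

for year in range(4):
    print(f"year {year}: flash {flash_tb:4d} TB vs HDD {hdd_tb} TB "
          f"-> {flash_tb / hdd_tb:.1f}x denser")
    flash_tb *= 2
```

That yields roughly 3.8x today, 7.5x after the 150 TB module ships, and 15x and 30x in the two hypothetical years after, which is consistent with the "5x today, 10x by end of year" framing.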

Speaker 2

Why can't the SSDs get to the densities that you guys are at?

Rob Lee
CTO, Pure Storage

Great question. So in some sense, they could try. You could go solder more chips onto the board, but you're going to trade off a ton, because they don't have our software. And what I mean by that is, again, I'll go back to: an SSD is a coping mechanism for software that knows how to talk to disks but doesn't know how to talk to flash. The SSD is trying to provide that bridge. It's trying to do all that hard work inside of the SSD. All of that hard work requires complex software and firmware. It requires a ton of resources in terms of compute chips and DRAM. You're actually having to put more componentry in the SSD to get the work done. And as the NAND, the memory, keeps advancing on its roadmap, as I said before, it gets harder and harder to work with.

So that software is having to do more work. It's under more pressure. And then what happens when you try to increase the densities? You're jamming more hard-to-work-with flash into that SSD. You're further exacerbating the pressure on the work that software has to do. And quite simply, the SSDs are tapped out right now. They're tapped out on DRAM. They're tapped out on just how much they can get done in that SSD. So yes, you could conceivably produce a larger and denser SSD, but you're absolutely going to trade off performance. You're absolutely going to trade off efficiency. You're going to trade off reliability. The other thing to realize is that we control our own destiny. Because we've taken the software out of the SSD, we can build, with the same technology, super dense drives to go after the most price-sensitive workloads.
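
The DRAM point can be made concrete with a commonly cited rule of thumb: a conventional FTL keeps a logical-to-physical map in DRAM at roughly a 4-byte entry per 4 KiB page, about 1 GB of DRAM per 1 TB of flash. Real designs vary, so treat these constants as assumptions:

```python
# Rough scaling of FTL map DRAM with SSD capacity (rule-of-thumb figures).

ENTRY_BYTES = 4     # assumed map entry size
PAGE_BYTES = 4096   # assumed flash page size

def map_dram_gb(capacity_tb):
    pages = capacity_tb * 1e12 / PAGE_BYTES
    return pages * ENTRY_BYTES / 1e9

for tb in (16, 30, 75, 150):
    print(f"{tb:4d} TB of flash -> ~{map_dram_gb(tb):5.0f} GB of map DRAM")
```

Under these assumptions, a 150 TB drive-shaped SSD would want on the order of 150 GB of DRAM inside the device, which is one way to see why per-drive capacities stall where array-level software, amortizing that metadata globally, does not.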

We can also reconfigure and build smaller drives at much, much higher performance. We've got a ton of flexibility there. To go build a very large SSD, you've got to believe there's a volume market for it. There's certainly a benefit to having 150 TB modules, but are you going to find a huge market, a $15,000 SSD for your home video server, to drive the volume for that development? It's a little bit shakier. And so for all those reasons, we just don't believe it's going to happen. We know the hard disk roadmaps are done. This is really just folks trying to cling on to the last bits of what's left.

The SSD roadmaps, I think, are going to be very challenged to follow us where we're going, for a variety of technical reasons and, frankly, some economic reasons. Which is why we're so bullish about pursuing the opportunity ahead of us aggressively, whether that's in the enterprise, where Ian and Kevan and the whole management team have talked about our philosophy, or taking this technology into the hyperscalers.

Speaker 2

So on the hyperscaler side, I guess two things. One is, this roadmap that you've shared with them, I'm sure, and that they compare to the SSD roadmap, is that what has engaged them at a new level, or at a different level, with you? And then the second part, maybe it's actually the first part, is what flipped for them in terms of starting to really talk to you guys about this?

Rob Lee
CTO, Pure Storage

Yeah. So I think, on the whole, in a somewhat paradoxical way, the hyperscaler community has lagged the enterprise in mainstream adoption of flash. And I think the reason is they've historically had more R&D capability and capacity to ride out the declining roadmap of disks. They've been able to squeeze more out of it. They've been able to layer more complexity around it to stay on that roadmap. As we've shared what we're doing with flash and the benefits we can unlock, I would say the primary drivers of their interest, and really growing interest, have certainly been TCO, footprint, reliability, and longevity. But I would say the initial door opener has been a realization of the TCO benefits we can create. The hyperscalers are stuck on disks. They know these roadmaps are going to end.

They know eventually they have to get to flash. They're quite sophisticated; they also know that SSDs are not their path to get there, for all the reasons that we just discussed. But they also don't have the software and the IP to take a direct-to-flash approach like we do. So they're kind of stuck at that point right now. What has tipped the conversations over and really accelerated them is, one, the TCO delta and advantages keep rising, and so you pay more attention to that. But more recently, I think power constraints have been a huge, huge accelerant. As a lot of these firms look at their current and future data center plans, they're contemplating the buildouts of GPU farms and the power constraints that are going to come with that.

They're realizing not just the dollar cost savings we can deliver them, but the megawatt, gigawatt-type savings we can deliver them. That has actually, I think in multiple cases, really been an accelerant to the speed of our engagements.
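
An order-of-magnitude sketch of that megawatt framing, using the drive figures discussed earlier. The footprint size and per-drive wattages here are invented assumptions, not disclosed figures:

```python
# Drive-only power for a hypothetical nearline footprint, HDD vs flash.

EXABYTES = 100                   # hypothetical hyperscale nearline footprint
TOTAL_TB = EXABYTES * 1_000_000

HDD_TB, HDD_WATTS = 18, 10       # assumed nearline hard disk
DFM_TB, DFM_WATTS = 150, 25      # assumed dense flash module

hdd_mw = TOTAL_TB / HDD_TB * HDD_WATTS / 1e6   # megawatts, media only
dfm_mw = TOTAL_TB / DFM_TB * DFM_WATTS / 1e6

print(f"HDD fleet ~{hdd_mw:.0f} MW vs flash fleet ~{dfm_mw:.0f} MW "
      f"(~{hdd_mw - dfm_mw:.0f} MW saved)")
```

With these assumptions, 100 EB of nearline disk draws around 56 MW in media alone versus roughly 17 MW on dense flash, before counting the enclosure and cooling overhead that scales with drive count; that is the kind of headroom a GPU buildout competes for.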

Speaker 2

Yeah, I think it's a surprise to a lot of investors when they hear how much of the current cloud and data center infrastructure is hard drives.

Rob Lee
CTO, Pure Storage

Yep.

Speaker 2

I mean, it's like what, 80% or something?

Rob Lee
CTO, Pure Storage

Yeah, we'd estimate 80%-90% of bits deployed. Yeah. It's a huge number.

Speaker 2

Yeah. It's really surprising, given what's happened in the enterprise. Okay. I wanted to ask about AI as a tailwind. I know on the last earnings call, you guys framed the customer opportunity, or I guess it's customers, as three buckets: there's AI, there's cloud, and then there's hyperscaler. Can you just walk through those three opportunities?

Rob Lee
CTO, Pure Storage

Yeah, absolutely. So I think there are two discussions there. One is how we're breaking up and thinking about the AI opportunity overall and what that means for Pure. And then I think we had a separate discussion about clarifying how we think about the opportunity segments between AI, cloud, and hyperscaler. So maybe just to reframe: when we talk about AI wins, we're really talking about footprints that we're serving for customers directly attached to their AI projects. These could be in hyperscaler firms. They could be in enterprise firms. They could be in specialty GPU cloud firms. But really, think of it as our storage more or less directly connected to GPU activities driving AI.

When we talk about cloud customers, we're typically talking about customers where we're serving them software or hybrid cloud technologies to run better on their public cloud footprint; it's really a sell-through or sell-on-the-cloud model. When we talk about the hyperscaler opportunity, that's where we're talking about driving mainstream replacement of disk and selling our technology directly integrated into the hyperscalers' architectures and products. So that's how we're thinking about those segments. And the reason we felt it important to clarify is there's been a bit of conflation, if you will: as we talk about the hyperscaler opportunity, many in the financial community are equating that to AI buildouts. We'll benefit from AI as well, and that's great, but we see the larger hyperscaler opportunity as really the mainstream replacement of disk. So that was behind the clarification.

As we unpack the AI segment, I think we also had a separate dialogue really confirming our views over the last 3 or 4 quarters that we see AI as several opportunities, and we see the customers and the nature of those environments as separating out in time. Number one is certainly led by the AI training or machine learning type environments, which are the folks that are building these models: large data sets directly connected to high-performance GPUs. That's opportunity set number one, the training environments. Number two is what we think is going to become the more broad-based enterprise deployment, but it's earlier in the cycle: the inference or RAG opportunity. How is your bread-and-butter enterprise going to deploy AI? They're probably not going to build it all from scratch.

They're going to assemble components and figure out how to put it into production. That's what we look at as the inference opportunity. And then number three is really just the overall accelerant to modernize and unify the underlying infrastructure. As I have conversations with enterprise clients, step zero in unpacking their AI plans is: hey, where is your data sitting today? If it's sitting on 25 different systems run by 6 different teams, half of which are sitting on cold spinning disk, how in the world are you going to actually connect all this stuff together? Well, we can help customers with that, frankly, with our baseline platform. And so as we look at those three opportunities, in terms of timing and scale and size, I would take them in that order. Training is very much R&D.

You've got to go build those models. That's got to mature before it can be deployed broadly.

Speaker 2

That's today.

Rob Lee
CTO, Pure Storage

That's today. I think inference is emerging and early in cycle, but we're not there yet. I think we're seeing early signs. That's going to take some time to develop over this year, next year, and probably several years. And then as far as folks modernizing their environments, obviously that's kind of what we do all day long. And I think we are seeing signs that focus on AI and preparation for AI is influencing a lot of their technology selection decisions. And frankly, that's a huge opportunity in and of itself for us.

Speaker 2

Okay, I think we'll have to end it there. Thank you, Rob.

Rob Lee
CTO, Pure Storage

Thanks, Jason.

Speaker 2

Thanks everyone for coming. We're going to go upstairs right now to Adler.
