Thanks, everyone, for joining again today, the UBS Global Media Communications Conference, and we're going to throw in a little bit of technology here. For those that don't know me, I'm David Vogt. I'm the UBS hardware networking analyst, and we're excited to have with us today NetApp. And from the company, we have in the audience Gloria Lee, sitting out there, and sitting up here on the stage with me is Phil Brotherton. He's in charge of solutions and alliances and has been a 20-year veteran of NetApp. Before we get started, NetApp has asked me to read their safe harbor, so let me see if we can do this. Today's discussion may include forward-looking statements regarding NetApp's future performance, which are subject to risk and uncertainty.
Actual results may differ materially from the statements made today for a variety of reasons described in NetApp's most recent 10-K and 10-Q, filed with the SEC and available on their website at www.netapp.com. NetApp disclaims any obligation to update information in any forward-looking statement for any reason. So with that out of the way, Phil, thanks for joining us.
Yeah, thanks for having me.
So I thought before we jump into kind of strategy, product, cadence, demand, I thought it'd be helpful for people to kind of get a sense for what falls under your purview and kind of your background. You've been with the company for 20 years, so you've seen it all, and so maybe just to kind of level set the discussion, I think would be helpful.
Yeah. I'll give a really quick, who's NetApp? So NetApp is a storage company. We're known as a storage company, and specifically, customers on Wall Street, for example, know us as a file server company, first and foremost. We started our life selling to companies who build chips and build software. They continue to be very large customers of ours. For those of you who are old enough to remember the dot-com boom, we were a dot-com darling, and everyone who got VC funding bought a NetApp filer to power their website. When the dot-com bubble burst, we focused the company on the enterprise and on building up our revenue base. Our revenue base back then was about $1 billion. We're well over that now.
I joined in 2004 as part of the enterprise push, and we've done a ton of work. This is where I got into my part of the job. We did a ton of work with Oracle, for example, to establish ourselves. This is before virtualization and things like that came out. A ton of work with VMware that probably more than doubled our company, to the $4 billion to $5 billion range. And then the next step, primarily for me, was to go to more modern applications in the cloud. So I started what's now known as the first-party cloud services. My group started back in about 2014, to tell you how long these missions take.
Today I manage a group that's responsible for a whole bunch of those partnerships and some of the engineering development.
Great. So given your background, given you pretty much have seen everything at NetApp in the last 20 years, we're not going to get into specific numbers about the quarter, but given that, you know, NetApp just reported last week, and other storage vendors also reported their results last week, we'd just love to get your perspective from the enterprise hat that you used to wear. What's going on in the marketplace, right? So you had success with your new low-end C-Series product. That's been, you know, an incredibly strong launch this calendar year. But across the board, we've generally seen sort of a more challenging or difficult spending environment. What are we seeing, and how do you think this plays out as we think about 2024 and beyond over the longer term?
I mean, I'll give you my best guess. The safe harbor is a good thing to remember-
Yeah.
As I answer this question. I'd say from the macro environment, we see what other people are seeing. It's a challenging macro environment, some puts and takes out there. That translates into storage demand that's pretty tied to the macro. You've studied it for years; we're pretty well connected to the macro overall. So when we look forward, we're expecting still kind of a choppy market, basically. When we look at our specific execution, I think we're really excited about the C-Series. Fundamentally, in our world, our customers buy us because we run software that they really like.
When we deliver it with a platform that gives them a better price per terabyte than the previous platform, they like it even more because it adds value to the total stack that we provide them. The C-Series is exactly that. It gave them a relatively low-cost, flash-based system. Flash systems are fun because they're fast and small and good on power. The demand picture on that's been really good. We also focused our sales force even more, got us more focused on, "Hey, let's just grow." You know, the customers want to talk about flash, and they want to talk about what we call first-party services on the cloud. So we really lasered in our go-to-market like that, and our execution, I think, showed the upside.
Do you think the overall storage market reflects, you know, the strength of the C-Series? Does it reflect more of sort of the macro, or the realization that customers want, to your point-
Mm-hmm.
more cost-effective flash solution, when historically, they might have purchased, you know, disk systems that were more cost-effective?
Yeah.
With the introduction of the C-Series, it gives them a price performance offering that, you know, historically wasn't available to them.
Right. Actually, so everybody here is probably sitting with a laptop that has flash, what's called a flash drive, in it today, and maybe two years ago, you still could have bought a terabyte disk drive, right? And those are essentially gone. In the enterprise market, we're still selling hard drives because their dollar per gigabyte, dollar per terabyte, is still better than a lot of flash systems. And there's a lot of people who are good with the performance of hard disk-based systems and don't want to pay the premium for the speed of flash, fundamentally. But flash has this big advantage: when you move to flash, the boxes are just smaller, so they take up less space in the data center.
Right.
They're cooler, so they burn less power, and they're easier to run from an application performance point of view because they're just fast. If there's anything data center people hate, it's trying to figure out why this database is running slow, and is it the storage or... Flash more or less takes storage out of the equation. The reason the C-Series takes off is we have a very high-performance system out there we call the A-Series. We came in with the C-Series using a technology that's a little bit slower but substantially less expensive per terabyte, and it just took the value of ONTAP and extended it into a space that kind of existed with disk drives to a certain extent.
I think we won some market share as well, but it fits a really good point in the market. And you'll see us continue to push flash as far as we can-
Right
Into the cost.
Do you think the success of the C-Series, and there are some other new entrants in the low end of the market that are all-flash, do you think that changes sort of the trajectory of the flash market versus the disk drive market longer term or medium term? Right, so the way I thought about it is, you know, obviously you're going to have cold data. You have use cases where you just need tape or disk systems, right?
Yep.
You don't need flash.
Yeah.
But the growth in AI, the growth in more cost-effective flash solutions, kind of changes the calculus a bit, right? Total cost of ownership is cheaper for, I would imagine, today than-
Power. Power is becoming a big driver.
So that goes into total cost of ownership, power consumption, cooling. So we broaden out the use cases. So does that facilitate or maybe expedite sort of the migration? Because I know historically the company's-
Yeah
... talked about, you know, in a given quarter, you have X percentage of installed systems that are on flash versus disk.
Yep.
Does that accelerate sort of the migration or?
Every time we give people more cost... There's just, to your point, a lot of use cases for, call it, slow-
Right
Systems with good dollars per gigabyte. Every time we can get flash further into that space, those customers will switch to flash. Our customers are all on depreciation schedules, so it's all based on their depreciation schedules and things like that. But the reason for it... I mean, I'm involved with a lot of different customers that are doing this. The reason for it is ONTAP is ONTAP, so they don't change anything operationally in our systems. When they do move to flash, assuming the dollars per gig is the same, they just get a smaller box that's faster-
Right
Than the one they're replacing. It's a no-brainer for customers just to upgrade.
And since you just mentioned ONTAP, I think it's a pretty good segue. So when you look at the success that you've had with enterprise storage using ONTAP, when we leverage that across, you know, new public cloud offerings, whether it's AWS, et cetera, what is the key differentiator? Is it the familiarity of the solution that draws, you know, potentially new business wins for you? Kind of maybe walk through, you know, how maybe the public cloud providers view your solution with ONTAP as a natural extension to what you're doing in the enterprise.
Yep. So again, for people who aren't familiar with this, maybe the easiest way: ONTAP is our software that we've been developing since... It's actually, what, since 1993 or something? From the very beginning, we built software first for NFS file servers and for Windows file servers, which also serve all your Mac files now. And we built that file service at enterprise scale; now you look at where we're running: in banks, in the biggest chip development centers. I mean, we're everywhere with that file system. Putting that file system on the cloud, that's what this is. Like, FSxN is the name of the product on Amazon. We have a couple of other names on Google and Azure.
By taking that capability and moving it to the cloud, that's why the cloud guys wanna partner with us: they know customers want that enterprise-class file system on their public cloud, and then we want to provide it where our customers are going. I can't talk about everybody, but if you look at our customer base, it was a relatively early adopter of clouds in general, and we were working with our customers to help them with, "Hey, how can I get ONTAP to the cloud?" That's what I was doing in 2014, really.
Right. So this whole, like, multi-cloud, hybrid cloud solution, you know, you guys, other companies have been talking about it for quite some time. It seems to have been sort of accelerating or gaining steam post-COVID. I think maybe that's my judgment based on the data that we've seen.
The technology is getting easier and easier to adopt.
Maybe that's-
That's a part of it too.
Maybe-
I don't know if it's COVID, but definitely the technology is getting better.
When you think about your medium-term outlook for storage, you know, how critical is it for you competitively to have this, basically, this ubiquitous software platform that can run anywhere?
It's huge. Yeah, I think it's at the root of our differentiation. When you... Like, I'll give you an example. A couple of years ago, VMware was trying to also extend VMs to the cloud.
Right.
Right? So they have a cost problem because... I'll get into a little bit of technical detail. When you scale your servers at the same rate you scale your storage, that doesn't work for big service providers. What happens is, usually, you scale your compute, and then you end up with a ton of underutilized-
Right
Storage. So what big operators do is they break the compute layer apart from the storage layer, and you scale them independently. So, like, VMware's challenge was: How do I do that on the cloud reliably? And whether it's called ANF, Azure NetApp Files, or FSxN, we're the only game in town that can help them do that, and it was super important that we were a first-party service. So we weren't some glue-on, and they didn't have to OEM us. We're right there in the Amazon consoles, and we brought that out. We're doing the same thing with Red Hat OpenShift. And that's why I say the technology is really changing, because just 'cause you can do the storage layer doesn't mean you've solved the compute layer for operating hybrid. But we're a long way down the path of making hybrid truly viable.
Then you get one strip of data management, you know, if you think about your data management... It doesn't really matter in some sense. The hardware can come from NetApp, the hardware can be from Amazon.
Right.
But having one strip of data management is a big simplifier for our customers.
Maybe just remind me, so I'm not entirely sure. How are you working with Red Hat OpenShift? So, right, as I understand it, obviously, it allows, you know, developers to write once, deploy anywhere. Are you sold alongside the OpenShift platform, or how does that work?
So again, it's on-prem. So let me be careful not to... Red Hat has OpenShift on Amazon-
Right
Known as ROSA.
ROSA, right.
They also have OpenShift you can buy through licenses and different, you know, support agreements on-prem. So we've worked with them to integrate our products, to make our sales force aware, our support guys aware, of how to use OpenShift. OpenShift is more of a container technology. Containers need persistent storage sometimes, and that's where we're working. We have all kinds of tools for that. I'll kind of joke: if you know NetApp, you probably remember we snap and clone everything at NetApp.
Right.
So it's integrating these technologies that we have that make operations easier. We integrate that into the OpenShift stack.
Got it. So, I mean, as a reasonable proxy for kind of the market opportunity for NetApp, you know, ONTAP, effectively, it starts with the enterprise customer, right? You're not... Amazon's not out there natively selling ONTAP.
It's going to change with AI.
Well-
It's going to change with AI.
Well, we'll get to that in a second.
Yeah, yeah.
But AI, right now, it's like an enterprise-
I'll take your stipulation.
Right, it's an enterprise-led sale-
Yeah
Not an AWS digitally native-led sale.
Okay.
And so the AWS component of it, or the Azure component of it, is sort of the ancillary offering that makes a ton of sense. So I was going to go with this. So when you think about AI as we move forward, we were talking about this prior, in terms of, you know, maybe instead of moving technology from on-prem to the cloud, maybe there's a bit of a reversion, effectively, if that's the right phrase, where technology and infrastructure that's been resident in the cloud, and applications, and storage, and data migrates backwards. Is that-
Well, I
You think AWS is going to start to sell NetApp storage, or Azure is going to push NetApp storage because, you know, there are use cases for some of their customers-
Hang on
Where being on-prem makes sense?
Yep. Let me change your premise just a tiny bit: client-server. My whole career in IT has been with client-server apps. It's when I joined the world, basically; my age. Some of you kids are too young to even know what I'm talking about. But all the apps we've built in the last 30 years, until about 2015, roughly, those were done in this style known as client-server, or POSIX, if you're into it, and they all lived in on-prem data centers. The cloud didn't exist, is the simple way to think about it. AI is going to invert that. AI is going to drive application development, and all the new application development starts on the cloud. But then the question on the cloud always becomes...
The rule of infrastructure is it's an economics decision. If you're a big enough operator, and it makes economic sense to have data centers and to buy your own equipment, often on capital, and then you may sell a service off your infrastructure, those are the people we're seeing coming, saying, "Look, I'm doing so much AI work." I'll give you an example from life sciences. "I'm doing so much AI work on the cloud, I can actually do this more economically by taking the exact tool chain that I use on the cloud but running the hardware," if you will, "the GPU-loaded servers, the networks, and the storage, on-prem." And it's a question of scale.
I think customers are going to do this; it's scale and money. They're going to do this as they get more into AI. You'll see different approaches. So we're gauging our demand sense off the hundreds of customers we have doing AI today. They come mostly from specific verticals, like life sciences, where there's been clear value add, you know, financial value add, from using AI. We're expecting Gen AI to expand that ginormously, depending on how all the legal issues and all the-
Right. But when you mention you've got hundreds of customers using AI today, that's not Gen AI as we would define it today. That's-
Yeah, so-
Traditional AI use cases, machine learning, others.
Right. What we see in our base is kind of the way I think the market is. We see definitely Gen AI cases popping up, but it's predominantly what you'd call predictive AI.
Predictive AI. Got it. And so to that point, a customer comes up with a great idea, hypothetically, and they have this great, you know, data set that they want to run some kind of large language model with billions of parameters on. They want to spin that up in the public cloud. That's flexibility. They can take advantage of the GPU installed base. But to your point, don't the data egress fees become cost-prohibitive to keep that workload or that model running, not just for training, but now inference, right? So every time you want to access that model-
Right.
Does that color your view in terms of more models, more data will be resident? I think your CEO, George Kurian, said AI will be performed where the data is resident, effectively, to paraphrase his quote.
Yep.
That means, or that would imply, that there's going to be more models at the edge of networks, on-prem, in enterprise data centers.
I think everywhere. What George and I, I think, would say... We're all speculating, I think.
Right.
But it makes sense that, just to use an example, it definitely makes sense to do these giant foundation models on the public cloud. There's nobody else that has enough compute to do it, but you only need a couple. There's only a handful of foundation models required in the world. Then you get all these derivatives of those things, and those are gonna tend, we think, to go where the data is. Then it gets complicated. We did a really cool demo, just to do a techno brag: we can do a thing where we take on-prem data and cache it to the public cloud, so that you could then present the cached data straight into, say, training your foundation model, right? And the data, literally-
Sure.
The data didn't leave your data center.
Right.
The cache left your data center. So there's a lot of data security value in doing that, and we think ideas like that are gonna be really popular, and it requires really advanced technology to do this kind of caching that I'm talking about. It's one of the reasons that, you know, we think having ONTAP on both sides of the... I always call it both sides of the wire, but on-prem and on the clouds is super important.
Does that change, besides governance and security, does it change cost structure?
Absolutely, because it changes your egress fees. It changes all kinds of stuff.
So how do the hyperscalers treat that? So, like, you're pushing a cache version of the data-
They like it right now. Well, let me just say: I've worked with Amazon a lot, and they're the ones I'm super familiar with, and they really are customer-centric. So if we can solve a problem quickly and efficiently for a customer, Amazon is always a great partner. And, just to be fair to everyone, we actually did the demo that I'm talking about at Google first, and it applies to all three clouds.
Got it.
But, I mean, just broadly speaking, we just brought out a pretty cool feature. This is, again, taking you into the weeds, but running VMware on-prem and on VMware Cloud had a thing called a Transit Gateway, which is kind of an egress fee, a little bit like a toll booth, that was making costing very hard for customers to predict. We had tools and things, but it created variability in costing. And VMware and NetApp and Amazon just engineered that gateway out of the problem for customers, and that's the kind of thing I think is gonna... It makes hybrid cloud operation, you know, feasible-
Right.
Basically.
Solution.
The reason I brought that one up is those kinds of issues, I think, we're gonna see extensively in AI development, 'cause the data lives in a lot of places, really. It's hard sometimes to move all that compute to the data, so sometimes you move some of the data, and it's very important that we do that well. We already do it well.
Right.
And, yeah. The last thing on this, by the way, just to get all our plugs in: the other thing about this whole AI area, and it's different than the sort of old database market, for example, is that the AI market is all about data, and the data is usually in an object format or a file format.
Right.
So, in our jargon, unstructured data. And being the leader in unstructured data on-prem puts us in a really advantaged position compared to what are typically thought of as our traditional competitors. So we think AI is gonna be really good-
Right
For NetApp. We're pushing hard in it.
So without getting into the specifics on financials, since you have a long history, can we talk about how NetApp thinks about storage in relation to compute deployments, right? So typically, they go hand in hand. Obviously, the storage market has been a bit softer because of macro, and we touched on that earlier, but we're starting to see general purpose compute maybe pick up a little bit. Like, Dell talked about that last week, that they're seeing a little bit of a sequential uptick in demand.
Mm-hmm.
The market's been relatively weak. I think a good rule of thumb for us, and correct me if you feel differently, is that, you know, as compute scales or gets healthier, storage generally follows with a quarter-to-quarter lag. As, you know-
Mm-hmm
Customers become more comfortable that the recovery has happened, where, you know, obviously, we're using compute more, we need more storage. Is that how we should think about kind of the traditional storage needs of your customers, let's say, over the medium term?
Yeah.
Is that a fair way to characterize what the demand trajectory might look like?
Yes, is the short answer. I wouldn't- I'm not in a position to say the quarters, if it's-
Yeah, yeah, not about a quarter-
But, but-
But generally speaking.
But generally... In fact, I'll make a joke. We partner a lot with NVIDIA, and this is a joke, just for everyone on the web, but I've described our approach to working with NVIDIA as ambulance chasing. Because if I see a DGX server sold, I know there's a storage problem right behind that, and you can calculate the joke from there. But definitely in AI, the storage is following the server, right? And it is a matter of months behind. The buy decisions are often separated. I think that when you talk about the compute side, the CPU side, as opposed to the GPU side, an awful lot of that demand is fueled by VM deployments these days.
Got it.
Not 100%, but a lot of it. So I might characterize it as an uptick in VMware workloads.
As the better indicator?
You would see it in, and I'm generalizing, like, VMware is just a big part of the enterprise market we sell into, but that's how I would look at that one. The one other piece: at NetApp, we often sell as a file server, a standalone file server. And in that case, the demand is straight on us.
That's different, right?
It's not, it's not server-led.
Can I ask a-
That's a big part of our market.
Can I ask on GPU for a second? So if I look at, let's say, an NVIDIA cluster today, there is some embedded storage within the entire NVIDIA stack.
Yep.
So when does third-party storage effectively come into play? It's not on the training side, I would imagine, today; it's more inference or-
No.
I know the lines are blurry.
Yeah.
But also along those lines, and I'm gonna, I'm gonna ask this question, when you think about sort of the scale-out architecture needs of the hyperscalers, how do you think that storage solution, you know, since you're familiar with Amazon, plays out? Is it just, you know, a software-defined storage solution using NVMe memory, or is it a little bit more robust? Like, how do you-
Let me break your question into two.
Sure. Like, I know there's a lot of things-
Yeah, NVIDIA question-
Sure
Big enough one, I want to take that one on first.
Sure.
It is, absolutely. So, NVIDIA, when you look at... Let me generalize your question first. I came from the database world originally, and I was at a large company that has two letters in its name and then three. And we built database servers and sold them for SAP and, back then, PeopleSoft and all kinds of big enterprise systems. The database person's view of the system tends to stop with the storage on the first copy, okay? And what I mean by that is, that first copy has to be the performance copy that makes the production app work. And you're really measured on, can you handle the load of your large enterprise customers? That takes a lot of work. You tend to not focus on the second copy, the third copy, the fourth copy.
Those can be for compliance, for application development, for backup. That's where, like, EMC, back in the 1990s, really made its mark: being that company. And NetApp, too; we follow the same basic model of thinking about all the copies of the data. To get to NVIDIA: when you're looking at a big GPU system, NVIDIA talks about storage in a... Let me just say it this way: they talk about the primary copy. I won't break it down into all the details. The NVIDIA team is well aware that you have to have copies for the development of the training model. You have to have copies; people who are doing GenAI models in regulated industries have to have all kinds of tracking.
Right.
Which is actually very similar to what we do with our pharmaceutical customers, have to track drug development. So all those snapshots and clones and things that don't exist in the NVIDIA storage, that's where we tend to partner with NVIDIA to provide all those capabilities. And then there's a term in NVIDIA Land called SuperPOD. You'll see that we are a SuperPOD partner.
Okay.
I'm not shying away from that performance tier, but it's super important that the workflow of AI is what we provide.
Got it.
This is where you'll hear George say, "AI runs on data, and data runs on NetApp."
Right.
That's why I'm talking about this whole data pipeline: that's what actually building an operational AI system looks like.
Got it. And then maybe on the scale-out side from hyperscalers, what are they thinking today? Like, how do you fit into their needs? Like, there's a lot of competitors, private and public, that are talking about software-defined networking solutions, software-defined... Excuse me, I got networking on my mind. Software-defined storage solutions, you know, using off-the-shelf memory that meet sort of the scale-out needs-
No
Like what AWS is trying to do.
Yeah, it's evolving. The place we're working the most with the hyperscalers in AI is the one I mentioned: how do you connect on-prem data and public cloud data, and do that in the most efficient ways possible? There's a bunch of tracking in, like, Bedrock. To keep using Amazon examples, I'm going to get letters from Azure and Google. But in Bedrock and SageMaker, there's a bunch of techniques that we do in our FSxN product that we're working with them on integrating. And, to be honest, it's still fairly early days in the NetApp integrations in the AI-
Right
Use cases. With the hundreds of customers I talk about, we're more mature in the-
Enterprise side
Enterprise on-prem business, yeah.
So maybe with the couple minutes that we have left, we didn't touch on the public cloud component of your business. That part of the business just went through a strategic review, which the company announced last week. Maybe if you can kind of walk us through what's left in the business as you see it today, and kind of how it fits in with what you're doing on the traditional enterprise hybrid cloud side of your business. And I think some people are a little less familiar with sort of the different moving pieces, whether it's Spot, you know, Instaclustr, et cetera, some of the more recent acquisitions that you've done as well in that group.
Let me start with a high-level answer to your question. At NetApp, we started a push towards being really strong on the public clouds; it's not quite a decade ago, but it's a long journey. The first thing we moved was ONTAP, our core software; we have different flavors of ONTAP that we ported. And then we started extending it with some acquisitions. And today, if you look, about 60% of our cloud business is from ONTAP, and 40% is from those other products you mentioned: Spot, Instaclustr, and Cloud Insights, and we had a couple of others. When we talk about the strategic, what's the right way I'd say that?
Review.
Review. Strategic review. Yeah, thank you. In the strategic review, we were calling out a couple of lower-ROI projects in our cloud portfolio to move away from, to really refocus primarily on what we call our 1P, ONTAP. ONTAP on the public clouds has a lot of upside in it. We have good growth, we think AI is gonna fuel more. As I've been talking about, the hybrid cloud-
Right
All that stuff fuels more. And then surrounding it, like Spot, Cloud Insights is a product that helps ONTAP primarily do more-
Do more
For customers, and it's an observability product more than anything. We have a cool one that we call Cloud Data Sense, under the covers of this, that helps people with things like ransomware: it looks at what's coming into the system and can predict problems coming into the system. We also protect people on the back end of ransomware, but that front end is a cool cloud product. What am I missing? Oh, Instaclustr.
Instaclustr.
Instaclustr is pretty cool. You can tell by the way I talk, like an engineering geek, that I like products. So Instaclustr runs modern databases, open source databases, as a service, mostly on the public cloud. We're just in the process of integrating that with all our storage assets, which improves the overall economics to the end user of Instaclustr. But we think all those fit together really well for the kind of business we've been talking about. Like you rightly hit on, enterprise is really our first push, and then what you're gonna see us do, and I still think it'll be enterprise-centric, is AI, as AI adoption comes on. That's a modern, kind of cloud-oriented workload or app design style, but pulling that customer into our sphere, if you will, has got a lot of upside for us.
Just maybe one final point on that. When you think about ONTAP being sort of the critical touch point, effectively, or product from the public cloud perspective, when you look at the solutions that maybe cloud customers, whether it's enterprise or hyperscaler, are looking for, not to talk about potential deals, but from a technology perspective, is there something that maybe needs to be backfilled, something that you don't have that customers may be looking for? Is there a way, for example, I'm gonna make it up-
You're gonna-
Is there a way for, like, you to find an asset or develop an asset that talks about power consumption and how you can be more efficient when you spin up a workload, or how you spin up a workload?
Yeah.
You know, is it, you know, these-
We've been in-
Lower priced GPU clusters or CPU clusters, like, is it a way to ration how you actually allocate workloads?
Yes, there is. Partially, that's why we bought Spot, actually.
Right. So as-
Spot's involved in that thinking. We've got some homegrown software, we call it BlueXP, that has some of the capability you've talked about. We're trying to bring this together into a more service-oriented-
Right
Design. Something we didn't talk about: a while back we acquired StorageGRID, which is our object store. And it's really important. We've done a lot to integrate object storage into ONTAP. StorageGRID can run as an independent object store for customers that want that. There are definitely customers doing, like, Hadoop modernization, that don't need all of ONTAP; they like StorageGRID. And then we connect StorageGRID to ONTAP, too, in many cases. StorageGRID, just like ONTAP, runs on the cloud and runs on-prem. And so we're real happy with that. That acquisition was done a number of years ago.
That was a while.
But we're real happy with that one. And then I think you'll see us, just to speak about this generically, there's upside in, it's kind of where you were going, if I generalize, helping customers manage their estates better, making it more fluid. Lately it's been ransomware, you know, protecting yourself from ransomware. All those areas are areas where we can essentially build data services that are sort of an upsell on the base.
Got it. All right. I think we're just about out of time, so maybe we'll just end it there. So Phil, thank you for your time. It's been a pleasure. Gloria, thank you for joining. And thanks, everyone, for joining-
Yeah
The Fireside. If you have any questions, I'm sure Gloria can help you out, and Phil can answer anything technical in nature, and so we're good to go. Great. Thanks, everyone.
Yeah, thanks. That was fun. Thank you.