All right. Hi, everyone, and welcome to Insight 2024. We appreciate your time, especially for everyone who made it in person. Before we kick off, I'm gonna read a Safe Harbor. Each of the 2024 Insight Financial Analyst Tech Sessions may contain forward-looking statements and projections about our strategies, products, future results, performance, or achievements, financial and otherwise. These statements and projections reflect management's current expectations, estimates, and assumptions based on the information currently available to us and are not guarantees of future performance. Actual results may differ materially from our statements or projections for a variety of reasons, including macroeconomic and market conditions, global political conditions, and matters specific to the company's business, such as changes in customer demand for storage and data management solutions and acceptance of our products and services.
These and other equally important factors that may affect our future results are described in reports and documents we file from time to time with the SEC, including factors described under the section titled "Risk Factors" in our most recent filings on Form 10-K and Form 10-Q, available at www.sec.gov. The forward-looking statements made in these presentations are being made as of the time and date of the live presentations. If the presentations or Q&A sessions are reviewed after the time and date of the live presentation, even if subsequently made available by us on our website or otherwise, they may not contain current or accurate information. We disclaim any obligation to update or revise any forward-looking statement based on new information, future events, or otherwise. So with that said, thank you. Let me welcome you again.
As a reminder, today's sessions are all technically focused. We won't be covering any financial information, so this is your opportunity to ask our tech leaders questions about our products, our services, our value proposition, and the competitive environment. I know those are always top-of-mind questions for all of you, so you get to hear it straight from the horse's mouth. After this session, for those of you who are here, you'll have some free time. The keynotes start at 4:30 P.M. You're welcome to go to the keynote. There's a small section of reserved seats that you're welcome to sit in, but you're also equally welcome to go find a customer, sit next to them, and get their live reaction. After the main session, the showroom floor is open, and you can wander around. If you need anything, find one of the IR team, and we're happy to help you.
With that, it's my pleasure to invite George Kurian to the stage. George?
Thanks, Kris.
Thank you.
Good morning. Welcome to NetApp Insight. Thank you for taking the time to be here. We are super excited. We have a lot of innovation payload that our teams have been working hard on over the past year, and we will be sharing it with our clients. We will talk with our clients about the fact that we are at the start of the era of data and intelligence. What we've seen over the last several years, and shared with our clients, is that data-driven organizations are outpacing those that don't have their data well-organized. But now the tools being made available are more powerful than they have ever been, on two dimensions. The first is that you are able to analyze a set of data that is, frankly, for any organization, the preponderant majority.
85%-90% of an organization's data is unstructured data, meaning files and videos and documents of various sorts: conversations with customers, whiteboard sessions, and so on. We are the unquestioned leader in that part of customers' data landscape. The second is that not only are the tools more powerful at normalizing and understanding that data, but they have almost human-like intelligence: they can understand the domain in which they operate without human involvement, and they can switch domains. They can go from one domain to another, or from one modality of operation to another, say from text to video, or video to image, and so on.
And so we talk about the fact that we are at the junction of data and intelligence, and what's required for success is having your data and data strategy well-organized. We said that to our clients last year, and we are reinforcing it this year. I was just out on the road over the last few weeks with clients, and I'll draw a contrast between two banks. One has its data well-organized: they're already using Gen AI tools, they've got a hybrid architecture, and they're making good progress with those tools. The other had a classic, siloed, custom-built architecture with lots of different landscapes; they are a year away from buying an AI computer and then starting to put their data on it. So, profound differences.
You will see, for example, that in industries like life sciences, where regulation has required high-quality data cataloged the right way, with clinical data codes and procedure codes, companies are making much more rapid progress than those that have not had their data well-organized. Surprisingly, some of the more regulated industries are making more progress than the unregulated ones. So data and data strategy are super important. The second is that when everybody has really powerful tools, and all of the world's data supporting those tools, your domain knowledge and insight become even more important.
And then the third is the ability to take your domain knowledge and your data tools and apply them in an iterative test, learn, and adapt loop, so you can graduate some of these projects from proof of concept to production, and then close the loop back to make your data science environment even stronger. And finally, a data ecosystem that enriches your data, just like your business ecosystem supports your business. Those are the four key things we'll talk to clients about. Then we'll talk about the two challenges organizations face in using their data effectively to support advanced use cases like AI.
One is a familiar challenge, which we are skilled at helping clients with: a data management challenge. What does that mean? First, how do I find the data across my landscape that I might want to use for my AI project? Second, how do I govern sensitive data, so I can bring my security and access-control model into my AI landscape and ensure privacy? And third, how do I keep my model environment and my data environment in sync as the data gets fresher, or as data progresses through its life cycle, as it always does? So there's a data management challenge. The second challenge, which we have observed from interviews with around 800 clients over the last year, and which I've personally heard in probably fifty client discussions about the exact same problem, is that AI is being built as a silo. It's got custom networking, custom chips, and it's not integrated into your data landscape.
There are so many clients we've met who said, "Hey, I've got my AI computer stood up, but I can't get data to it." Or, we had a semiconductor vendor that was copying 300 TB of data a week to try to keep their supercomputer moving. I asked the gentleman who led that project, "What's your life like?" And he said, "One word: hell." And he said, "It will not scale the way it is." Now, if you think about that gap, the chasm between your AI environment and your data and operational environment: for us, that looks just like cloud did many, many years ago.
We stood up here in 2013 and said that if cloud were to become useful and seamless, you needed a bridge between your on-premises environment and your cloud environment. We called that bridge the Data Fabric, and we innovated over many years to make cloud and enterprise data work seamlessly together. Today, you will hear many of those cloud partners talk about the AI journey they are taking on with us. So for us, this is a familiar problem, and we are innovating to bring capabilities to the world of AI that don't exist today, along three fronts. First, a set of tools and applications that make it easy to find the data across all of your data estate, so that you can quickly explore it and choose what data you want for your AI landscape.
The second is to bring AI to your data, which is much easier than trying to bring your data to AI. Data is the gravity part of that equation, and we know how to bring AI to your data; you will see two or three transformative capabilities there. The first is the best infrastructure for AI, and we will talk about the third-generation distributed system architecture. The first generation were shared-nothing architectures like Pure FlashBlade, Isilon, or Qumulo. The second was distributed architectures with centralized transaction management, like what VAST Data has. The third is a truly distributed architecture, where transactions and file operations are distributed across the system.
And so that's the second area. We combine that with capabilities that keep model versioning and data versioning synchronized, give you traceability of your data together with your models, and provide highly efficient, patented technologies for data retrieval. Retrieval is the first step of what you call RAG in the inferencing world. And the last is a set of capabilities we have built over many years and enhanced for the world of AI, where you can bring all of your security policies, privacy, and controls across your AI life cycle, and detect changes in the data so you can efficiently apply those changes to your models. So, super excited. We think we have step-function improvements in our capabilities for unstructured data, reinforcing our leadership there.
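The change-detection idea above can be illustrated with a minimal, hypothetical sketch (not NetApp's implementation): keep a content-hash manifest of the data estate alongside each model version, diff it against the previous manifest, and reprocess only the deltas rather than the whole corpus.

```python
import hashlib

def snapshot(files: dict[str, bytes]) -> dict[str, str]:
    """Build a manifest mapping each path to a hash of its content.

    Stands in for a point-in-time data version tied to a model version.
    """
    return {path: hashlib.sha256(data).hexdigest() for path, data in files.items()}

def changed_paths(old: dict[str, str], new: dict[str, str]) -> set[str]:
    """Paths added or modified since the old manifest.

    Only these files need to flow back into the model or index pipeline,
    instead of reprocessing everything on each refresh.
    """
    return {path for path, digest in new.items() if old.get(path) != digest}

v1 = snapshot({"a.txt": b"hello", "b.txt": b"world"})
v2 = snapshot({"a.txt": b"hello", "b.txt": b"world!", "c.txt": b"new"})
print(sorted(changed_paths(v1, v2)))  # ['b.txt', 'c.txt']
```

Real systems would track deletions and renames too; the point is that a manifest diff keeps the model environment and data environment in sync without a full rescan.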
We also have really good announcements in block storage and cloud storage, which Pravjit will talk about, and in the security capabilities in our portfolio, and all of that falls under the umbrella of Intelligent Data Infrastructure. I'm excited for you to hear from our technologists today, but equally from our clients and the partners we co-innovate with. Thank you for coming. I look forward to a great conference.
All right. Thank you, George. We appreciate it. So now we're going to start the first of our Q&A sessions. I'm happy to introduce a new face to you all, Pravjit Tiwana. Did I get that right?
Yes.
All right. Who heads up our cloud storage group? So one of the questions I get from y'all a lot is, you know, what is the value proposition of your cloud storage services? Why do people choose to use NetApp in the cloud? So you can ask directly. Why don't you come on up? Thank you. And before we have you start asking him questions, Pravjit, why don't you introduce yourself, say a little bit about what you do, and then we'll open it up to the audience, and I'm going to switch places with you.
Sounds good.
All right.
I assume this is live now, right?
Yeah.
All right. Hey, everyone. My name is Pravjit Tiwana. I lead the cloud storage group as the SVP and GM for Cloud Storage, encompassing all three of our hyperscaler services, in AWS, Microsoft, and Google. I joined NetApp earlier this year, in March, although I have been in the cloud world for pretty much all my career. What really excited me about being here is that the world is seeing very interesting times with the growth of data: zettabytes of data are being produced every week. If you intersect that with what is happening in cloud, the migration and the momentum, and overlay it with the advent of AI, it creates a perfect storm in terms of the intelligence and capabilities you can build on top of your data.
So from that perspective, with so much data growth, every organization is thinking about how to use AI to further accelerate its business. And when you mix cloud into this, the complexity and scale organizations have to deal with keep growing. That's where, if you look, pretty much every business or organization has some cloud strategy: either they are all in, or they are at least exploring it, or they have one or two mission-critical workloads running in the cloud. And that's where we play a role, because we have our first-party cloud storage services available in all three hyperscalers today, thanks to deep partnerships with all three of them.
We continue to innovate based on the feedback we get from our customers. So I can talk about it, or we can do a Q&A, whatever the--
Yeah.
Yeah.
So, a question we often get from investors we talk to is sort of, how do we think about your competitors playing catch up to the first-party services that you already offer on the public cloud? They obviously go through the marketplace, but in terms of capabilities, how do you think about where the roadmap is to sort of stay ahead of the competition on that front? And do you at all see a risk of down the line, sort of some of your competitors moving to becoming sort of the first-party services on those public clouds? How do you sort of think about that risk?
No. Yeah, sure. If you look at the landscape today, we are the only first-party cloud storage available with all three hyperscalers. Yes, marketplace offerings might be there, but marketplace versus first party is an apples-to-oranges comparison, because the deep integration we have with the entire stack, the whole sales motion, billing, and especially support and operations that you get with first party, is not the same as what you get with marketplace offerings. That said, having multiple players, having competition, is usually not a bad thing for customers. We don't have any mechanism to say who will do it, and we don't need to speculate about who else will be in first-party cloud storage services down the road.
But where we are focused is on being relentless in the innovation we can do on behalf of our customers. The number of capabilities we are building, working with our hyperscalers, is phenomenal, and you will hear about a lot of them: some we announced earlier in the year, and some we are announcing later this week. So you'll see a broad array of capabilities. One thing I have seen, running cloud services at very large scale for a very long time, is that there is no compression algorithm when it comes to things like scale and building capabilities over and over again, iterating on them over the years. Storage, compute, and network are primitives where it takes time to understand how to build scale and operational strength. In my organization, we spend a great deal of time on security, reliability, availability, and performance. Those are not easily replicable by anyone, because it takes years to build those kinds of capabilities, and we are investing in them heavily. When it comes to marketplace offerings, we don't think that's an apples-to-apples comparison today. We also can't say what the future holds for our competitors down the road. But in short, customers having wide choices is not a bad idea. It raises the bar for everyone.
Thanks. And since so many of the customers who end up buying the cloud storage are new customers to NetApp, how does the value proposition for those customers differ from those who would have traditionally been NetApp customers? And what's the most effective way to get those customers on board?
Yeah, you're right. Three out of five customers we get today in our cloud storage are net new to NetApp, and the remaining two out of five are using us in hybrid mode. The good thing about being available in the cloud, especially when you are integrated into the hyperscalers, is that you are available to all their customers, and they can try it themselves and figure out the value we provide.
We see lots of that developer-based ecosystem happening now in our first-party cloud services, where customers start with a small workload and learn how much goodness we bring in terms of performance, multi-protocol support, the data protection capabilities we have, and the security investments we have made. This year, for example, we have been working with Microsoft for almost six to eight months on Microsoft's SFI initiative. So customers get not just our security goodness, but also all the baselines and security controls our hyperscaler partners are defining. The combination of those excites these new customers to start using us.
We have been fairly focused on price-performance optimization in our hyperscaler clouds, and that also resonates with customers. The final thing is the AI integration we are able to provide. Most of the science behind AI is also being driven by the hyperscalers; all three are very heavily invested in building the AI stack, along with, obviously, the NVIDIAs and Metas of the world. Using our storage in hyperscaler setups makes it almost seamless to use the AI stacks the hyperscalers have built, without the need to replicate, clone, or copy data and create silos; they can do it wherever the data resides.
The combination of our security, our AI, our price performance, our operational strength, and the capabilities we have built in ONTAP for almost two decades resonates with new customers as well as with the customers using us in hybrid mode.
Right here behind you.
Hi, it's Tim Long at Barclays. You mentioned AI; maybe a two-parter. Talk a little bit about how your business is affected by a lot of these large language models now. I'm sure it's not major at this point. And then maybe walk us into when we get into inferencing, and there's a lot more on-prem and bursting to and from the cloud. How do you see that dynamic impacting the cloud storage business for NetApp? Thanks.
No, thank you. Most AI is going to run on unstructured data and file data, and data runs on NetApp, so it's a perfect combination for us. We are really excited that with the growth and improvements in AI, which have accelerated a lot in the last 18 months, this is really good for customers. Our mission is to provide AI capabilities, irrespective of whatever LLM models you choose, right there on your data, so you don't have to copy, clone, and those kinds of things.
That's our approach: we don't want to create silos. Earlier this year, we launched Workload Factory for Gen AI. That is a capability that lets you integrate LLM models from Amazon Bedrock with the data you are storing in FSx for ONTAP, with just a few clicks. You don't have to copy it to, say, an S3 object store or any other storage system. And once you have integrated some capabilities, like your Bedrock foundation models, you can start overlaying other capabilities you might choose from the marketplace, from the hyperscalers, or from us. That provides a very rich ecosystem for customers to use AI. And there was a second part of your question; I forgot that part.
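As a rough illustration of the pattern described here (not the actual Workload Factory implementation, which is configured through the console): with an FSx for ONTAP volume mounted as a regular file system, a RAG flow can read documents in place and pass retrieved context to a Bedrock model via the AWS SDK. The mount path, the model ID, and the toy keyword retriever below are all assumptions for illustration.

```python
import json

def retrieve(question: str, docs: dict[str, str], k: int = 2) -> list[str]:
    """Toy retrieval: rank documents by overlap with the question's terms.

    A stand-in for the real retrieval step of a RAG pipeline, which would
    typically use vector embeddings rather than keyword overlap.
    """
    terms = set(question.lower().split())
    ranked = sorted(docs.values(),
                    key=lambda text: len(terms & set(text.lower().split())),
                    reverse=True)
    return ranked[:k]

def ask_with_context(question: str, docs: dict[str, str]) -> str:
    """Send the question plus retrieved context to a Bedrock model."""
    import boto3  # AWS SDK; requires credentials and Bedrock model access
    context = "\n---\n".join(retrieve(question, docs))
    client = boto3.client("bedrock-runtime")
    resp = client.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # assumed model ID
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 512,
            "messages": [{
                "role": "user",
                "content": f"Context:\n{context}\n\nQuestion: {question}",
            }],
        }),
    )
    return json.loads(resp["body"].read())["content"][0]["text"]
```

The `docs` dictionary could be loaded straight off a hypothetical FSx mount, e.g. `{p.name: p.read_text() for p in pathlib.Path("/mnt/fsx/corpus").glob("*.txt")}`, with no copy to S3 or any other store, which is the "no silos" point being made.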
Inferencing, hybrid cloud.
Hold on, Tim.
Oh, yeah. Yes. So let's talk about hybrid cloud, because we see very significant growth in our hybrid cloud setups year over year. Taking that same case of what we have done with Amazon Bedrock, using LLM models against the data you have stored in FSx for ONTAP, you can also combine it with your on-prem storage. With SnapMirror, you don't have to copy the whole dataset; whatever dataset you want to use for inferencing or your chatbot, you can bring that data into FSx for ONTAP. And you will see these kinds of patterns with our other hyperscaler partners as well.
Working with our hyperscaler partners, we want to provide a mechanism where you don't have to say, "Here is AI for cloud, here is AI for on-prem." We want to provide seamless integration, and that's the path we are on. On inferencing, we support some of those capabilities today for your data, with our hyperscaler partners. Earlier this year we launched the GenAI Toolkit for Azure NetApp Files as well as Google Cloud NetApp Volumes, which you can use against your proprietary data. With a few clicks, you can build RAG interfaces on top of it, and build applications like chatbots and knowledge bases.
So the inferencing and RAG parts, the GenAI Toolkit, and our Workload Factory all support building your whole RAG infrastructure with just a few clicks. I highly encourage you to look into these. We are doing demos this week in different sessions and also on the floor, so if you have time, please do take a look, because these are really exciting, step-forward capabilities.
Steve Fox with Fox Advisors. Maybe just to step back: you mentioned a couple of things on the roadmap, but can you talk big picture about how you envision the roadmap? Without, I guess, giving away the announcements you have this week, what are the big things we should think about that you're focused on improving over, say, the next 12 to 18 months?
In AI or in general?
In cloud storage.
Overall, in cloud storage, the majority of our roadmap is driven by our customers; whatever we are building is based on their feedback. So let me talk about the dimensions we are working on, without going into what lands when. We know customers have trusted us to run a lot of their mission-critical workloads, be it their ERP systems, their databases like SQL Server or Oracle, high-performance computing, or streaming content. The list goes on in terms of the kinds of workloads they run with us.
From that perspective, our roadmap is highly focused on making sure that if customers have to run their mission-critical workloads, we are the best destination for them. That's one dimension of our focus. The second dimension is what we discussed about AI. Our mission is to bring the best of the AI stack to the data you have entrusted to us, be it in cloud or on-prem, and to provide rich capabilities so you can build inferencing, RAG interfaces, whatever you want, in an almost frictionless manner. That's our second dimension.
The third is cost optimization. In the recent past, this year, we have shipped a lot of capabilities. Our customers have told us they love capabilities like auto tiering, which has now been available in all three clouds for a good period of time, and which moves data from the hot tier to a cold or different tier based on data access patterns, without the customer having to do anything. Plus all the ONTAP richness: compression, compaction, deduplication, thin provisioning, and so on. We'll continue to focus on price-performance optimization. Our goal is that cost should never be the reason you don't select our first-party cloud services. And the fourth dimension, which we will continue to focus on, is hybrid.
We understand that different customers are at different stages of their cloud migration, but irrespective of the stage they are in, we want to provide the best capabilities so that not just migration but also deployment and operations of the workloads and data they bring to us are highly optimized. That's why we'll continue to invest in things like disaster recovery and data mobility, coupled with the elasticity you get in cloud, in terms of instantaneous cloud capacity, bursting, and those kinds of capabilities. So it's those four things: workloads, AI, price-performance optimization, and hybrid.
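The auto-tiering behavior described above can be sketched in a few lines. This is a conceptual illustration of the policy (data moves to a cheaper tier once it goes cold), not ONTAP's actual algorithm; the 31-day cooling window and the file names are assumptions for illustration.

```python
from datetime import datetime, timedelta

COOLING_PERIOD = timedelta(days=31)  # assumed threshold for illustration

def assign_tier(last_access: datetime, now: datetime) -> str:
    """Place data on the cold tier once it has gone unread past the cooling window."""
    return "cold" if now - last_access > COOLING_PERIOD else "hot"

now = datetime(2024, 9, 1)
volume = {
    "invoices_2021.db": datetime(2024, 1, 15),  # untouched for months
    "orders_live.db": datetime(2024, 8, 31),    # read yesterday
}
tiers = {name: assign_tier(ts, now) for name, ts in volume.items()}
print(tiers)  # {'invoices_2021.db': 'cold', 'orders_live.db': 'hot'}
```

The appeal for the customer is exactly what is described in the answer: the policy runs continuously against access metadata, so cold data drifts to cheaper storage with no manual action.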
Pravjit, not sure if this is your focus, but on Cloud Ops, the Cloud Ops part of the portfolio: can you talk about how you think about its value add relative to the cloud storage part of the portfolio that you're more focused on?
Yeah.
And then when you think about inferencing as well, in terms of AI workloads, do you see the value proposition of Cloud Ops changing on that front eventually?
I'm pretty sure Haiyan is also talking today about Cloud Ops. I can give a little bit of a high level, but--
Yeah, so we don't have Haiyan today. But if you could be very high level. And I can always follow up with you, Samik, on any Cloud Ops questions you want.
Yeah. We have a different leader who runs the Cloud Ops portfolio; I'm sure you know her. We have a bunch of places where we have synergies in the hyperscaler clouds, especially around things like the cost-optimization goodness that comes from the Cloud Ops portfolio, or the observability stack, which we can integrate with. I won't say we are just looking into it; many of our customers are already using it in that fashion, especially around the observability stack and the cost-optimization aspects. We will continue to work with Cloud Ops in that sense, to build a comprehensive portfolio.
That's where all the aspects of our unified storage, our data protection, and our Cloud Ops portfolio work together to provide a frictionless way to manage your workloads. I'm not super deep into the Cloud Ops portfolio, but this is what I know of what we are doing.
Okay, we have a question online.
Yeah, this is from Aaron Rakers at Wells Fargo. How have NetApp's share gains evolved for block storage? I'm not sure this is a cloud-specific question.
No, sorry, what's the question?
Yeah. So Aaron, I'm gonna have you hold that until Sandeep comes on. But I do think you can talk about block storage in the cloud, because I think that's the thing people don't expect to hear from us.
Yeah. Actually, block storage is the part of our cloud portfolio, in our first-party services, that personally caught me by surprise when I joined. In the last one year, we have seen 140% growth in block storage in our first-party cloud services. So it is pretty significant growth for us, and we are really proud of and excited about it. And the good thing is, most of these customers go to, say, the AWS console or another console and self-provision their block capacity, because they are familiar with it, and they know how much ONTAP richness we have built into our block storage portfolio over the years, and they can use that in the cloud too. So yes, block is an integral part of our strategy, as well as of consumption by our customers. That was the question about block, right?
Yeah.
Thank you. Wamsi at Bank of America. I want to ask you a little bit about cross-cloud implementation: with different cloud providers, the technology is implemented in kind of different ways. As customers deploy, say, AI workloads, do you think one architecture is favored over the others? And what are some of the benefits of one versus the other, if you see any?
Yeah. Customers use us through the interfaces, be it the console or the APIs or the SDKs. In general, the nitty-gritty of the architecture is opaque from the customer's consumption perspective. From that perspective, our goal is to bring uniformity. We have uniformity in the protocols you use to access data; irrespective of which cloud you are in, they are common. And then we have BlueXP, our manageability layer, which you can use to manage a multi-cloud setup, so it gives you a single interface.
So the meta point is that, based on our customer feedback, we are trying to make it uniform so the customer doesn't have to worry about whether it is one implementation behind the scenes or another. They get the same performance and the same capabilities in one hyperscaler as in another. On the second part of your question, whether it's better to do it way A or way B: I don't think there is a perfect answer in computer science for these things; both have their merits. On the GCNV and ANF side, we pretty much do everything through the service delivery engine capability we have built, so there is a lot of unique property there.
FSx chose a unique way to do it, and the kind of scale growth we see with that is humongous. But customers are not bothered about that. We have a lot of customers who use us in multi-cloud setups, using FSx alongside ANF or something like that, and we make it seamless for them.
And if I could just follow up, for the customers, to Meta's question: you've got three out of five customers who are new to NetApp on the cloud.
Yeah.
So, for the ones that are not, is their AI deployment, whatever they're doing with it, more on-prem centric or cloud centric? And are you seeing any signs of early migration of AI workloads on NetApp to the cloud?
We are seeing signs, but you also have to look at the landscape of the businesses. A lot of businesses today are in learning mode, fine-tuning models and those kinds of things. That's why I was talking about our GenAI Toolkit. Customers are showing a lot of interest, a little more than just POCs at this point, but in the fullness of time, I truly believe it will be one of the core plays, especially using AI seamlessly between cloud and on-prem.
Okay, Louis Miscioscia, Daiwa Capital Markets. Similar to Wamsi's question: this was all deployed at various different stages, and some of these cloud customers were yours first. Where are you with all of them, in the sense that you just talked about trying to be uniform, but are some more advanced, and are you trying to take those more advanced features and share them with the other ones? I'm just wondering if there are some you're doing a lot better with, and if so, how you would take that to the others.
You mean, like, the capabilities, say, in Azure versus capabilities in AWS versus Google, in that sense?
Yes, and total revenue and growth and, you know, all the things we care about.
So just a reminder, we're not doing any financial updates.
Sure.
Pravjit will not talk about the financial ramifications. But he can talk about the high-level capabilities and partnerships across the clouds.
Yeah. I think you have to look at a broad spectrum of things here, right? If you see on the AI side, it's a relatively new space, so the standardizations and uniformities will take a little bit of time to land, not just between hyperscalers, but in the industry in general. But when you come to the other things, like how you access your data by different protocols, we provide capabilities like, hey, you use the same protocol to access, whether it's Windows or Linux, without needing to duplicate the data, or the unified storage capabilities. From that perspective, there is a lot of uniformity which we are bringing in terms of access, as well as through our BlueXP framework and capabilities.
But in the fullness of time, we do expect our customers to start using more integrated services from these hyperscalers. I think that's a natural progression. We don't want to control it or be in the middle of it. Where they find value, they should absolutely use that. And yes, there will be a little bit of difference: maybe the object store in Amazon is different than Microsoft's, or the security, or if they want to use some other capability from them, that might be different. But that is the part which we're trying to make seamless through our BlueXP arrangement for multi-cloud setups.
Okay, thank you.
Thanks. Understanding from the customer standpoint, you know, they're most interested in kind of being able to not have to replicate all of this copying from various sources. I guess from a cloud customer standpoint, is there anything that they're asking you to kind of work on developing other than just accelerating, kind of the back and forth that they think can kind of help optimize their services?
I assume your question is in reference to AI, right?
Yes.
Yeah. Okay. Yes, we are deeply integrated with our customers. There are a lot of capabilities which we are building in terms of how to manage your RAG-based infrastructure. That's the part which I was talking about with our BlueXP Workload Factory. Now, you can enable that with a few clicks. What customers are asking us is to simplify the complexity of AI. There is a lot of complexity involved in that today: how you set up your infrastructure, how you bring your foundation models, how you build inference on top of it. So what they are telling us is: "Hey, we are trusting you to store our data.
We are running our workloads on top of you, so make the whole AI ecosystem also simplified." So our goal is to make it frictionless and simple. That's what we are doing, and that's what our customers have told us to do.
I guess I mean the cloud--
Hey, hold on. Sorry.
Like, I guess I mean, what are Azure, Google, Amazon asking? Like, are they asking you for anything different than your customers are asking you for?
I wouldn't frame it as an ask, but yes, we are deeply partnering on how to simplify it for our customers. So we have a lot of engineering and technical conversations with our hyperscaler partners to make it frictionless. Our hyperscaler partners fully understand that a lot of the world's data is on NetApp, and they want to utilize that with AI. So from the simplicity perspective, yes, we have a lot of integration discussions, a lot of tech discussions with our hyperscaler partners, and that defines the roadmap and the capabilities which we build in the cloud.
Sorry, just a clarification. So, when you do all this work to simplify, let's say, RAG for your customers, and the cloud providers want you to do that, does this come in as a feature that you offer to enterprises? How do you get monetized for it? Are you just dependent on eventually more data finding its way to the public cloud, or do you then have a feature that you can actually monetize at a premium with the customer after putting in that work?
I think the monetization part we can discuss a little differently, but this is a core capability which customers need today, and especially in the future. If you bring goodness to what customers want, the monetization part follows separately. Right now, our main focus is to make sure that we make AI seamless to work with. Yes, there are some monetization aspects of that. We haven't fully built that model; I don't think anyone has built the full model yet on the AI monetization part. But in the fullness of time, we do expect it to grow our business. And I think Hoseb and others are going to talk more about AI. Is it in the next round? Yeah.
Yeah. And we have time for one final question on cloud storage.
Great, thanks. Pravjit, can you talk about your understanding of how the hyperscalers' key considerations for build versus partner have changed over time? Obviously, each one is different, and they do source, right, a lot of the raw components. They could do a lot of it themselves, but you talk about complexity, about scale, and you hit on a lot of these topics. But I guess, if you really were to narrow it down, like, are we...
Like, the crux of the question really is, are we at a tipping point in terms of there being only so much that they can do themselves, such that they'll lean more to third-party services and native first-party services like a NetApp's, and therefore the floodgates have opened or are about to open? Just give us your thinking.
No, I get the sense of your question. I don't want to say on behalf of the hyperscalers whether they are at a tipping point or not, but what I can say is that one thing common across all three hyperscalers is they are very good at listening to their customers. Their customers are telling them, "Hey, we want..." Take the case of NetApp storage: customers are telling them that they want those capabilities in the cloud, and they are responding to it in the right manner. Now, there are two parts. We have built a lot of innovation, a lot of capabilities over the last two decades, as I was talking about earlier.
There is no compression algorithm to basically say, "Hey, we can build the same capabilities in the next ten months or something." That's part A. And part B is, we ourselves, on behalf of our customers, have not stopped innovating either, so we are also accelerating that. The combination of what we have built, the richness over the years, and our laser focus on the dimensions which we've talked about will continue to keep all of us relevant, and from the customer's perspective, it's also the right thing to get these capabilities this way. Our hyperscaler partners understand that. It's not a we-versus-them kind of situation.
All right. Well, thank you, Pravjit. I appreciate you coming up and taking all the questions, and handling all the AI questions. I really appreciate it. So thank you for your time today.
Thank you. Thanks for having me here.
We'll look forward to your next meeting.
All right. Thank you so much.
All right. Thank you. And now, because AI is such a hot topic, we have two presenters who are gonna come and handle all your questions. So happy to invite Russell and Hoseb up. Some of you have heard from them before. They've done a lot of work for us. So before we kick it off with what I'm sure is an endless stream of questions from the audience, why don't you each introduce yourself and talk a little bit about what you do related to AI for NetApp?
Please go ahead.
Oh, sure. Russell Fishman. So, I lead solutions product management for AI globally, yeah. That is one of the ma--
Russell, your mic is another one.
Isn't it on?
No, you turned it off.
Is that better?
Test.
Can you hear me now?
Russell's free.
Yeah, he's not on.
I just turned it off. I think it was on.
Oh, now you're on.
All right, now we got it.
Yeah, okay. That's weird. I'll switch it back on again. So, Russell Fishman. I lead product management for solutions at NetApp, and AI solutions in particular. My responsibility, partnered very closely with Hoseb and our product and engineering teams, is making AI real for our customers by taking products from our portfolio and combining them with third parties, primarily to create complete use cases that help our customers adopt and accelerate their use of AI.
Hoseb Dermanilian. Good morning, everyone. I run AI sales and go-to-market for NetApp. I've been with NetApp for 10 years and been doing this for six, so I'm very fortunate to be on this journey for almost six years now with NetApp. I'm happy to answer any questions you have about what customers are using NetApp for, or how we see the market growing from a customer standpoint. Please, Tracy.
Very big picture question. So George talked about the journey from cloud to the Data Fabric at the start in 2013. So now you're starting on a similar journey. Any mistakes that you would warn us about that, you know, you're gonna have to overcome as, you know, AI evolves? Because I think a lot of Wall Street's concerned about the hiccups, not the endpoint.
So you want me to cover on the journey, or?
[audio distortion] Just, like, what you're looking for and, like, what are the potential problems or challenges?
Oh, potential problems and challenges. So I think what we are seeing is customers today are trying to use these large language models, but the first trial is not as good as they want it to be, not good enough that they can deploy it in their own enterprises, right? If I'm a large retail customer and I wanna provide a robot that will answer support questions, that robot cannot just be trained on a data set that was available to the entire globe, because it might answer questions differently. Some of these customers are new even to these LLMs, and the models cannot identify the name of their CEO. So I think that is the biggest challenge: understanding that they need to bring their data to these models to make these models more specific.
There's an overall expectation today that they can just go and use these large language models and they will apply to every problem that they have. The biggest challenge we see is that expectation becoming a frustration, where people either stop using the technology or they adapt: "Okay, if we wanna use this, then we're gonna do this, one, two, three, four." Bring the data, fine-tune the model, RAG the model, et cetera. So that, from my perspective, is one of the challenges. The other challenge, you could tell, is cooling, power, and data center capacity, but I think, for the wider enterprise, the hyperscalers will be a big player in this.
So, you know, even talking to customers, those who don't have the power and the cooling and data center capabilities will go to the hyperscalers, but then they will face these LLMs not being trained specifically on their own data, right? So it's a mix of both. Either they fix the cooling and power for their data centers and train these models from the ground up in their data centers, or they bring the data that exists today on-premises to these large language models and open-source models at the hyperscalers, but then that becomes the data challenge. Do we move the data? Is the data sovereign, and all that. So that's what we see.
I might add to--
Go ahead.
To second what Hoseb said. If I look back at that transition, as you said, to the Data Fabric, one of the big challenges we had was we had the technology, but actually getting customers to adopt it was complex, right? It was difficult. It required knowledge and sophistication on behalf of those customers. It also required knowledge and sophistication on behalf of the partners that make it real for most of the customers. One of the things that we're really focused on, on this AI journey, is to accelerate adoption. And that really means having a fantastic set of partners that we work with.
Obviously, Hoseb mentioned the hyperscalers, but on the go-to-market side, we already have our Partner Sphere program that focuses on getting AI adopted by our partners with our technology portfolio, and we're also focused on integrations with third parties. We already have this amazingly rich set of ecosystem partners, 'cause we don't believe we can do it all ourselves, and we don't believe that we're the only people customers need to work with to deliver AI. We think we're an accelerator. But by focusing on that complete picture, we're helping people adopt.
That's really this kind of inflection point of democratization. Hoseb's been brilliant at helping customers who want to get ahead, who are willing to go and build things themselves, who maybe have the data sets and that sophistication. What we're starting to see, of course, is many more customers who want to adopt without having to go through the development, almost like just buying it off the shelf. So that's the inflection point that we're starting to see, and all those integrations that we already do are the way that we're already helping with that.
Yeah.
Hi, Tim.
Thank you. Maybe you could talk a little bit about the most common use cases, applications that you guys are seeing NetApp getting involved in. You know, and how broad is that set? That, you know, if you highlight two or three, how broad is the set, and how do you see the overall set growing?
Yeah, absolutely, Tim. I'll cover this first.
Yeah.
So, I'll do it over the past six years, the journey, right? We started six years ago when we certified our storage with NVIDIA DGX-1s. That's what we saw customers doing. At that time, it was deep learning and neural networks. There was no Gen AI, no large language models, et cetera. It was mostly customers who were developing their own models in-house. For example, we have one healthcare customer; six years ago, we deployed NVIDIA DGXs with NetApp ONTAP to train models to detect anomalies in X-ray images, et cetera. So image processing and computer vision was a very big use case, and it's still a very big use case, before and after the launch of large language models.
And then over time, we started seeing customers build AI centers of excellence. These are large customers. You know, we have some of them here at the conference; they're gonna speak later on. It's not five users. They have multiple data scientists scattered across the nation, as well as across the globe, where one of them will train a model to detect fires in the forests, and another will detect, you know, something in a war zone, et cetera. I'll give you an example of a system integrator who does multiple different things, and they wanted to build an AI center of excellence. Again, this is based on the GPUs they purchased in-house, with the data that needed to feed those GPUs.
Obviously, with large language models and the whole ChatGPT and Gen AI boom, now we're starting to see those customers who, as I mentioned, want to leverage the goodness of the cloud, but their data has gravity, and their data is on premises. Those use cases are mostly AI agents. Customers want to build AI agents to respond to support or to customer service, and now we're seeing people writing software through AI, so that's another use case we're seeing. This is where cloud and on-premises come together, and NetApp is actually the connecting tissue from the data perspective. So that's the evolution of the life cycle we've seen.
Now, aside from this, we have also seen a handful of customers, as you probably are witnessing, who are building their own large language models. It's probably not as big as the hyperscalers, but it is big enough to call it a supercluster. You know, NVIDIA likes the term SuperPOD; they call them SuperPODs. I wouldn't say it's the majority of the enterprise, because again, back to the cooling, energy, power, all those requirements. But we have seen customers actually build those large superclusters specifically to train foundational models. And this is because they either wanna provide this as a service to some other folks, they wanna compete with the hyperscalers, or they just don't wanna leverage someone else's model. So we have seen that type of customer as well. I hope that answered it.
I might add a couple of pieces.
Yeah, go on.
So you talked about personal productivity, chatbots and copilots, enterprise knowledge management: obviously, pretty broad use cases that have, from a vertical perspective, broad applicability. But what we're starting to see, absolutely, is more interest in highly verticalized solutions. So we talked about LLMs, and now we're talking about SLMs and XLMs, those that have been specifically augmented with the knowledge that's necessary in each particular industry. There's gonna be an explosion of that, and absolutely, we're seeing it in the market. That's probably closing the gap between folks trying out these generative AI solutions and actually making them very useful in a more specific context beyond just general productivity.
So, you know, what you see, for example, with Microsoft and their Copilots, those are very broad, and then moving into industries like legal, finance, healthcare in particular, life sciences, we see these very specific versions of LLMs or SLMs coming into play. So, I mean, that's definitely where things are moving.
Can I just ask, when you talk about go-to-market on this front, how is that different from the traditional infrastructure go-to-market, in the sense that if a customer today wants to sign up and sort of get more educated from you in terms of what that technology should look like, do they want a full understanding of the complete roadmap, including going up to inferencing, before they even sign up, in terms of what a first step looks like, and how would you compare those two aspects?
Yeah, I'll answer that, and Russell, you can add. So it is different than a typical "Hey, I want to tech-refresh my storage; I have a 2-petabyte requirement," where they put an RFP out. Definitely, it's a different sales cycle. We're seeing cycles of six to nine months sometimes. We're seeing even faster ones, if they know what they want to do. But definitely, the conversation starts from, "Let's look at the ROI. Let's test this and do a POC, a proof of concept, and then move on into integrating this with the stack." Usually, as Russell said, it is not only a storage conversation, so it is a conversation where we are sitting in the room, usually with NVIDIA, and usually with our partners, like Domino Data Lab and others.
Because, again, this is like the cloud probably nine years ago, right? They want to understand the value of doing this, first of all, so it starts from an ROI perspective. And then it comes to, okay, what's the stack? It's the MLOps provider, it's the GPU provider, it's the storage provider, and then the people who are doing services on top of that. And that's the reason we at NetApp put together a specialist team of highly specialized AI sales personnel, including technical folks as well, who can really engage in these conversations in a more deep-dive way rather than just talking about storage.
Because, as you said, it is a conversation that goes from ROI to POC to a kind of purchase, if you like, in the end.
But there's an evolution here for sure. I mean, it has definitely been a highly specialized sale. It's generally been driven by the line of business and specific AI practitioners inside our customers. What we are seeing, though, of course, as it becomes more mainstream, is that more of the components inside a company necessary for turning a POC into production are getting involved in that sales process. So, as well as these new buying centers that Hoseb talked about us focusing on, even our traditional buying centers are much more involved in those purchasing decisions.
Again, mostly because these other folks that have been involved in making AI real are generally quite good at getting it to a POC, but when it comes to productionizing and scaling it, that has been a massive challenge. So the more traditional folks that are involved in making those systems production-ready are now involved much earlier on in the sales process, because they know that eventually it's going to come to them.
And IT is definitely getting involved more and more. I mean, six years ago, one of our first sales was to cardiologists in one of the hospitals in the U.K., and we didn't even talk to IT folks. The budget was sitting in the cardiology department. They were buying everything, and they were just asking us questions about how to manage this data. Now, we're seeing IT being engaged more and more, especially with GenAI, where we are seeing data being copied all over the place, data being replicated all over the place, the security, the governance. It all comes back to IT in the end, and people will ask: Why is this data sitting somewhere that is not certified?
That's why IT is starting to gain control over these projects more and more, because of all the challenges that now come with the data. In the end, listen, I think you can have the best models out there, but if you don't have your data grounded to those models, you're not going to get anything out of them, unless you're doing just basic stuff, which we all do nowadays, like "Write me a job description." If you want to do that type of AI, yeah, you can; it's available out there. But if you want to put this into a real ROI, that's where IT is saying, "Okay, you need access to the data. This data is sitting there. This data has privacy attached to it. I cannot move it to the cloud," et cetera, et cetera.
Thanks. Lou Miscioscia with Daiwa Capital Markets. Creating IT applications is very difficult and takes a lot of time, and you guys have a front-row seat to that. Just curious as to where you think things are right now: maybe the quantity of proofs of concept, and when they might actually be deployed into real applications. Are we talking about months, quarters, years? We're just trying to understand, even though AI is transformative, is it still three or five years out, or something a lot sooner than that?
Do you wanna?
Well, I'd start by saying that we've been doing this for a number of years, and we have a lot of customers already very much in production, so there's nothing stopping people going to production. I think one of the things I was talking about earlier, though, is just how broad it's going to get in terms of customers that haven't necessarily invested in the sophistication necessary to go build it themselves, versus those that are more likely just to move to what I would call a value phase, going straight past development, straight into value. So as an industry, if we're talking on an industry basis, we're definitely seeing a move towards more turnkey solutions. That will shorten those sales periods and those moves from POC directly into production significantly.
But that's an industry-wide thing, and some of those mentions I had of more industry-specific SLMs, for example, would be a good example of what would accelerate that. Hoseb?
No, you're right. And I think one of the things that needs to happen to accelerate this is that those who already have things in production need to start sharing, somehow, what value they have achieved by doing this. Because a lot of customers right now are asking, "Hey, who else has done it? How have they achieved it? Did they get anything out of it?" And we know a lot of customers who have done it, but they don't want to share it, right? So I think that's the biggest challenge as well. I mean, we heard the AWS CEO a couple of weeks ago, I think it was on the earnings call, mention how AI saved a lot of hours of work compared to the past.
I think those stories will accelerate the adoption, but it depends who you're talking to. If you're talking to small to medium-sized businesses, it's still very much early on. If you're talking to, you know, the Fortune 50, they are already advanced, I would say. And that middle piece is in the POC stage right now, if you would like to categorize it that way.
All right, thank you.
Yeah.
All right, maybe building on the last question: where do you see the biggest bottleneck right now? Is it getting my arms around data governance and security? Is it what data do I even have that I could train on? Or is it the productized, turnkey piece, like, I just don't have these capabilities in-house, and I need a product that I can just buy that can help me with that? Where's the biggest bottleneck?
There's an interesting thing here, in that it depends who you ask. Typically, when we talk to customers that haven't gone on this journey yet — and actually, we published a study with IDC a few months ago that talks about this particularly — when we go to organizations that haven't really gone down this path yet, they typically don't see it as a data problem. They think, "Oh, you know, this is an infrastructure problem. It's a this, it's a that." But all the ones that have already started down this path realize very quickly it's a data problem. Data is the number one issue that is actually holding people back. And that's, of course, where we can step in and really help.
That journey has been super interesting to see. Customers will go ahead, think, "Oh, we can just move forward and try this out," and then they hit this data wall, right? The issues that they have are: where is the data? What types of data do I have? Do I have the right sort of data to go do this in the first place? That sort of assessment often trails the decision to move forward because they haven't even thought about it, and they often don't have a good enough handle on their data estate to start with, to be absolutely frank with you. But the governance piece should not be overlooked.
The reality is that AI, to a certain extent, has been a little bit like the Wild West. Money has come straight from the board. It's been given to a line of business or a bunch of AI practitioners to go and just do stuff, and in fact, if anything, to go break some plates and cause a mess on purpose, because the feeling is that the traditional structures inside customers are set up to slow down innovation. So the idea is, "Hey, can I get ahead of my competition? As you say, this is transformative, so let's go and do that." Of course, the reality is that the regulations are starting to come into place.
The EU AI Act obviously came into force beginning of last month, and it stipulates a bunch of things that customers need to be thinking about legally, and it puts them on the line for significant penalties, not unlike what happened with GDPR. Of course, they're not enforcing it yet, and that enforcement period will start to ramp up over the next 6 to 12 months. But this is really bringing into focus the need to understand and control data and manage data and ensure the right types of data are used in the right way. And so that Wild West mentality that was pervading this whole industry, that's gonna move away very, very quickly. And listen, just like we saw with GDPR, we expect other regulatory environments to pick up similar rules as well.
So that's just gonna be the start of it.
No, I think you hit it on the right point. Yeah.
Oh, yeah. Okay.
Go ahead, Wamsi.
Okay, thank you. I was wondering if you could maybe just contextualize for us, at a high level: are you seeing signs of incremental data movement from, let's say, tape or something like that, where data's just been sitting over there, and now we're going to train using this data? Are you seeing any movement like that? Is there any reason to think that the rate and pace of data growth is actually changing with AI, and how so at your customers?
Yeah, as George mentioned on the last earnings call, we are definitely seeing data lake modernization projects happening more often than before. We attribute that to the fact that these people are trying to make their data AI-ready, basically, to start moving to the next step of using these tools. Now, how much that's gonna change, I can't quantify for you, but we are seeing that data lake modernization. What's also happening, Wamsi, is people who have built their data lakes on old workloads: now that GenAI is requesting more access to that data, because it used to be cold, right, they need to bring that back to life and reaccess it.
Some of the technologies that were available in the past, or even today, are not really up to it. They're either creating more cost or not operating at the speed customers need. So we believe that's why the data lake modernization projects are bubbling up more. Now, I don't know if they're moving from tape or not. Historically, these data lakes have been sitting on servers with a bunch of drives in them, so I think the move was from tape to those servers, and now we're seeing that becoming more unified. Also, a lot of these technologies didn't have much cloud connectivity, especially if you're a heavy on-prem user.
That cloud piece is now coming back in: "Hey, I need to use the tools in the cloud, but the data is sitting cold." We are seeing data lake modernization for sure.
Thank you.
Hi, thanks. When you get the lead AI person and the lead storage buyer in the room together, and they're evaluating the AFF A-Series, and I guess more recently the C-Series, can you confidently say at this point that... 'Cause when you look at those two offerings, you offer built-in tiered storage, you've got data replication on-prem to the cloud, security, governance capabilities, privacy, the things the last few questions were asking about. At this point, can you say that this checks all the boxes from your--
The A or the C or both?
Or if you could talk about both, and how do you feel those compare? If the answer is yes, that we can confidently say that, could you have said that prior to these two SKUs?
So I would start by saying that the AI practitioners couldn't care less about A-Series or C-Series, right? The storage guy, certainly, is interested to talk about it. But the way that we would actually engage with an AI practitioner has much more to do with understanding their AI data life cycle and how our portfolio of products is able to track that data throughout that life cycle, right? Make their lives easier. So that means things like productivity, but also creating an environment where the guardrails are automatically in place from a data governance perspective, which gives them the freedom to go do what they want to do without putting their companies at risk from a regulatory perspective.
So that would be more the conversation that we'd have with an AI practitioner. And up to this point, I would say that the infrastructure people have kind of been on the back end of that conversation, so they've had things thrown at them, and then they've had to pick it up. And then, of course, there's the realization that picking up stuff that was never built for production becomes very, very challenging when you try to scale and move it, and move to the value phase of an AI project. So we are starting to see more of these folks come together in those conversations. I'm sure Hoseb will talk about it in a second, but, you know, the way that we talk to these different personas is different. I mean, they--
Again, you know, an AI practitioner does not care about storage, but they absolutely care about the value we deliver. And we have this very broad ecosystem of commercial partners, open source partners, and cloud partners that enables us to take our value and expose it in a way that's meaningful to those personas, and that's how we really engage with them. But I'll let Hoseb take it.
Yeah, and back to the portfolio, I think the good news here is that all of them run the same software, right? And it all comes down to what the customer wants from a workload perspective. If it's a data lake, it's different than if they're training a model, right? And that comes down to a question of performance requirements, cooling, energy, power, and all that. But again, one thing that differentiates us in this market and puts us out in front is that our portfolio is well positioned to capture all these different requirements, while at the same time keeping the same operating system running, whether it's in the cloud or on premises. So I hope that answered the question.
We have a question here.
Hi, Antoine Legault, Wedbush Securities. I just wanted to expand a bit on your initial comments about converting a prospective customer into an existing customer -- you know, going through the proof of concept, which might take about six to nine months, or even shorter in some cases. Can you walk us through what an upsell might look like once a new customer has been on the platform and has been using the solution? Do they come to you and say, "Oh, we really like the value. Do you have more to offer? Can we explore other products?" Or do you go to them? Can you walk us through what that might look like?
Yeah. So, you know, the reality is, for a lot of these customers, the heavy budgets are going to the GPUs, and then when it comes to storage, they're like: "Hey, let's start with something that works." Because we all know where the expensive piece is. The upsell there is, now that they start training these models, obviously the more good data you bring, the better the model gets, right? And that is the upsell, where we will say, "Okay, now we want to train a bigger model. We want to put more parameters into this model. We need to bring a bigger data set to this storage," right? And this is where we see it expanding.
The other thing, back to the data lake piece, is, again: now that we have this system up and running, my data sits on different storage, different platforms, different architectures. How can we make it easier to feed? If I don't want to expand that compute cluster, can we bring this under the umbrella of ONTAP? So the upsell is that now we can even modernize their data lakes to be on NetApp, right? And sometimes we even start as large as they need. Not every project starts small or big; there's no set size here in terms of capacity.
But even if we start small, let's say, there's an opportunity where the model is growing and they need more data to put in there, or they need to modernize their data lake to overcome the data gravity of feeding these GPUs, if you like. And that could be an opportunity on the flash side, on the object side, and in different parts of the business.
There's another aspect I'll just add to what Hoseb said, which is that when we win the centers of excellence -- when you win those deals -- as more workloads come on, we get the capacity expansion. That's the reality, right? Customers are attracted to us because of the singular control plane that we have both on-prem and in the cloud. We believe that AI is probably one of the most hybrid workloads the industry's ever seen. We expect customers to continue to consume resources in various places. GPU accessibility is one good example of that, but there's also the reality that data gravity exists in different places, and the data sources are not always neatly in one particular area.
So, you know, that's what attracts them to us, and so we're in a really good position to attract more of those workloads, wherever they may end up being. So if a customer wants to, for example, have something on-prem, but then consume, first-party, hyperscaler PaaS services around AI, we're still the right partner to do that. So when we establish ourselves as the standard, we tend to get all the workloads, as opposed to some of our competition, where they get one area in like on-prem, and they're having to fight again to go win it somewhere else.
I'll give you an example, and I can't quantify how many of these will happen as well. At a large healthcare customer where we were not present, they purchased the DGXs with NetApp because of the capabilities we had showcased to them in the AI space. Now, because they like all the goodness of what they've seen from an ONTAP perspective -- and back to Russ's point -- we are now replacing some other competitors who've been running the old SAP, Oracle, and other workloads.
All right, we have time for one final question. I see now everyone wants to raise their hands.
Five more minutes.
Last question, Tim, to you.
Thank you all.
That's it.
Try not to mess it up. Just on the customer front, there's been a lot of talk about hyperscalers -- you've got the partners there. Obviously, there's a lot of attention in the AI industry on these cloud AI companies, you know, CoreWeave, Lambda, et cetera. Can you talk a little bit, just at a high level, about what happens going forward: if those companies do better, is it better for NetApp? Is it worse? If they go away, is it better or worse? Maybe just talk a little bit about how the interaction is different. Do they need more technical capabilities, so maybe it's better? Anything you can parse out on that different customer base, because they have gotten pretty meaningful, at least on the GPU side.
You wanna cover that, or you want me to?
And if you wanna throw sovereign in there, that's another one, too.
I mean, you've got us pausing, which means it's a good question, right? Look, I would start off by saying that, of course, these are service providers, and the way that we tackle service providers is very different from the way we tackle customers. It becomes about joint service creation. So how do we help the service providers create differentiated value through their services? And, you know, that's where a lot of our value add comes into play, right? If it's commoditized storage services that the partner is offering, then it becomes more difficult for a company like NetApp to differentiate our higher value services. But we're really good at that, right?
I mean, this is not a concern to us. What we tend to see, interestingly, with some of these big hosted AI providers, is that the customer base they're going after tends to be larger enterprise. Larger enterprises are the ones that actually really appreciate the value of the data manageability features that we bring to the table, right? They're not just looking for scratch space. They're looking for rich data environments that protect their data, classify their data, et cetera, et cetera. What I would say is, I think we are very well positioned to go after those areas as they continue to mature. I think in their current state, they're very much just price, performance, you know, raw horsepower GPU, but as they get consumed more by enterprises, they're gonna want those enterprise features.
I think we're in an extremely good position to take advantage of that.
Yeah, and we're already in talks with a lot of them. We already have customers in APAC who are doing GPU service provider type services, as well as some in North America. Now we're talking more because they're starting to realize that if they want to offer enterprise-level services, they need the multi-tenancy and the security features and all the sovereignty that you talked about. And this is where our value kicks in, right? Us being in the three hyperscalers didn't come without work. And I think our partnership with the hyperscalers, first of all, puts us up front. And with these new GPU cloud providers, we are in talks so that we're building the similar multi-tenancy and security features that they need.
Don't forget, also, that us being in the industry for 30+ years means there's a ton of customers out there who store their data on NetApp. And if these service providers would love their GPUs to be consumed, that data needs to come from somewhere, and in the same way we did with the hyperscalers, where we provided that hybrid Data Fabric, we have great value to add there, because no one is going to bring their data easily today. I think that is one of the biggest -- you asked about the roadblocks, right? It's that everyone is holding so tight to their data that the goodness of AI is not showing up, and this is where we want to untie that gravity. We call it stripping data from gravity.
All right. Well, Billy tells me we have time for one more question.
Oh, no, no.
Billy's in charge of everything.
Bonus question.
All right, so Wamsi, I'll give it to you.
All right, thanks, Kris. Yeah, I was just curious about, you know, the journey over the last six years. You said it was deep learning and machine learning which initially got people really excited, but then the deployment challenges -- everything kind of just didn't pan out quite the way people wanted it to. I'm just wondering, when you think about the selling motion at that time, who were you selling to? Who in the business was driving that, versus now in GenAI, who in the business is driving it, and how is the sales motion today different?
Yeah, good question. You know, as I mentioned, six years ago we were selling to cardiologists, so the majority of it was line-of-business owners. IT didn't have budget for AI at that time. As we moved forward, IT started getting more control of that, because one of our customers just woke up and found a whole bunch of computers and storage sitting somewhere in a room. With GenAI, and especially with cloud playing a big role, Wamsi, IT is getting more engaged. Now the line-of-business owners are the ones pushing down the agenda, and IT is trying to figure out how to implement it.
Whereas in the past it was line of business, line of business, and IT just watching and seeing what happened over time. That's actually why NVIDIA appreciates our partnership: because of our huge experience in the data center. You know, one of the things NVIDIA wants is to open up to the enterprise, and with our expertise in the data center and in IT, that is where the partnership becomes very important. And now the line of business, yes, they have the idea, they have the budget, the board is pushing down, but it is still landing on IT to deliver the infrastructure. So that's the evolution happening. And the cloud, actually -- GenAI and all the clouds -- made it pop, made it more real for IT.
I mean, the last thing I'll add is that, you know, we are going to see more customers that just want to consume AI as a value rather than from a development perspective. And that, again, has much to do with the fact that they don't have the data necessary to do things from scratch anyway, or the sophistication. So for those more turnkey solutions, there'll be things that IT is going to be responsible for delivering, like copilots, agents, enterprise knowledge management, et cetera. And then it'll be the lines of business that come in and want the highly verticalized solutions. But again, those are going to be more commercial off the shelf. I mean, I think that's the mass market here.
I mean, to give you an example: within NetApp, we use our own ChatGPT version, right? I'm a line of business within NetApp, if you think about me. I didn't develop that tool myself. It came through our IT and the data science teams, who said, "Hey, here's a tool you guys can go and use." So our own example is a great way to see it. Now, if you're just on your own and you want to use AI to write a paragraph for you, you don't need your IT. So it depends on the use case and what we're talking about here. That's why I said small to medium businesses are still not doing that much, and then you come into the wider enterprise and the top 50, and that's where we see the curve growing, right?
All right. Well, thank you, guys.
Thank you.
Thank you so much.
I really appreciate it. We're gonna take an 18-minute break now, and we'll be back at 11:10 A.M.
All right, welcome back, everyone. I'm glad you made it back from the break, so now we're going to switch gears and talk a little bit more about core storage and all things ONTAP, so I'm happy to introduce Sandeep, who you guys saw last year, if you were here, and so before we get started, why don't you introduce yourself, say a little bit about what you do at NetApp, and then we're going to open it up for questions.
Hi, everybody. I'm Sandeep Singh. I'm the Senior Vice President and General Manager for Enterprise Storage at NetApp, and I look after our portfolio of on-premises products. I've had the pleasure of being at NetApp for almost two years now, and over the course of this time period I've probably spent a little less than 50% of my time traveling, meeting with customers and partners, and also remaining focused on continuing to rapidly expand our portfolio to deliver unmatched simplicity at scale, as well as transformational flexibility, across our unified data storage portfolio. With that, I will open it up for questions.
All right, well, I know we have a question from the webcast that came in earlier.
Yeah, it was earlier, on block. So I'll read it out, and I think I can give a little more color to it. How have NetApp's share gain views evolved for block storage as we've released new block products? And then can you talk about the install base leverage -- the fact that we have an existing install base already, has that improved our block traction? Or can you talk about success in block-only environments that results in net new customers for NetApp?
Yeah. So first of all, we already offered block storage capabilities with our unified storage portfolio. Last year, we expanded our product offering with our all-flash SAN arrays, or ASA Series of products. When we initially looked at overall block storage, we had roughly about 20,000 customers who rely on NetApp and trust us in some shape or form with their block storage workloads. We announced the ASA A-Series last year, and then the ASA C-Series, and they continue to see great adoption across the board.
From a use case standpoint, across our existing customers and net new customers, the way to think about the adoption of the ASA Series for SAN workloads is kind of threefold. First, in terms of our overall installed base accounts, where we've got thousands of customers that trust NetApp and love their ONTAP experience: with ASA, we're helping them bring that ONTAP experience to their block storage workloads. That represents an expansion in terms of adoption of the block storage workloads there. And the importance, the value for the customer, becomes what I call simplicity at scale. What does that mean? If you think about customers, they are dealing with complexity that gets compounded through bespoke infrastructure silos.
When you think about the infrastructure silos across their overall NAS and file environments, and then start to expand that into silos of VMware or database application workloads, when it's a bespoke infrastructure silo, they're having to deal with separate, inconsistent management, inconsistent automation, inconsistent data security models, inconsistent operational recovery workflows -- an inconsistent overall experience. We're enabling them to bring our ONTAP experience to their block workloads and then get consistent management and automation that overcomes the talent and skills shortages, right? It gives them one consistent data security model they can trust, it gives them comprehensive yet consistent operational recovery workflows, and it gives them an overall consistent experience. So that's for our installed-base customers, and that's incredible value for them.
Secondly, for a lot of the customers that are still working through what the right future is for their overall VMware deployment, we're helping them come over, offload data management, and through that get upfront savings in their VMware environment -- up to 25% savings there -- and then have this unmatched flexibility for the future in terms of the hypervisor, the container environment, and/or the cloud storage options. So that's the second key area where we see customers adopting NetApp in their block storage environment.
The third key area is essentially that customers have all these hybrid disk-based block storage systems -- there are lots of them -- and they're looking to modernize them to all-flash, and to do that affordably. This is where finance departments and CFOs are still requiring and asking IT to continue to lower IT budgets. We're enabling them to modernize to all-flash affordably. As part of this leverage and flexibility that we enable customers with, they'll start talking to us, for example, in file environments, and then recognize that there's this flexibility of bringing a consistent experience to the block storage environment. So it gives us this optionality to expand into those options. I was just learning about a recent win that was exactly that use case for a customer.
In other environments, once they've recognized, "Well, I've got this standalone block environment, and I can get simplicity at scale," that's helping us go out and win. And we now give them an overall flexibility of having an end-to-end ASA portfolio that they can leverage us with in their block storage environments.
Hi, so maybe, on that front, when you talk to customers today about refreshing their storage infrastructure, do you still feel like you need to convince some of these customers to move from disk drives to flash? Or is that a decision they've already made, and really the conversation is within that portfolio, convincing them where you stand relative to the competition? And the second part to that: when you talk about block, how do we get comfort that your customers on the block side are not just moving over from the unified storage they were using previously, and that these are actually incremental opportunities that you're capitalizing on?
Yeah, great set of questions. In terms of disk to flash, at the larger scale there is a shift happening, where customers are optimizing for flash more broadly. When you look across our portfolio, we've just got a fantastic set of all-flash options available to customers. With that said, what matters to customers, especially with the unpredictable macro, tight IT budgets, and IT budget scrutiny, is this notion of high quality, lower cost solutions. And when you look across the data life cycle, having the lowest cost of data over its life cycle is important for customers.
So where we continue to see great adoption across the board is, you look at primary storage, and that's typically going to be all-flash, right? But when you look at the data, anywhere from 60% or more of that overall data is going to be cold data. Being able to automatically, based on policy, tier that data to lower cost storage options gives us the advantage of an end-to-end portfolio underpinned by ONTAP: we can seamlessly enable customers to adopt us for their primary storage workloads, yet also ensure that we're optimizing the cost of data, with hybrid flash options as well as object storage, with StorageGRID options available to them.
And then--
Mic.
How do we get comfort that the block customers are not unified customers moving over?
So in terms of the block customers, you will see a combination of the following. There are many customers who have their file and block environments in the same environment. They may have a large file environment and a smaller block environment, or it could be vice versa. This is where unified storage provides them the best option: essentially having file and block on the same shared underlying infrastructure. We provide that, it's available to customers, and lots of customers are leveraging that capability.
On the other hand, if customers have a separate file environment and a separate block environment, this is where AFF for the file environment and then ASA for the block environment is the perfect fit, and this is where it becomes a net new expansion opportunity or a net new logo opportunity for us. So unified doesn't necessarily substitute for the ASA use case, where it's that standalone overall block environment they're going for.
I know. Understanding that you've kind of been competing against, you know, folks who were more upstarts -- for your C-Series product, you were competing against storage startups that didn't have the full portfolio. But a number of your competitors have announced that they intend to offer QLC products. How do you envision that changing the landscape? Do you envision that it might change evaluation times? Or do you feel like the argument has already been made, or the inroads have been made?
Yeah. So in terms of capacity flash, underpinned by QLC technology: first of all, we have just continued to see fantastic adoption of C-Series across the board. Yes, some competitors have gone and announced products. We have not seen that make a difference. Why? First of all, when you think about the customer's needs, when they're putting, let's say, their tier-two, capacity-focused workloads onto C-Series, data management is still critical. We are bringing this notion of comprehensive data management and making it available even in the capacity flash series. Secondly, and similarly, in terms of our data management, when you think about ransomware and cybersecurity protection, detection, and recovery, that is critical for customers across all of their data.
This is where we're also really changing the game by taking what's typically post-process detection to real-time detection in a matter of seconds to minutes. We're able to do that with a design target of 99%+ accuracy, which minimizes false positives while still providing accurate detection, right? Earlier this year, we became the first storage vendor to get the AAA rating in SE Labs validation with this ARP/AI technology, with that 99%+ overall detection rating. That becomes critically important for customers as an overall priority, and then you couple that with the ability to rapidly recover from ransomware attacks. So that's another area.
I talked about the complexity challenge and how customers can simplify at scale. Rather than continuing to propagate bespoke infrastructure silos, customers are increasingly thinking end to end -- not just about a point solution -- asking what matters most and how to continue to simplify and get this notion of simplicity at scale, which only NetApp is able to deliver across a fully interoperable portfolio underpinned by ONTAP. Those become some of the prime motivations as customers are thinking through capacity flash options. The last part I'll touch on is the economics: being able to cost-effectively tier the data over the data life cycle and still get the lowest cost of data.
This is another one where in the competitive products, you end up getting bespoke silos versus a fully interoperable portfolio.
Hi, thank you. Wondering if you could discuss a little bit this tug-of-war over the years between, you know, movement to cloud and repatriation of some workloads, data sovereignty and whatnot. If you could just give your perspective on where we are now, because we hear a lot of examples of both. And then related to that, as we move further along in AI, do you think the calculus around that tug-of-war changes at all? Thank you.
In that tug-of-war, I think across the spectrum of thousands and thousands of customers, we will see all of the above. We continue to see customers who are adopting the public cloud storage options. We've seen a lot of customers who are adopting the hybrid cloud storage options. And we're also hearing about some customers who, as they have scaled workloads, and increasingly with the overall cost pressures, are repatriating their workloads.
What is important, and what we're focused on, is ensuring that our customers have complete flexibility: market-leading offerings for the on-premises options, whether in a CapEx form factor or storage as a service; the leading public cloud storage offerings, the first-party native cloud offerings underpinned by ONTAP, where NetApp is the only one with first-party native cloud offerings across each of the hyperscalers; and the necessary technologies for the hybrid cloud use cases, which is inclusive of the secure and efficient data mobility that becomes critical for customers across the board.
So we are ensuring that we have the necessary choice points, as and when. Whether they are going to leverage the agility of public cloud, seamlessly moving their workloads while still getting the enterprise resiliency and the data management capabilities that they value; whether they need the hybrid use cases, which we're enabling as well; or whether they're looking at repatriating, where we're providing the right options for them. We feel incredibly privileged to be in a position of providing this unmatched flexibility for customers and being that partner no matter what their use case is.
You asked a follow-on question tied to AI: how does that change the calculus? When we think about AI -- and hopefully this is building on the prior session -- we're going to see customers leverage, first of all, a lot of their enterprise data that sits on premises. In the enterprise AI use cases, ultimately, customers are looking at how they can truly unleash the power of AI and GenAI with the context of their enterprise data, right? So we're going to see a lot of that happening. We are enabling customers to have their data be AI-ready, as well as bringing AI to their data -- closing what I call the AI data gap. So that's kind of the first area.
Secondly, you will see, obviously, lots and lots of investment in AI and GenAI tool sets that are available in the public cloud. We want to ensure that we are providing the flexibility of a seamless experience for customers to leverage their data and make it available to their AI and GenAI tools in the public cloud. That's building on some of the announcements that we have made with, for example, the AWS Workload Factory, or BlueXP Workload Factory, and various integrations that we've demonstrated with the AI and GenAI tool sets in the cloud.
All right. Well, Sandeep, since the audience is reticent, one question I get from investors all the time is: do disk drives still have a place in a modern data center? Maybe you could talk a little bit about that.
Yeah. So great question. Look, in terms of the overall media underlying it, it becomes important when customers are thinking about the cost of storage as you think through the life cycle of data. In terms of the disk drive -- that hybrid flash storage -- what you're going to fundamentally find is that it is still the lowest cost of storage you can get from an on-prem standpoint. So the short answer is yes, disk drives still have a specific use case that they serve in customers' data centers. When you look at it from a raw overall cost standpoint, flash is still not at a point where it can substitute for disk-based storage.
So what becomes important is when you start to think about optimizing the cost of data through tiering and backup target use cases. When you think increasingly about cyber vault use cases as well, ultimately you need the economics to be there, and that's where the disk-based options are still important.
Can you talk a little bit, maybe, about where you think we are in the cycle of customers sweating their on-prem assets? It feels like storage spending has been relatively muted now for a fairly long period of time. Are you seeing any green shoots, things starting to pick up in that regard? Or, given that you have a pretty good view into your customers' utilization rates and capabilities, how much longer could that extend out?
I would say, look, overall, when we look at the adoption of systems, whether it's on the AFF A-Series side or the C-Series side, we're continuing to see overall strong adoption. And, you know, on utilization rates -- when you think about sweating out the assets, one would assume the utilization rates would either increase significantly, or the customers are sweating out the assets. I would say, generically, we're seeing that across the board. That points to customers having a need. Their data fundamentally continues to grow, and they have a need to continue to purchase more underlying storage capacity and the systems tied to it.
Hi, Sandeep. You mentioned as-a-service and consumption models. Can you talk a little bit about that? Some of your competitors, you know, harp on this a lot, but it doesn't seem like it's really happening at scale. So what's your sense of the demand or appetite for the different consumption models -- as-a-service versus CapEx -- more broadly? I'm sure there's some of each in your broader customer base, but could you give us a little sense of how we should think about that potential transition over time?
Yeah, absolutely. Look, various customers are transitioning to or looking at storage-as-a-service across the spectrum, especially when, for example, a refresh or a net-new purchase comes up. Keystone is our storage-as-a-service offering, and we're continuing to see fantastic adoption of it overall. I know we released some of the financial information as part of the earnings announcement. But we're focused on the enterprise segment, and in the enterprise segment, our position is that we want to provide leading options to customers, whether they are looking at CapEx, at storage as a service on premises, or at cloud storage-as-a-service options.
We're unique in being able to provide that complete flexibility with leading options to customers. We want to ensure we're not pushing customers one way or the other; we're providing them the flexibility to adopt the right options for them as they are ready. With that said, we are seeing great overall momentum with Keystone, and it's important not only to have that momentum but also to provide the service level that keeps customers building out their adoption of Keystone. So we're incredibly happy with what we're continuing to see there.
The other thing I'll say is that there are customers who want the agility of cloud and are on that journey, and we also see a cohort of customers who are leveraging and transitioning to storage as a service on premises as well. That's another one of those unique capabilities we're bringing and making available to our customers.
Thanks, Lou Miscioscia, Daiwa. We just had the AI session, which was very good, but if we could go back to that for a second. If you look broadly at your customer base, could you just break it down, how many are in proof of concepts right now? How many are maybe not doing anything, and how many have actually gone in size from proof of concepts to deploying? And if you want to throw time frames into that, looking forward, that'd be great, too.
That would have been a great question, actually, for the last session.
We did, but, you know, figure you're here, so.
The way I'll talk about it more generally is that we see an incredible opportunity, forward-looking, in enterprise AI. Enterprises across the board have been looking at how to take AI, and Gen AI, and use it for driving up productivity, for delivering net-new customer experiences, and/or for enabling net-new areas of innovation. Many have been looking at it holistically and in proofs of concept, as well as at the specific use cases that are going to be incredibly important within their context. What becomes important is this notion of an AI data gap that I touched upon, where customers are challenged with: I've got multiple data science teams, so how do I ensure my data is AI-ready?
How do I ensure the right datasets are available to the right data science teams? How do I ensure the right levels of privacy and security are assured? And then how do I ensure overall efficiency, and that model versions and the datasets associated with them are available? Those become incredibly important for customers. This is where helping customers bridge that gap, being able to discover their data, prep their data, and make that data AI-ready, becomes important. Second, being able to bring AI to their data also becomes important in bringing overall efficiency to that end-to-end AI life cycle.
So when you think about the spectrum of how we're helping customers: one set of customers has been the overall AI-as-a-service or GPUs-as-a-service providers, and a lot of that use case is foundational large language model training. In terms of our portfolio, we're finding that those customers want scalable, very high levels of performance. This is where we have the NVIDIA SuperPOD solution, with BeeGFS plus our E-Series, that customers are adopting. That's one use case.
When you think about the enterprise AI use cases, not a lot of enterprise AI is going to focus as much on large language model training, yet there are still customers that are going to do large language model or small language model training. So we've got the SuperPOD, or the AI Pod solutions, in place for them to leverage. And then we also see that when customers are at the beginning of that journey, in the data prep phase, modernizing their data lakes, that usually takes them to object storage, and that's where our StorageGRID offering is seeing great adoption.
And then when you start to shift to the retrieval-augmented generation, RAG, use case, or to inferencing or fine-tuning with their enterprise data, this is where a lot of the data management challenges I was touching upon become critical. We've also put together a converged solution, the AI Pod with Lenovo, with the NVIDIA L40S GPUs and OVX, which we announced back in May. That provides a converged infrastructure stack they can use for RAG or inferencing types of use cases. And then our FlexPod solution is also being used in those instances.
Actually, we're up on time. So thank you, Sandeep. I really appreciate you coming here.
Thank you, everybody.
Sorry, I just know he has somewhere else to go, and I don't want to be the inconsiderate one to make him late for his next meeting. So I really appreciate it.
Well, thank you.
Thank you. But I think whatever questions you didn't get to for Sandeep and still have outstanding, Jeff can probably handle. So I'd like to introduce Jeff Baxter. He's our VP of Product Marketing. He can kind of handle a broad swath of all your questions. I know every session ended a little bit early for your taste, so you'll be able to pose all the questions to him. But before we open--
I'll just reflect whichever.
But before we open it up, Jeff, why don't you introduce yourself?
Sure.
Say a little bit about what you do and your time at NetApp, which I think helps provide the context for why you can handle such a broad set of questions.
Yeah, sure. So thanks, everyone. I'll just take a seat, if you don't mind. So my name is Jeff Baxter. I run Product Marketing here at NetApp. Before that, I ran Product Management for ONTAP, which is the operating system that powers a lot of our on-prem systems as well as our in-cloud systems. I've been with NetApp now for sixteen years, so I've gone through several transitions at NetApp. When I started at NetApp, I was actually in sales, as an SE. So I was out selling our actual individual storage systems, and we were a storage company.
And with the evolution since then, going out into the cloud, and now the evolution we're embarking on with AI, the company has transformed two or three times over, as I think all of you have seen, and it's been a remarkable ride. I've been privileged to be a part of it. I've been in all different parts of our business, gone from sales to product management, and now I run our product marketing as we continue to reinvent ourselves as the Intelligent Data Infrastructure company. So that's a little bit about me.
I don't know if Kris is going to like this question, but it's about--
If she tackles me off the stage, we'll know.
It's about pricing. So there's been a lot of-- I mean, obviously, you guys don't price per gigabyte or terabyte or anything like that.
Right.
But there's now a lot of storage as a service, there are a lot of new applications, and there's more value being placed on ROI for AI. So as we enter sort of this new phase with your customers, how does pricing change for the better? And I'm sure there are some reasons it changes for the worse. How does your pricing power look over the next few years?
You go on that one?
So I think you can talk about big trends. Right? What you see in the industry. ONTAP One would be a great thing to talk about.
Yeah. So I think in general, we're trying to be cost competitive in the industry. We definitely are an option where we add a tremendous amount of value-added services directly into the operating system. ONTAP One is our ability to add all the data protection you would need directly into our storage systems, and more recently we've added anti-ransomware capabilities directly into our enterprise systems. So I think we see customers terrifically valuing what we're offering. As for the secular trends like AI and others, I'm not going to comment on what pricing will do there. I do think we continue to be cost competitive in the market, and we continue to see wins against competitors.
A lot of those wins are based on value and based on cost. For our customers, when they look at our price tag, the important discussion is really around TCO. As you said, it's not about dollars per gig. With storage as a service, customers are actually buying a given service level through our Keystone program. When customers still buy through a traditional CapEx model or a leasing model, they're really buying based on outcomes rather than raw dollars per gig, and when you build in best-in-class storage efficiency, build in the data protection, and build in the simplicity of the operating system that I assume Sandeep talked about in terms of making it simple at any scale, all of that tends to really drive down TCO.
So one of our best sales tools is to sit down with customers and walk through a TCO calculator, putting in all their assumptions, including power; being far more power efficient in their data centers, for example, can drive cost savings. When we get to the end of that, we almost always find that NetApp is the more efficient solution over the long run from a TCO perspective.
Great. Thanks. Hi, Jeff.
Hi.
I also have a pricing and packaging question that I actually wanted to ask Sandeep, but I think, as Kris said, this might be a better question for you.
Sure.
So if you think about the storage market in terms of P times Q. I mean, we're all investors or the investor community here, right? If you think about it in terms of P times Q, it's been pretty fascinating, because the P, the unit price, whether per gig or per terabyte, it doesn't matter, has come down significantly, but it seems like it's kind of hitting a floor. And I'm talking specifically about the all-flash side. It's like there's only so much more to go, perhaps in the near term. Meanwhile, the Q has increased dramatically, obviously off a smaller base. So if you think along those lines.
So the question for you is: if you were to apply that to the enterprise side and also the multi-cloud hyperscaler side, how does this play out? That's one part. And then the second part is: if you're just about there in terms of the P, are you thinking about programs you can work on with customers? Because their wallets are fixed, or at least not growing nearly as fast as the volume growth is.
So are there programs you're considering where you can maybe help them bridge the gap for the near term, so that once you get past that hurdle, it unlocks this massive opportunity?
Yeah, so that's a multipart question. I think you're right, the data growth is outstanding, especially going into the AI era with the massive growth of data there. So I won't comment too much on future pricing or where we see that affecting revenues, 'cause Kris will tackle me off the stage. But what I will say is, for a lot of those customers with that concern, that's really where they're looking at storage-as-a-service offerings and being able to apply certain service levels, which tend to insulate them a little bit from variations in pricing and other things like that. They're buying a service level.
And the other thing for a lot of our customers is that, when they're not sure what the rate of data growth will be as they enter the AI space, an alternative to pre-purchasing CapEx multiple years out is going with storage as a service, which lets them grow as they need to and match their cost curve more closely to their actual consumption. So I think they're addressing some of that variability through the continued growth of our storage as a service. That's the main thing I would answer out of that.
Over the past couple of years, there's been a lot of change in go-to-market, you know, maybe different incentives for cloud sales. I just wonder how that's changed product marketing. Has the marketing to the customer changed, or has it really been more on the go-to-market side?
Yeah, that's a good question. The marketing has definitely changed over the last 10 years. We actually went through a shift probably six years ago. I remember being at Insight, I want to say 2014, when we first talked about putting ONTAP on the cloud and first did a demonstration of ONTAP running on the cloud. So we're now literally at a decade, and it was five years ago that we unveiled Azure NetApp Files and became a first-party native cloud service. I think at the time, everyone saw us as a storage company. And so from a marketing perspective, we went incredibly hard at being a cloud-centric company, really getting that message across and getting that ramp started with cloud.
Where we started to rebalance over the last couple of years is bringing it back to that balance Sandeep talked about: basically all of our customers are in some way hybrid multi-cloud. They're either starting their journey or well along it. We've been able to really embrace that, and you've seen that change toward where we talk about ourselves as the Intelligent Data Infrastructure company. There's nothing in there about cloud or on-prem storage; it's about data infrastructure, regardless of where it lives, and about applying intelligence to it. So that really is the message we've taken out to market. And I can honestly say that even within my team, for example, our solutions team, our launch team, and everything else are integrated across cloud and on-prem. It's not separated anymore.
So we certainly have subject matter experts, like I'll have a marketing person on Azure NetApp Files, and I'll have another marketing person on Sandeep's AFF systems or ASA systems. But when it comes to how we bring those solutions to market for a customer in terms of a VMware solution or Kubernetes solution or database solution, all of that is horizontal, right? All of that is what's the right solution for you. And so when we go talk to customers now, it's not about, "We've got a great on-prem solution for you, and let me bring in a sales specialist to talk to you about a cloud solution." It's: "We've got a great data infrastructure solution for you.
Let's work with you on where your choice of data needs to belong, and we can help guide you along that path and use AIOps to look at your workloads." We actually have cloud advisors that will look at workloads and say, "This one's cloud ready. This one you could put in the cloud but might be more of a lift." And we can actually guide customers in taking that journey. So I think that's the short version of how it's changed the marketing, is it's really talking to customers about data infrastructure as opposed to either being a storage company or being a cloud company. We've elevated really the entire message.
Hi. A more on-prem question, in terms of how you think about mix within flash changing with your customers, particularly customers who used to use more high-end flash products now moving more toward mid-range products like the C-Series, et cetera. How material could that mix shift be, whether because you didn't have those products earlier in the portfolio, or because, for a variety of reasons, they were more performance conscious and were using more high-end products? How do you think about that mix shift, how material it would be, and what would be the counterbalancing forces there?
Yeah, so I can't talk directly about how material that shift would be. Unfortunately, that would be more a Sandeep question, or one for others. I can say I think we're achieving a healthy balance in customers. Customers are adopting the whole breadth of our portfolio: performance flash, capacity flash, even hybrid flash. So I'll just say I think we're seeing a good balance there, and I can't really go much farther than that as to how it would shift.
Thanks. Lou Miscioscia with Daiwa again. If we think of AI, before the NVIDIA moment last May and afterwards, maybe you could talk about your product marketing budget, however you want to define it: people, whatever. I guess what it was before, which I assume wasn't very high, versus where it is now in proportion to whatever else you're doing.
Yeah. I think, like everyone else, we've seen the tremendous opportunity for AI, both in our customers and in our own business. There's been a significant shift in interest and investment in resources toward marketing AI, for example on my own team. I won't get into specific numbers, but it's been a significant burst of resources moving toward that as an investment engine, adding additional specialists. One key thing with AI is we have to be able to market to multiple different segments, including the infrastructure buyer and the data scientist buyers, some of whom don't know each other within the same customer. Building messaging out to the market that embraces all of those different personas has definitely required additional investment.
Without previewing it at all, I think from the keynotes today and tomorrow you will see just how much focus we've placed on AI. Over the next day or two, we'll have announcements that will, in some way, concern AI, and you'll see a lot of the marketing messages that come with those and how we unveil them on stage. That's going to be a key focus of our selling and our campaigns over the rest of this fiscal year and well beyond.
Hey, Jeff.
Hey.
In the beginning, when you guys were working with the hyperscalers, there were some fits and starts. It was a different motion, you know. And so can you talk a little bit about the learnings, and are all three now where you want them to be? Maybe talk a little bit about specific programs on how you've been able to get through the friction and kind of jointly sell these solutions to the end customers.
Yeah, so I'll give you what I know. Unfortunately, I'm not the subject matter expert; I think you had Pravjit up a little bit earlier today, who would maybe be better placed for those sorts of questions. I think it took a while for us to learn how to sell an offering that was not a direct SKU on a NetApp price sheet. We saw initial tremendous growth from NetApp customers who just wanted to run ONTAP in the cloud, and those were the very early versions. But when we moved to a first-party native service and co-selling with Microsoft and Amazon and, more recently, Google, it took a lot of work on the go-to-market side to figure out how to do that as a co-selling motion where we could add the appropriate effort in.
And I think we've resolved most all of those issues. The go-to-market team, between Pravjit working with Ashish, Dallas, and Cesar, and really the whole team, has figured out how to make that work moving forward. So I won't talk about the future, but I think we're really satisfied with the progress to date of where the cloud business has gone.
All right, well, I have a question for you, and I'm going to build a little bit on Samik's question.
Okay.
I get this a lot from investors: we have the A-Series, the C-Series, and the FAS products. They're all ONTAP, all unified. How do customers think and choose between them? How does the product positioning work across those different families?
Yeah. So at a high level, the A-Series product positioning is all around mission-critical, incredibly performance-sensitive applications. From a technical perspective, that's sub-millisecond, typically sub-500-microsecond, latency on those A-Series high-performance systems. The happy middle ground is the C-Series, the capacity flash systems, where you're getting low single-digit milliseconds of latency, so two to five milliseconds, and that's really where a lot of customers are finding the vast majority of their workloads. And I think that's one learning over the last decade of the move to flash: we went from hard drives straight to incredibly performant flash. In some ways, as an industry, that was almost gold-plated. You almost overshot the performance market because that was the only alternative.
There was no middle ground along the gradation. Capacity flash (and I think a lot of our competitors have not caught up at all here) has allowed us to go in and capture that middle ground where customers were actually being overserved on performance, overserved on latency. In doing so, that's one of the ways we've been able to achieve cost savings for them, where they were literally overprovisioned in terms of latency. So that's where the C-Series stands. And then FAS and hybrid flash is increasingly moving off of primary workloads; it hasn't moved 100%, but it's pretty much moving in that direction.
We tell customers in general, with only a few exceptions, that you're not going to want to put primary workloads on disk any longer, but disk is a fabulous place to replicate to and a fabulous place for disaster recovery. One place we're putting a lot of focus is cyber vault solutions. We announced one of those earlier this year, and I think you'll continue to see things at Insight about them. You'll continue to see us talk about it, because we increasingly have customers asking how they can get logically air-gapped solutions that keep the data cold and completely isolated from attack, and that's a perfect application of disk. That's something we can do within our architecture, because our disk runs on the exact same architecture as all of our other products.
The final point I'll make, to Kris's question, is, well, two points. One is we have a whole bunch of tools, some that customers have access to from an apps perspective if they're an existing install-base customer, that will guide them toward what sort of system would be right for a refresh. And then we have a wide variety of tools available for our partners and our internal NetApp sellers, who will specifically say, workload by workload, performance by performance, this is the exact right platform to position for this one. And the good news is the capabilities are all the same, so they can still sell the value proposition of Intelligent Data Infrastructure.
Which platform or system to host it on, on the back end, is more of a speeds-and-feeds and architecture discussion, as opposed to a start-of-the-conversation discussion. Finally, we have customers who have all of the above. Most of our large enterprise customers are going to have a mix of A-Series, C-Series, and FAS, sometimes mixed together in the same cluster even, seamlessly, using AIOps and tiering data between all of them. The idea is the data can flow wherever it needs to at a moment's notice. Data can start incredibly hot on the A-Series, sit there for a user-defined cooling period, a week, a month, whatever, and automatically flow down to capacity flash.
It could sit there for another quarter, wait until you're past quarterly earnings or whatever reporting period you're in, and then tier off onto FAS and stay there for the long run. And if that data ever needs to be accessed again, it can seamlessly flow right back up that river. So by building that sort of combined infrastructure, customers are really able to optimize their service level directly to their cost.
Can I ask a follow-up to your question there?
Of course.
'Cause you have an interesting perspective on this. So, like, I think there's a perception in the investment community that our all-flash growth is a transition of business from our FAS hybrid, or HDD-centric, business to the C-Series, and so the all-flash is just a transition in revenue. Could you speak to that C-Series business? You mentioned at the start of your answer that you thought there was a hole, in terms of latency, that that product would have addressed. Do you think it is bringing net-new customers to NetApp? Or do you think it just smooths out the product offering from low end to high end, and so what's really going on is we're just transitioning people into a product that we didn't originally have?
Yeah. So I think it's more than just a transition. We do have customers from competitors who are sitting on legacy disk arrays, for whom the C-Series opens up new opportunities to transition. They weren't able to transition to the A-Series at its price point; they didn't need that performance point. And so we've been very successful going out and penetrating sort of the mid-range of some of our competitors on their refresh and being able to move them over. So it's not just a refresh of our own internal disk. The other thing I'll say, building on my previous answer, is it's not like disk is going away entirely, so it's not a one-for-one replacement. And, you know, a gentleman asked earlier about the data growth.
Some of that data growth is in primary data. A huge part of that data growth is also in secondary data, because secondary data can now be used for building data lakes and for data analysis. You need a third or even a fourth copy of data for some of the regulations, and to have it locked in place in a cyber vault. So all of that growth in data can be served by the FAS market. To the extent that some of the FAS market is moving up to capacity flash, which it certainly is, there are plenty of workflows flowing in on the back end to keep the FAS market moving as well. So without going too far into the future, I don't think it's a one-for-one refresh.
I think that there's legs on both of those for many years to come.
You guys have talked about the siloed people that you compete with.
Yes.
Is there a way to think about where that evolution is? You mentioned the ability to supply customers so they can go up and down depending on data use. Like, how many customers out there, whether large or small, are still very much siloed and have to overcome that, especially in AI, versus how many have already made that transition to being more flexible, where your vision fits?
I mean, there's a joke: just look at our market share. But that's probably not what you're asking. You know, that's a good question, and I don't think we fully know the answer. I do honestly think, all joking aside, none of our competitors have a unified stack. That has continued to be the NetApp differentiation. There are a couple of competitors that, without going into names, just focus on one niche, so they just have one offering, and that's fine, but I wouldn't call it unified in terms of being able to handle structured and unstructured data, object, in the cloud, and on-prem. So almost by definition, any customer we deal with who is not a NetApp customer has some degree of fragmentation. There are NetApp customers who have fragmentation, too, that we're working with.
That's one of the reasons we have our BlueXP unified control plane and are bringing all of it into our common orchestration set. Just about every customer we run into who isn't a NetApp customer has some degree of fragmentation, unless they're a very small or niche-focused customer. It's very rare for me to run into a customer these days that doesn't have multiple storage arrays from multiple competitors lying around in some state or other, arrays they just have not been able to unify, because each one addresses a very specific need, and they hadn't been able to find anyone, until they met NetApp, that could meet all of those needs out of a single operating system and a single set of appliances.
Hi, thank you. Kris may get mad at this one as well. I just want to go back a little bit. Before you had QLC, you characterized it as: if someone was refreshing a hybrid or disk-based system and they wanted to go to flash, they would be overserved, more or less.
On latency, yeah.
On latency. But my guess is they wouldn't want to pay for that. So what was the NetApp strategy then, before you had QLC? Was it more discounting? Was it more share losses? And if we take it to now, is there a risk that we start to see more workloads on the hot side go to QLC because of macro, where customers just deal with the latency or the shortcomings? Is there a chance of a bit of a price change or whatever, because QLC might be good enough for a little more of the workloads than what we're seeing today?
I don't know about price changes, and I won't go into those. I will say, going back several years, I don't think there was significant discounting on account of customers being overserved, because people didn't necessarily realize there was any middle ground. And so the market kind of found the price point that was right, as it's wont to do. I do think the fact that NetApp had hybrid flash offerings that were still incredibly performant, and that FAS business that continued to perform, meant there were options for customers. Whereas if they went to some of the all-flash competitors, they really didn't have an option: customers either had to pay the higher price tag, or those all-flash competitors were pricing, I'll just say, significantly below market at that point. So I think you saw the success story with our ramp to all-flash over several years and how fast we grew in all-flash, and I think we did it, without going into detail (you can go back and look at earnings statements from then) quite successfully, so I don't think it depressed earnings there. To the second part of your question: as we continue to rapidly grow C-Series, will there be customers that refresh their A-Series over to C-Series? Probably, but that would likely have been a natural transition regardless, and I think we're continuing to see more and more high-performance workloads grow.
I mean, if you look at these high-performance LLM training models, if you look at high-performance inferencing, different things like that, there's a whole new generation of workloads that are no longer overserved by A-Series. So as much as there's the potential for some moderate shift of those overserved workloads down to capacity flash, there's as much potential, if not more, for new workloads coming in that are incredibly latency-sensitive and want incredibly high throughput. If you're hooked up to an NVIDIA DGX system, for example, the cost of keeping it waiting is far in excess of the price differential between a C-Series and an A-Series system.
So customers who are spending millions of dollars to build out that sort of training environment for NVIDIA are more than happy to pay that premium between the types of flash to avoid any cycles of their GPUs sitting there in an I/O wait state.
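Jeff's point about GPU wait cost is easy to sanity-check with back-of-envelope arithmetic. The numbers below are purely hypothetical, not NetApp or NVIDIA pricing; they just illustrate why a performance-flash premium can pay for itself on a busy training node:

```python
# Back-of-envelope illustration (hypothetical numbers): compare the cost
# of GPU idle time caused by slower storage against the premium paid for
# lower-latency flash.

def idle_cost_per_year(gpu_hourly_rate, num_gpus, io_wait_fraction,
                       hours_per_year=8760):
    """Dollars of GPU time lost to I/O wait per year."""
    return gpu_hourly_rate * num_gpus * hours_per_year * io_wait_fraction

# Assume an 8-GPU DGX-class node costing roughly $40/hour all-in, and that
# capacity flash would leave the GPUs idle 10% of the time versus
# performance flash (both figures are assumptions for illustration).
wasted = idle_cost_per_year(gpu_hourly_rate=40 / 8, num_gpus=8,
                            io_wait_fraction=0.10)

# Assume the performance-flash premium is $100k, amortized over 5 years.
storage_premium_per_year = 100_000 / 5

print(f"GPU idle cost: ${wasted:,.0f}/yr vs storage premium: "
      f"${storage_premium_per_year:,.0f}/yr")
```

Under these assumed numbers the idle-GPU cost exceeds the yearly storage premium, which is the economic argument being made.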
I guess it's back to me. So you mentioned BlueXP, and I think that's probably not a well-understood offering from NetApp, at least among this community. So maybe you could talk a little bit about what it is, how customers leverage it, and the capabilities it brings.
Yeah. So BlueXP is our unified, multi-cloud hybrid control plane. And in some ways, I think we get less credit for it because it's been this very gradual evolution, right? It started back in 2015, 2016, as, like, Cloud Manager. It was just a way to deploy some instances of ONTAP into the cloud. Over the last three or four years, it's really evolved into having complete management of basically the entire NetApp estate. So a customer's entire data estate sitting there, not just on-prem, but in all the major clouds. So we have customers that have chosen NetApp because the whole is more than the sum of its parts, right? They're able to literally see their entire data estate at a glance.
And then the nice thing we've started to do over the last couple of years is add intelligence into it, right? So the whole Intelligent Data Infrastructure, yes, it's a marketing slogan, but each word actually means something, right? The data infrastructure is what we built up, not just the infrastructure on-prem, but the data across all of it. So we can actually see your data, we can see metadata, we can see what's changed about the data, and then we're adding the intelligence into it. So for example, we added a ransomware protection service that can understand not just individual LUNs or individual files, but actually understand workloads.
It will know an entire workload and where that workload lives, some on cloud, some on-prem, and it will be able to monitor it in real time for ransomware attacks and respond, allowing you to protect an entire workload in real time and recover an entire workload in real time, regardless of where that infrastructure is. It allows customers to not necessarily have to have an Azure specialist, an Amazon specialist, a Google specialist, an on-prem SAN specialist, an on-prem NAS specialist, and then a security team to come in and do the restores. You can literally have an IT generalist perform the job of so many of those specialists by orchestrating entire restores directly out of that operating system. You can drag and drop to tier from on-prem to the cloud.
You can drag and drop to tier from one type of on-prem to another type of on-prem in another data center. You can set up your disaster recovery in a couple of clicks directly within that interface, and it's all included, essentially at no additional charge to customers. Once they get in there, they're able to see the entire state of their fleet. We've added AI operations directly in there, so there's a digital advisor that will tell them if they have any risks, tell them about things they could ameliorate, tell them about the health of their entire environment, and it puts it all right there, you know, at their fingertips.
And that has been, I say, honestly, one of the more hidden, but one of the more sticky features of NetApp, because once you can get a complete unified view of your data estate, it gets very hard to want to fragment it out after that.
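The drag-and-drop tiering and DR described above ultimately drive ONTAP's management APIs under the hood. Here is a rough, hypothetical sketch of what a single "tier this volume" click might translate to; the endpoint path and field names follow the shape of the public ONTAP REST API but should be treated as assumptions, and nothing is actually sent over the network in this example:

```python
# Hypothetical sketch of how a control plane like BlueXP might drive
# volume tiering through ONTAP's REST API. Endpoint and field names are
# assumptions based on the public ONTAP REST docs; this only builds the
# request locally rather than sending it.

def tiering_patch_request(volume_uuid: str, policy: str) -> dict:
    """Build a PATCH request for /api/storage/volumes/{uuid}."""
    valid = {"none", "snapshot-only", "auto", "all"}
    if policy not in valid:
        raise ValueError(f"unknown tiering policy: {policy}")
    return {
        "method": "PATCH",
        "url": f"/api/storage/volumes/{volume_uuid}",
        "json": {"tiering": {"policy": policy}},
    }

# "Drag and drop to tier" could reduce to one call like this:
req = tiering_patch_request("0a1b2c3d-example-uuid", "auto")
print(req["url"], req["json"])
```

The design point is that a generalist never sees this layer; the control plane turns a UI gesture into one small, declarative API change.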
All right, time for one last question, and if it's not from the audience, you guys are going to have to take it from me.
Thank you.
I was wondering, you know, there was a point in time when NetApp was talking a fair amount about hyperconverged, and we don't really hear that all that much anymore. But if we just step back and say there is a role for hyperconverged in the transition to cloud, maybe for a lot of customers, how do you think about that? Where does it fit in your discussions or your messaging these days?
So I think in my own personal discussions, hyperconverged has almost disappeared, which is fascinating, right? Given how much prominence it had just a few years ago. There are a couple of reasons. I think a lot of the people who were initially looking at hyperconvergence realized that they could go to cloud and do it that way as a service, because the move towards hyperconvergence was really about trying to simplify the stack. And if you're trying to simplify the stack, it's a good sort of slippery slope to saying: I just want someone to provide it for me as a service, right? So doing that as a service on the cloud has replaced hyperconvergence in a lot of places.
The other part is, I think customers started to move down the hyperconvergence path and all of a sudden became very afraid of lock-in, right? If you moved down the hyperconvergence path, for example, and you were on a hyperconvergence platform that used vSAN, or used VMware, you got an interesting surprise over the last couple of years, right? And so customers, even if they're not on a VMware-based platform, have been very conscious of the fact that if they lock themselves into a hyperconverged platform, they're very dependent upon a single vendor for all aspects of their pricing, all aspects of their stack. And as they're looking at how to adopt the cloud, how to adopt AI, I think they've all become very hesitant, for good reason, about lock-in.
And so we want to continue to partner and be best of breed, right? And we build these converged infrastructure stacks. So we have AI Pod with NVIDIA, we have FlexPod with Cisco, right? Sandeep talked about the OVX partnership we built out with Lenovo, right? So we build these converged infrastructure stacks that are just about as easy for a customer to consume, but if they decide they don't like NetApp tomorrow, it's all open standards-based. If they decide they want to switch to another server vendor, they can do that.
I think that flexibility, especially with uncertainty about workloads continuing to move to cloud, uncertainty about how AI is going to change everything, is for the most part, I think, scaring people away from hyperconvergence, with maybe the exception of the very, very low end, you know, sitting in some small shop somewhere. I think that's really where HCI is getting relegated these days.
All right. Well, that wraps up our session with Jeff. Thank you so much.
Thank you.
I really appreciate you, as always.
Thank you very much, Kris.
All right.
Thank you.
Thank you. All right. Yay. Okay, so now, we actually have a couple of customers that you can ask questions of. So I'm going to invite Scott and Casey up to the stage. Hi, guys. Thanks again, so much for doing this. We really appreciate it. This community doesn't always get to hear from actual customers, so I think it's really important to be able to get some real-world practitioner input in there. I'm going to just start by asking you guys to introduce yourselves, your company, and what your experience with NetApp has been. So why don't we start with you, Casey?
I'm Casey Shenberger. I'm a cloud platform architect at Hyland Software. I've been there a long time. And our company, we use NetApp to host our products as well as internally. I work in the hosting side, where people host our products in our own private cloud. We also do hosting in public cloud. We--
I'm going to give you a mic because I think you're.
Yep. All right. So, we use NetApp products. We use all-flash products for certain workloads. We have started using the C-Series NetApp products for other workloads that don't need latency that low. We use FAS systems in data centers where we don't have the performance requirements of A-Series or C-Series currently. We also use the StorageGRID products to tier our data off to, and we are moving some of our workloads into the cloud with AWS. There we use NetApp's products, Cloud Volumes ONTAP and FSx for NetApp ONTAP, as well. So we kind of use all of NetApp's products across the board.
All right. And Scott?
Hello. Wow, a little volume. All right. So you can hear me, but a little bit too much. All right. I'll try to whisper. That was good? All right. So, Scott Brindamour, I work for Lumen. I'm the Vice President of Product Management. I have four different areas I cover: our edge compute and cloud business that we provide, and I can explain some of the solutions that we do there; NetApp is part of that business. I also have our data center strategy, from connectivity to solutions on top of data centers; AI is a big piece of that, as we mentioned earlier. Our hyperscaler strategy, so how we're going to market with co-innovating and co-selling with the big cloud providers. And I also have a wholesale business, the legacy large pipes all over, serving and working with joint network customers and partners going forward. So I've been with Lumen for 18 years, going back to Savvis Communications; that's where I started.
Savvis got acquired by CenturyLink, and then when Level 3 and CenturyLink merged, we became Lumen. So, NetApp's been an amazing partner. You heard Jeff earlier talking about hybrid; Lumen's all about hybrid. Really, what we're trying to work on with NetApp: we're being positioned by our CEO and our executive team as the network for AI. There's been a lot of buzz around our stock, which has gone a little bit crazy lately with a number of the custom private fabric deals that have been sold to a lot of the cloud providers, social media platforms, et cetera. Deals in the billions, with billions more coming, that they're looking for. So this rush for AI is real.
We want to be that network, that central nervous system that connects the people, data, and applications together across wherever customers want to execute. One of the reasons why we're doing it with NetApp, we have a platform that's built on our network, which we call Lumen Network Storage, that combines the power of the Lumen network to connect the data to its destination that it needs to go to or where that data is originated. So great use case for AI.
We're in the midst of creating a solution with NetApp around how we go to market together to connect the network and the data, which I think are the two most important parts to build AI platforms, to continue to invest in AI platforms, and how you actually harness the data to use it and get customers on that journey. So that's why I'm here today. Happy to answer questions and talk a little bit more about our relationship with NetApp.
Any questions?
Yeah, I'd be curious to hear from you sort of, you know, as you did your evaluation work and you looked at across vendors, what were some of the key points that primarily led you to choose NetApp?
You want to start, or? So I think flexibility. We standardized on their ONTAP platform, as well as object storage, StorageGRID, so the combination of those products to be able to meet customer needs. We wanted to create a solution by which customers don't have to think about what performance storage they need to match to the application, a platform that's versatile, that can support whatever hybrid infrastructure and applications they want to put on it, as well as connecting to the cloud and on-premises and integrating with that seamlessly, regardless of platform. We're Switzerland. We work with every provider and platform provider out there. We're trying to make our product so it supports whatever the customer journey wants to be, and that's it.
We've done everything from the highest-end, you know, financial trading applications and real-time data access in oil and gas, to a general file system customer down in the mid-market. The platform scales to the size and the capabilities, with the ability to automate the delivery of storage to a customer, and for our larger enterprise customers we can even provide a go-to-market by which we do dedicated storage with them as well. I think the flexibility, the hybrid approach to work with everybody, it was talked about earlier when I walked in, that's the message that we had. Very good alignment of how we serve the market, how we work with the cloud providers, how we work with the data center providers together. It's a very symbiotic relationship.
They focus on what they know, which is storage and data and apps. We focus on the network, and we combine together to create solutions together, and that's really what we've been successful with NetApp as well.
We have a similar situation, except that I obviously work on the technical side. It was a similar thing. We chose NetApp because of the unified interfaces that Jeff was talking about earlier. We have a very small staff to manage all the equipment that we have, so by using NetApp, we had, you know, high-performance tiers, mid-performance tiers, archival tiers. All of that was in the same platform, with the same operating system, and the same expertise required. We looked at other platforms, and it meant, you know, we had to have one vendor for high performance, and we might have to have a different vendor for an archive tier. Well, that means you have to have people who have knowledge in both, or separate people with knowledge in one platform versus the other.
NetApp allowed us to just have one group and one set of knowledge, so that was a big help for us.
Yeah. And I would say, I always have a storage vendor knocking on my door, trying to get a piece of our stack and our customers, and it's usually an easy conversation: we've got a platform that can pretty much do what we need. The other thing I'd add, too, listening to this, is the innovation, the ability to go to market and innovate and try new things together. We've done a lot of proofs of concept, a lot of prototypes. Some of them do well. I used to run an innovation team before I moved into this role. They're very willing to jump in with us and learn together and understand where the market's going.
So that's been big for us as well as we're trying to create the next big thing in the market. AI is a good example of that as well.
Maybe a bit more specific, product-level question in terms of NetApp's recent launch of block storage products, does it materially change your buying pattern with the company itself? And then maybe a second one, just in terms of utilizing the public cloud more over time, how do you see that engagement sort of changing in terms of are you using NetApp just because that's sort of the on-prem versus evaluating other sort of opportunities as well there?
We've been using NetApp's, you know, object storage, StorageGRID, for quite some time. And as we moved to the cloud, we still needed a way to do Data Fabric, you know, or some way to offload that, so we do use S3 there. But we chose to use NetApp Cloud Volumes ONTAP or FSx for NetApp ONTAP in AWS, both because we still get deduplication, we still get compression, we still get the performance, and we have the same knowledge set, like I talked about. We stuck with NetApp because that knowledge transferred.
Maybe a little bit in the back end was different, but ultimately, the storage knowledge required to function there, and we still got all of our performance, we still got all of our compression and compaction and those things that help us to reduce our data footprint. So that's why we chose NetApp, right? They're leading that field, too, so we stuck with them in the cloud as well.
Do either of you use the ASA products, or do you use ONTAP in a block, like unified block?
Yeah. Yeah, we use unified block today, so I don't see any changes. I mean, we try to wrap it up as a solution to abstract the technology. It's flexible enough and hybrid enough, right, combined with object and StorageGRID. What I actually see now, with all the data that's being created, not just for AI but for IoT and data analytics, is that we're hearing a lot from customers that distributed is more what customers are really looking for. Obviously, the cloud providers want you to move all of it into the cloud.
More and more enterprises are taking a more distributed approach, so the ability to have a platform that can work with the cloud provider and the data and the apps you have there, but also connect to on-prem, dedicated, and anywhere in between, especially with an AI model, where the model's gonna be distributed and the data you're feeding the model is gonna be distributed. That's really where we think NetApp is a perfect opportunity for us to scale, to deliver a footprint where we need it and scale up from there without having to change the platform, adding capabilities as we go. For example, you know, BlueXP and that whole control plane and portal; the ransomware and data classification has been gigantic for our customers as well, the ones that have accessed that app.
We're working with them to productize that as a solution that we can provide as kind of multi-tenant access to customers as well. So they continue to add value on top of it, right? I see that they have the same vision that we do: distributed storage as a service with solution capabilities, where we're not talking about the storage and the technology underlying it. Obviously, the techno geeks care about speeds and feeds and are thinking about that, but ultimately, you enable customers to solve a business problem with a combined solution on top of it, right? That's where I think NetApp and Lumen are really working very well together.
Their vision is similar, that we need to add and talk about the data and the value of the data and the outcomes you're trying to get from your data, reducing the size, reducing your risk, your security profile. You know, the ransomware features are great capabilities there, and accelerating what you're trying to do with your data to get value out of it for time to market. AI is just the newest version of that, right? So.
And just to clarify, when you say distributed, you're talking about like, what, what NetApp would call hybrid cloud, right? Like on-prem through the cloud.
Hybrid cloud, and we bring what we consider to be the edge, the network edge or the metro edge, which is the edge of our network where NetApp lives. It's in the network, it's part of the network. So any customer that's using our network can use NetApp as part of that. We talk about it all the time: if you're a retail giant with 3,000 locations, putting storage in all those locations to do, you know, automated checkout using computer vision cameras and image recognition can get pretty onerous and expensive, actually putting all that gear and all that storage in every premise location.
Then by the time you finish doing it, which is usually, like, a three-year exercise, you have to upgrade the first ones, right? So we want to be able to offer an alternative to that. Some of it on-prem makes sense, but there's zoning regulations, and if you have a small footprint, there's the complexity of supporting that across all those locations. So we believe the metro location is what we call the third execution venue. Everybody talks about cloud and premises; this is somewhere in the middle. That's the metro edge capability, and that's where NetApp lives. And we enable all those abilities to do distributed, you know, applications and data workloads with that as well. So, very aligned to your hybrid cloud.
And I'd put the metro edge in there to add to that as well.
Hi, maybe for both of you, both different types of businesses, but when you think about, you know, the importance of storage in your budgeting or your, you know, your CapEx or however you want to look at it, can you talk a little bit about where that sits? I imagine it's a little bit more for Casey than for Scott. Then just, there's obviously a lot of demand for data. How do you see, you know, your spend on data solutions over time tracking with the data increase in data usage? You don't have infinite budget, so any insights you can offer on those? Thanks.
It's key to our budget, right? I mean, by hosting enterprise content, we ultimately, you know, store people's data, and we have to manage it and maintain it, so it's key in our budgeting. The key there being, you know, these features we get from NetApp help us reduce that, right? Compaction, compression, deduplication, they reduce that footprint to help us get down to, you know, a more budget-friendly answer. It's always front of mind for me. Obviously, I manage the data in general, but we sit down and work very closely with our NetApp team to make sure that we're taking advantage of all the features that allow us to reduce that footprint, but still maintain resiliency, you know, reliability, availability.
That's the key to us: don't reduce anything like that. Don't reduce reliability or resiliency. Allow to have ransomware protection, things like that, but continue to drive the cost down for, you know, per gigabyte of storage or per terabyte of storage, however we're calculating it.
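The per-terabyte math Casey describes is simple but worth making concrete. A quick sketch with hypothetical numbers of how data-reduction features like deduplication, compression, and compaction change the effective cost of storage:

```python
# Illustrative arithmetic (hypothetical numbers, not NetApp pricing):
# data reduction lets you buy fewer raw terabytes per logical terabyte
# stored, which is how these features "drive the cost down per terabyte".

def effective_cost_per_tb(raw_cost_per_tb: float, reduction_ratio: float) -> float:
    """Cost per logical TB when a reduction_ratio:1 data reduction applies."""
    return raw_cost_per_tb / reduction_ratio

# Assume capacity flash at $200 per raw TB and a 3:1 combined reduction
# ratio from dedupe + compression + compaction (both are assumptions).
print(f"${effective_cost_per_tb(200, 3.0):.2f} per logical TB")
```

Under those assumed figures, $200/TB hardware behaves like roughly $67 per logical TB, without giving up the resiliency features Casey mentions.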
Yeah, I would say that the data storage and what we're continuing to deploy and purchase is not slowing down anytime soon. It's escalating. But at the same time, I think the opportunity to optimize what you have in working with our customers, there's a huge optimization opportunity. Lots of customers are keeping a hold of lots of data that they should be archiving out. So having a solution like NetApp with object store to move it to a lower cost option to optimize what they have on their dedicated arrays with NetApp as part of the solution.
But also be able to have that tiered storage option that you can use what you need, rather than having, you know, an all expensive, high performance, all flash array that is going to cost you a lot more per terabyte than a tiered solution, by which you can lower your cost footprint as well. So us internally, I mean, with all the data assets and AI that we're doing in-house to understand inventory and customers and where we're deployed, the amount of data is going to continue to grow tremendously. We see it. We see the investments. We're moving a lot of applications and services to the cloud. We see it, it's exactly what our customers are doing as well.
So tremendous opportunity to reduce, so they can keep their budgets while they're expanding their storage as they go forward. You need to do a little bit of both. I think you're going to see it as customers start to adopt AI: they're going to try to take advantage of more of their data and more of the assets that they're not collecting and not using today. It's going to continue to, you know, increase that growth as you go forward. So the optimization piece, optimizing as much as you can and modernizing that infrastructure going forward, is huge, as well as protecting that data as it grows.
As you start to move that, you know, that key data that gives you those insights in an AI model, you need to make sure it's protected and not available to the bad guys, right? So, yep.
All right, well. Okay, Wamsi.
Kris might not like this question. Maybe she would. But I guess if you were to give feedback to NetApp, like, what would be things that from a product perspective you would say are things that maybe you're running into pushing the limits, so to speak, where you know maybe they could be more helpful or maybe they already are and you're in discussions. But what would you say are some of the key things that you would give feedback to them on?
Yeah, yeah, for me, as a solution guy and a product guy, I mean, I've spent most of my career on the sales side, too, so I kind of get multiple perspectives. But the biggest thing, and what I mentioned before, is less speeds and feeds in technology about storage arrays and their capabilities and adding and incrementing. A lot of what I used to hear was an update on the next array, the next technology, the next capability, which is great, I think, for my engineering team. But from a product and solution perspective, and a sales perspective, I have a bunch of network sellers that are trying to understand everything that goes on in that space, and then layer the storage conversation on top.
The hardest part in go-to-market has been melding those two expertises together to go after a customer jointly, where you both get a value prop and a benefit back to each other. Changing the conversation to a solution-focused conversation, where you're focused on joint customers' problems that we both see in the market, and then building solutions together to go after that problem, where we're both adding a piece of the solution, and then integrating it together as a conversation, where we're changing the conversation with customers.
So now you're not just selling to the, you know, the architect who's managing and developing the architecture and keeping it up to speed, but you're selling to the business owner who wants to take advantage of how to get value out of their data going forward, and what do they need to enable that? What do they need to experiment with AI and GenAI models? They don't have the infrastructure. Can you partner with Lumen to give them the network and the infrastructure on top of NetApp that you can deliver a solution to? So that would be the big thing. The company's definitely changed its positioning and its approach and is becoming more solution-oriented, and they may have done it with the market, but didn't go as quickly with the partners, as you would see.
So as a customer of NetApp, and a partner going after the same customers, I've really seen that shift, and I'd like to see more of it. As a person who's looking, you know, to create the next innovative solution in the AI space, they're leaning in, but it took a while before they got there. So that would be the big, big thing that I see.
From the technical side, you know, 'cause that's where I am, right? I think the biggest thing that we struggle with, or that we continue to provide feedback to NetApp on, is that they need to continue to provide better ways for automation and scale. Right, as a hosting provider, we get lots of customers coming on. We're doing more and more, and we wanna have all that performance, reliability, and resiliency I've been talking about. But we also wanna maintain our staffing levels; we don't wanna have to add staff to do that work, so we rely on automation.
As NetApp makes these shifts and they start to transition, we need them to continue to give us the ability to do that in an automated way. There are some things, here and there, that are tough to automate. So we continue to work directly with NetApp and ask them, you know, "Let's make it so this is much more scalable and much more automatable." That's our biggest feedback to NetApp, I think.
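The automation-at-scale ask here is essentially idempotent, declarative provisioning: describe the desired state once and let tooling reconcile it, rather than clicking through a UI per customer. A minimal sketch of that plan-then-apply pattern, with all names hypothetical; a real version would drive the ONTAP REST API or NetApp's Ansible modules instead of just computing the plan:

```python
# Illustrative sketch (all names hypothetical): the kind of idempotent,
# bulk provisioning a hosting provider automates. Re-running it with the
# same inputs produces no duplicate work, which is what makes it safe to
# script at scale.

def provisioning_plan(desired: dict, existing: set) -> list:
    """Return create actions only for volumes that don't exist yet."""
    plan = []
    for name, spec in sorted(desired.items()):
        if name not in existing:
            plan.append({"action": "create", "name": name,
                         "size_gb": spec["size_gb"], "tier": spec["tier"]})
    return plan

desired = {
    "cust_a_vol1": {"size_gb": 500, "tier": "performance"},
    "cust_b_vol1": {"size_gb": 2000, "tier": "capacity"},
}
existing = {"cust_a_vol1"}  # already provisioned on a previous run

for step in provisioning_plan(desired, existing):
    print(step)
```

The point of the pattern is that headcount stays flat while customer count grows: the script, not a person, absorbs the repetition.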
All right. Well, let me ask the flip side of that question, which is, right, you've both used NetApp for a long time. What capabilities did we bring that surprised you or that you learned through actually using the product?
I think probably the one thing that really caught us, and it wasn't necessarily a surprise, was the ARP capabilities, right? We were--
ARP is Autonomous Ransomware Protection.
Yeah, sorry. So the functionality, you know, kind of came around. We had been talking about needing ransomware protection and how we were gonna do that, and there were all kinds of different methods for it. And then NetApp started adding that directly on box, and now we can do it on box automatically. It's all AI-based, so it's very accurate. So that was one thing that, although not really a surprise, was a big help. We didn't have to go looking for a ransomware vendor to partner with, right? Our vendor just brought it along.
Not AARP, right? ARP.
Right, right.
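NetApp doesn't publish ARP's detection internals, but one classic signal this class of on-box system watches is the entropy of incoming writes, since encrypted data looks nearly random while typical file data does not. A toy sketch of that single signal, as an illustration of the general technique rather than NetApp's actual implementation:

```python
# Hedged illustration: Autonomous Ransomware Protection's models are
# proprietary; this shows only the well-known entropy signal such systems
# commonly combine with other indicators (rename rates, extension churn).

import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; near 8.0 for random/encrypted data, lower for text."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_encrypted(data: bytes, threshold: float = 7.5) -> bool:
    return shannon_entropy(data) > threshold

text = b"the quick brown fox jumps over the lazy dog " * 100
random_like = bytes(range(256)) * 16  # stand-in for ciphertext

print(looks_encrypted(text), looks_encrypted(random_like))
```

A real detector would score workloads over time and across files, which is why understanding a whole workload, as Jeff described earlier, matters more than scoring any single write.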
Yeah, I would say the ransomware protection. So back to what I was saying about the shift to a solution mindset: security and ransomware is huge. The ability to actually sell that as an add-on service to our customers in the security market, where we do a lot ourselves, was huge. Like, that is an added value that I can monetize with my customers going forward. And I think BlueXP in general: the data classification capabilities that are built in, the control plane to see all of your storage across your hybrid infrastructure, all of that, plus the automation and visibility and control that was mentioned earlier, is gigantic.
So, as I said, that solution shift has happened recently, and I've been surprised at how quickly they've adopted new capabilities, which benefit not just Lumen, with another capability that we can sell and add value to our customers with, but our customers themselves, who made the right choice in NetApp. They continue to get value out of the platform as they go forward. So that's been gigantic.
I know you both talked about the value of the unified storage approach, but the fact that we have kind of ONTAP underpinning everything is what enables us to bring those incremental features and have them broadly scaled. Audience questions? Otherwise, you're gonna keep hearing from me. Lou?
So the AI session from earlier said a lot of small and medium businesses aren't really yet using AI internally, so I'm not sure how big your companies are, but are you all using AI internally yet? And if actually you think that, you know, your firms just aren't big enough yet, you know, where do you see possibly that happening in the future?
Yeah, we're pretty big.
Thank you.
Yeah. I'd say 50,000, 53,000 employees. As a telecommunications company that's trying to be a technology company, we're adopting. We're huge users on the Microsoft Copilot side, so we're probably the poster child for Microsoft around Copilot. Our CEO, Kate Johnson, came from Microsoft, so there's a connection there. But it's tremendously valuable in having data at your fingertips, transcribing and summarizing meetings for me, where I am triple-booked. Like, I'm here at a conference for the next few days; all the meetings that I'm missing, I can get access to. But just the deluge of data that's available, trying to find that content across multiple systems and email and all the things that go on, is huge.
So that's one example that we put in. We started small, and then it was pretty much available to anybody. We thought the price tag originally was pretty high, and we limited it, but the value that we got out of it, right, the productivity from that solution, is gigantic. Living in a world of tremendous meetings, that's valuable to us, how you can optimize people's time, and there's a whole regular course internally of enablement and adoption of that, which has been huge. We've also been trying things as a company that has infrastructure everywhere: every data center, every cloud, you know, millions of buildings across the U.S., and in Asia-Pac as well. We sold off our European assets recently.
The ability to understand those assets, and to use AI rather than just trusting the tech that installs something or turns up a circuit. The ability to use cameras to understand what's there, what the capacity is, what was installed, was it installed in the right place?
Do I get the nice blinky lights on? Like inventory management, and actually putting that against demand, bringing that data together with, you know, sales force forecasts and demand on a particular location. A tech that's installing something for one particular opportunity or customer can look at the demand and say, "I'm not just gonna bring one chassis, I'm gonna bring two, and I'm gonna install that." So the ability to use the data in real time to train people on how to do particular jobs, all automated, and to check their work after the fact, that's been gigantic. So I think the whole process of understanding data and how to use it has been tremendous across the business.
Did you just develop that application, the last one, yourselves internally?
Partner.
Yeah.
You name it. We're working with pretty much everybody, you know, NVIDIA and Intel, as well as Microsoft and Oracle, and some of the SIs are helping us adopt it. So it's been pretty much across the board. I mean, it's a pretty incestuous relationship. We sell to them, they sell to us. The balance of trade is huge, so there's always someone looking for us to do more consulting or do other things with them as well. But the flip side is, you know, the partners are willing to invest a lot with us to use this as a use case going forward, and we do the same with them. So it's been really amazing.
I think it takes a big company that has the capital to do it, and capital is tight in these times. We think we've got a tremendous return, but we're learning what we need to do and how we need to go about it. I think it's very similar to where enterprises are. They're getting their feet wet, and they're learning how hard it is, how they need to focus, and where they're gonna maximize the value as well.
Casey, any comments from you?
We use-- I mean, we have AI, like you said, for meetings and, you know, some Copilot stuff. But really, I'm not super involved because I'm hosting customer data. I'm managing that underlying infrastructure, so it would be more in the software that we write and that we host. And that's, you know, Hyland. We actually have our CommunityLIVE this week, so we're gonna have some new announcements about what we're doing with AI and where we're going there, to allow our customers to have a more seamless integration with AI to get access to the data that I store and manage.
Today I don't really use it a lot, but it's coming in our products, and it's coming in our cloud, and we will definitely, you know, embrace that.
Thank you.
We have one more.
Thank you. Maybe we could get both a technical and a business perspective on this question. We do hear from some of NetApp's peers about how power has become an increasingly important consideration around storage. So, from a tech perspective as a practitioner, how much of your power budget is actually being consumed by storage, and where do you see that going? Is it different between all-flash versus disk? And from a business perspective, how important is it? I know you guys committed to about 10% of the fiber capacity from Corning or something like that recently, right? That's a lot of fiber, building a ton of capacity out there.
As you're building these large data centers, is your storage strategy going to need to change? And how do you think about power in that context? Thank you.
Yeah, power is a big part of our storage budget. And, like you said, it's directly related to the type of storage that we're using. So as we do more performance, SSD is huge, right? That gives us more I/O and less power, and we get it in less space. It's why we've embraced using C-Series. We have plenty of workloads that don't need sub-millisecond latency, but that still need, you know, high I/O access with reasonably low latency. That allows us to get a lot of capacity with a lot less power.
Because we're able to do that, and we combine it with tiering, that same unified approach we've been talking about all day. Maybe that archive tier is disk, but it uses less power overall because it's kind of just there and not accessed nearly as much. So with all these new things, we sometimes choose which NetApp platform we put our workloads on based on, you know, power consumption or how much space they're going to use in a data center. So that's how we handle that piece.
Yeah, power is always a big cost and consideration, especially when you're distributing infrastructure everywhere. I think I mentioned earlier about having a platform that's scalable and modular, that can meet all the use cases. The build-it-and-they-will-come kind of days at Lumen are gone. So optimizing what we think we need now, and how we can easily scale up, has been a big piece of what we do. And we optimize the same footprint everywhere to make sure we have a scalable, supportable power infrastructure and cooling infrastructure as well. Now that we're getting into GPUs and things of that nature, those are orders of magnitude over the storage system, which is, you know, unified and supporting all workloads.
So I think from the storage perspective, we're pretty comfortable with the model we have, dedicating capacity where customers need more power. We've had big financial services and enterprise customers that have deployed dedicated capacity in areas, and we make sure we're starting small and building up, so that we're not consuming a bunch of power we're not using. That's been what we think about. But going forward, as you distribute network capacity and compute and storage and security capabilities across the platform, it's becoming harder and harder, because a lot of these destinations are old central offices where we had all the telephone gear, which is now in the network and now virtualized.
So we think about how to put it in at low power. We're always looking to optimize the power of every device that we put in, and how we can contain it, because power goes hand in hand with heat, right? And the heat in some of those facilities, being able to cool them, do liquid cooling, and some of those innovations that a lot of vendors are bringing. So we've been talking to NetApp about how we optimize that as much as possible, but the modular systems approach helps with that tremendously. We don't have to deploy something gigantic and then consume a lot of power, which is a cost we can't get back because we're not optimizing.
That's been the big piece of it going forward.
All right. Well, since the questions seem to have died down, I'll ask one final question, not necessarily specific to NetApp, but more around how you're thinking about the future of storage and your data infrastructures. When you look on the horizon, what do you think the biggest things coming are for both of you, in terms of opportunities or challenges?
I think the opportunities we've been really thinking about are composability of systems, and composability of data and storage, so that I don't have to buy up front. I can assess and basically have a customer compose a system on demand, whatever they need, whether it be compute, memory, storage, or, you know, network backplane. In my innovation time, we've looked at a number of vendors down that route. So how can I actually get integrated systems that have all of that together, where I could build that modular, on the fly? We've talked to NetApp a little bit about that as well.
Not giving away any research and R&D that we've done, but I think that's a piece of it: how do you deliver just enough storage, just enough compute, enough memory to support the requirements of an application without pre-building and waiting for demand to come? But the capability I think that's already started is the AI capability on top of that: understanding the data, categorizing that data, and making you aware of, or even automating, some of the maintenance capabilities around that data. Segregating the data off, working across the infrastructure layers and the stack, including storage. Our vision, or my team's vision on the edge side, is to create an environment where the application dictates the infrastructure that you need.
So if you have composable components in a solution that you're delivering, including the storage, how can the application understand and predetermine how much storage it needs, what type of storage, where it needs it, understand the network and the compute, and build the system it needs to support what it's doing now, then adjust as it goes forward? So kind of that autonomous system that's composable, that you can build up. And how do we support that in a business model going forward with NetApp as our primary storage partner? That's one of the things that we think about and have been trying to do.
We're probably early on in the technology, but that would really optimize things, where we're not putting a tremendous amount of infrastructure in one place or building lots of racks in a data center. We can really compose it and have just-in-time inventory, where I can swap out the latest, greatest all-flash array for something else as the technology leapfrogs. It's big with GPUs: I don't want a GPU that costs me, you know, $60,000 sitting there not being used, and even if I can afford it, it's locked into a server in a system. Same thing with storage. How do I unlock that and create systems that are modular and snap together as you need them, like Lego blocks? So that's what I think about.
I think the future, for us, is probably very similar, right? Storage is growing at an unbelievable rate. It's not getting any smaller. We need more analysis, more ability to know what that data is and where it's going. So for the future, it's just gonna be to continue to optimize. As we're storing that data, how do we optimize the space, the power, you know, its resiliency? How do we optimize the intelligence that we have about the storage so that we can classify it properly and move it to the right location? Those are all things that we work with NetApp on, and we work with other vendors. Even on the compute side, it's the same thing, right?
We need to optimize all the capacity that we're using. So that's the future for us. We just continue optimizing that as much as possible to reduce those footprints.
All right, Nita.
Hey, just maybe on that. You know, over the past couple of years, you probably went through a bunch of cloud optimization type initiatives. Is how you look at that optimization different in any way, pre or post some of that evaluation you might have gone through in the past couple of years?
I don't know if it's any different. I mean, it is somewhat different because the technology changed, right? In the past, it was just: Can I put it on SSD versus spinning disk? Those were the only two options. But now we have single-level cell, we have multi-level cell, we have quad-level cell, right? We have lots of different SSD options. So that optimization now, and NetApp has grown that way, too, right? They originally only had all-flash, and that was on single-level cell. Now they have the C-Series. So as we continue to work down that optimization, as vendors bring multiple options, that gives us the ability to keep refining it.
So it's changed in that way, but it really hasn't changed in that we're still always looking for a way to optimize and put workloads where they belong.
Yeah, I think more from a solution and product perspective, the cloud providers have almost abstracted a lot of the technology away and gone to services. You know, what's the application approach? What's the service you actually need? They've kind of abstracted the infrastructure away. They've made it easy, but at the same time, a lot of your eggs are in that basket, right? So I think that's where NetApp has helped us tremendously, to kind of be that neutral platform. But I don't see it any different, optimizing cost. I just always see the pendulum of people moving a lot to the cloud, then trying to move away from the cloud. I don't think that's ever gonna change.
It depends on where you are in your maturity as a company; you go through those patterns as you go forward. But again, simplifying, and the ability to offer storage as a service, as a solution, that's kind of our vision of what we try to do. NetApp's been helping drive in that direction as well, but also being hybrid, so they can participate with the cloud, participate on-prem, and participate with us on the edge, wherever it may be going forward, abstracting that capability away and supporting the application and what it needs. So that's the thing I have seen change a little bit, with the cloud providers leading in that generation.
But there's another angle that you need as well: private, and, you know, close to where your users are, et cetera, that the cloud's never gonna get to. Maybe it will one day, but we're not there yet. There's always gonna be enough data and apps and performance that you need locally. But having that hybrid approach that supports whatever you need is gonna be gigantic going forward.
All right. Last opportunity for a question. Otherwise, I'll release Casey and Scott back out to talk with other customers. Going once, going twice. Okay. Well, Scott and Casey. Thank you guys so much for coming. I cannot thank you enough. I'll probably take that from you.
Thanks. Good job.
Good job. I really appreciate it. Thank you so much.
Wow. Thank you.
All right, well, we're done. So thank you to everyone on the webcast. If anything piqued your interest here, please don't hesitate to reach out to the IR team. We're happy to get to any follow-on questions or connect after this event. For those of you who are here, lunch is outside. So thanks for staying for a late lunch with us. Again, there will be the keynotes later today at 4:30 P.M., and then the show floor is open after that. As Jeff mentioned, there are some announcements that you could expect to see in the coming days, once we kick off Insight officially with the keynote today. So thank you again, all, for coming, and always don't hesitate to reach out to the IR team if you have any follow-up questions. Thanks.