Thank you for joining today's Tech Talk with NetApp on Intelligent Data Infrastructure for AI. I'm Jason Ader with William Blair. I'll be your host. I'm very pleased to introduce the team from NetApp: Andy Sayare, Director of Strategic Alliances. I hope I pronounced that correctly, Andy. Russell Fishman, Senior Director of Product Management, and Kris Newton, VP of Investor Relations. Before we get started, Kris will read a Safe Harbor statement.
Hi everyone. Today's discussion may include forward-looking statements regarding NetApp's future performance, which are subject to risk and uncertainty. Actual results may differ materially from the statements made today for a variety of reasons. Please see our most recent 10-K and 10-Q filed with the SEC and available on our website at www.netapp.com. We disclaim any obligation to update information in any forward-looking statement for any reason. Back to you, Jason.
Okay, thanks, Kris. Welcome, everyone. I'll go through some questions for the team, and then we'll have some time for audience Q&A. Please pop your question in the Q&A box at the bottom of your screen. We'll get to as many questions as possible. Maybe just to get started, Andy and Russell, can you talk about your specific roles at NetApp?
Sure. Thanks, Jason. As you said, I'm Russell Fishman, and I lead product management at NetApp globally for our solutions business. That covers a variety of SIMs, but for the purposes of this conversation, that means AI solutions. I've been doing AI solutions at NetApp for about five years. I've been at NetApp for just about 10 years; prior to that, I was at other Silicon Valley companies.
Great. Andy Sayare here. I lead our AI alliances. I've been working with NVIDIA, which was our first AI alliance, for about six years now. We've also added a number of other partners, and we can get into that in a bit. I've been at NetApp for 11 years total.
Okay, excellent. The first question that I'll pose to you guys is just a very high-level question about the AI landscape today as you see it. Maybe just talk about what you're seeing out there, how the space is evolving, and where we are in the maturity of the market for enterprise AI.
Yeah, I'll take that, Jason. Look, I think we're at a critical juncture in the adoption of AI. Most organizations have struggled to deliver ROI on their investments in AI. We see maybe as much as 85% of these AI projects never making it to production. The concept of just doing AI just isn't sufficient any longer. What we see business leaders asking for is more certainty in the outcomes delivered by AI. It's driving a reckoning whereby AI is starting to be treated like any other enterprise-class IT service, albeit one with significant and transformative potential upside. Enterprise AI concepts such as manageability, high availability, security, and data governance that, frankly, up to now have been mostly overlooked during this gold rush of AI are suddenly being considered upfront during the planning phase.
For us, that's a fantastic place to be, because you're talking about things that NetApp does very, very well. What we're seeing is that AI is driven by data that exists in organizations, and the ability to extract and make that data useful is what we're starting to see customers really focus on. That idea of leveraging latent enterprise data for AI use cases is really critical to their success. I think we are just on the cusp of moving from a bunch of early movers from a customer perspective to mass market adoption of AI. That will be a combination of organizations looking at custom use cases as well as organizations being willing to take more turnkey solutions, right?
Rather than having to build it all themselves, being willing to take something that's more off the shelf and apply it to their organization with a much higher certainty that the ROI they're seeking will actually come to pass.
Gotcha. So we're moving from experimentation to production. Do you think this will happen in the next 12 months, or do you feel like this is still a little ways out?
No, I think it's happening right now. I think what you're seeing is early adopters have gone through the pain, right? They've learned why. Honestly, we vendors such as NetApp, the market, I would say even industry analysts have started learning what makes a project successful and what doesn't make a project successful, right? Why does a project, an AI project that goes through a proof of concept stage, not make it to production, even if it passes POC, right? I think what we're seeing is organizations are quite rightly taking a more cautious approach on AI because the costs involved are significant and the rate of change in the market is significant too. If you're going to make a play here as a customer and make an investment, you want to be sure that that investment is going to pay off with some level of certainty.
I can't think of any other part of the technology landscape where a 15% success rate for a project is in any way acceptable and we shouldn't stand for it. I think as we start to see that number tick up and we see more certainty in these investments, yeah, we're going to see a rapid advancement in customers wanting to adopt AI because at the end of the day, the opportunity that AI offers in terms of an ROI is significant. It really is transformative. We see a number of repeatable use cases occurring here. It won't be a case of, can I get ahead of the competition as a customer? It's going to be, hey, I just need to keep up with what my peer group, the other companies who I compete with on a daily basis, and how they are leveraging AI.
Where do you see the most bang for the buck on use cases right now from your customers?
Yeah, and that's a really interesting question, Jason, because I think the standard answer that you'll hear is, well, it's a bunch of these highly horizontal use cases, right? Chatbots and customer service agents in particular will be the ones that most folks go to. Then you hear, of course, a bunch of folks talk about personal productivity enhancements, sort of the co-pilots of this world. We tend to see that the personal productivity stuff, while it does have value, doesn't return as much as organizations would hope. It essentially gives folks more productivity, but how they use that productivity isn't necessarily to the benefit of the organizations they serve. More water cooler time is maybe how I would describe it. I'm not saying it doesn't work; I'm saying we're not necessarily seeing the value hit the bottom line, right?
Which is the big challenge here. These use cases around chatbots, and specifically as we move to more of an agentic view of AI, that absolutely is real. It helps the bottom line because it obviously makes serving customers a much cheaper proposition. Interestingly, it also drives a significant uptick in customer satisfaction, because customers get served quicker and more accurately. There are very few times when you talk about cost optimization in this world where there's a customer benefit as well as a supplier benefit. We are definitely seeing that in these chatbots. There's a lot of risk associated with that, which I think is part of the reason why you see some trepidation amongst customers in terms of adopting this. The concept of agents running wild and doing things that they shouldn't be doing is a very realistic concern, right?
Frankly, we have societal concerns as well, societal norms about what people are willing to accept in terms of the role of AI in managing their customer relationships. These are all real concerns. It's something, I'll tell you, that data management organizations such as NetApp think very carefully about, because the management, control, provenance, security, and overall data governance required to make AI successful starts with the data, right? If you manage and control the data the right way, you get a better outcome. There's education that we have to do amongst buyers, and an organization like NetApp can help simplify that experience with what we call an intelligent data infrastructure.
Essentially, it's an environment where the data is managed the right way and the value of that data is extracted in a way that makes it useful for AI.
I would add that one of the ways our customers are mitigating some of this risk with data governance is thinking in terms of solutions that are departmental as opposed to enterprise-wide, right? When you're talking about someone from marketing not being able to get access to data from finance, that's a set of governance challenges that a company must address before they deploy large-scale solutions. If you were to, for instance, deploy a solution inside the legal department of an organization, that's a kind of closed area where everyone can work more effectively by leveraging AI. If I can feed contracts into my AI system and have it look for risk inside of those contracts, boy, that's a help to the individuals in that department.
We do not have to really worry about that data leaking elsewhere in the organization or outside of the organization, right? Right. More verticalized solutions to some extent. Yeah.
Yes, absolutely.
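To make the departmental-scoping idea concrete for readers, here is a minimal, purely illustrative sketch (not NetApp's implementation, and all names are hypothetical) of restricting an AI retrieval corpus to a single department so that documents from other departments can never reach the model's context:

```python
# Illustrative sketch: scope an AI corpus to one department so data
# from other departments (e.g. finance) cannot leak into its context.
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str
    department: str   # e.g. "legal", "finance", "marketing"
    text: str

def build_department_corpus(docs, department):
    """Return only the documents a departmental AI solution may index."""
    return [d for d in docs if d.department == department]

docs = [
    Document("c-001", "legal", "Master services agreement ..."),
    Document("p-017", "finance", "Q3 payroll summary ..."),
    Document("c-002", "legal", "Vendor NDA ..."),
]

legal_corpus = build_department_corpus(docs, "legal")
print([d.doc_id for d in legal_corpus])  # ['c-001', 'c-002']
```

The governance win is that the filter is applied once, before indexing, rather than relying on the model to respect access boundaries at query time.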
Okay. And then just to level set for the audience here; I want to get into the hardware and some of the storage opportunities in a minute. You guys have been around the tech space at NetApp. Does this wave, this platform shift that we're going through, feel similar to the early days of the cloud? Or in what ways does it feel different?
Yeah, that's a good question. Look, I think it'd be fair to say that AI probably offers the greatest transformative business opportunity that we've seen since, I would say, since the inception of the internet, right? I don't in any way want to underplay the importance of what we're seeing with AI and the opportunity it offers us. That being said, where we've gone through these waves previously, Jason, what we have learned very quickly is that it's easy to look at something like this and say, well, it's all brand new, it's all different, right? The only way to go about solving for this problem or this opportunity is to start again, right? What we've learned very quickly is that that isn't a very good approach for most organizations, right?
What we actually want to do is extend what customers do today so they can adopt AI in a more seamless fashion, rather than making it as jarring as, hey, you need to start all over again. In the data space, that means adapting data to AI rather than starting all over again with your data. Because at the end of the day, the data that organizations sit upon is the value that they have. You're combining the data with AI to get the outcome you're looking for. The key is to extract the value out of that data and make that data work for you in AI rather than say, hey, listen, there's a whole new thing going on. Now, you've obviously seen a lot of activity on the compute side, on the server side, for example, right? A lot of spend there. That's because at the end of the day, you need to buy new and unique accelerated compute platforms to do the things you need to do with AI. With data, it's more about adapting. That's not to say we don't look at AI as a workload and say, hey, listen, there are some new things we need to do here, right? From a portfolio perspective, NetApp has been and continues to evolve our platform to do the best possible job with AI, helping the customers that we have, with the 100-plus exabytes of data under management running on NetApp systems, find ways to take that data and make it work for AI. Absolutely, we're doing that.
I do not want to make it sound like it is completely jarring because the reality of this is, as we look to sort of make AI more mainstream, the simpler it is for customers to adopt in their existing environments, the greater the adoption will be.
I would also add to that. Historically, when you look at new innovation in technology, the usual path is that the technology eventually disappears; it gets embraced into everything. Instead of cloud being a thing, cloud is now part of enterprise AI data strategy, or enterprise data strategy in general. Similarly with virtualization, or name any innovation that's happened in our lifetimes; those tend to be abstracted into everything else that we do. We're clearly not there with AI yet. It's early days, but eventually this will follow that path as well.
Okay, great. When you look at the NetApp portfolio, and Russell, you kind of alluded to this, where do you see the biggest opportunities to adapt that portfolio for the needs of AI? And does the portfolio need some rounding out?
Are there things on the roadmap that you feel that are going to be important for you in order to really fully capitalize on this opportunity?
Yeah, it's a good question. I would start by saying that, fundamentally, we think AI is probably one of the most hybrid workloads the industry has ever seen. I will make that statement upfront because it will be important in what I'm about to say. For us, there's no doubt about it: data growth is accelerating. There are significant challenges that customers need to address in terms of being able to gather and manage data in a unified fashion and extract the value out of that data. They need to be able to manage that data in a consistent way throughout whatever AI workload they are targeting. That's everything from traditional prepare to train to deploy. Some customers will obviously go straight to deploy because they don't want to go through training.
That was really more the turnkey idea I was talking about before. The big opportunity for NetApp in particular is the unification of all those requirements into a very consistent way to manage data. It is something that NetApp has obviously made investments in, certainly for the last decade, especially in cloud, that make that very simple. Whether those workloads start off or end up in the cloud, or just consume cloud services as part of them, we make that experience seamless. I think the reason that has become so important, not even just from a simplicity and cost-effectiveness perspective, Jason, is that we see a rapidly emerging regulatory environment around things like the AI Act in the European Union, which, as many of your attendees will know, comes into force next month; we start to see enforcement next month.
Of course, that affects anyone who does business in the European Union as well. What it's asking for is a measure of control and management of data throughout the AI data lifecycle that you're going to really struggle to deliver if you don't have a unified way of doing it, right? It puts organizations like NetApp at a huge advantage as AI becomes more mainstream, because those sorts of things that are needed from a data governance and security perspective, back to that concept of enterprise AI, become even more important moving forward. We see data growth for sure, but the unification of that experience from a data manageability perspective is probably even more important. I will just add, from a portfolio perspective, which is one thing you specifically asked about: we're continuing to evolve our portfolio.
NetApp has been working in the AI space for over seven years now, and our portfolio has continued to evolve. We made an announcement at our user conference last year, which is known as Insight, about some new products that we will be introducing to continue to expand our ability to effectively manage AI workloads. We do it today very successfully, and we're going to do even more in the future. We talked about two products in particular: our disaggregated ONTAP product and our AI Data Engine product. Between them, they allow organizations to effectively manage and scale their data storage and data management for AI workloads, as well as extract value from their latent data, which I think is going to be the big difference between the winners in AI and the losers.
It's going to be those that are able to harvest their data effectively.
Great. Can you just double-click a little bit on the challenges that AI brings to bear for the data storage folks at an organization? You talk about unification, and I don't know if our listeners understand exactly what that means, because some of them may not be that technical. Maybe just elaborate a little bit on the challenges that AI brings to the fore around storage.
Yeah. That phrase unification, I guess we should double-click into it a little bit. Look, organizations don't have data in a single place. They tend to have it scattered. It tends to be in many forms as well, right? From a storage perspective, a data perspective, we tend to talk about the idea of structured data, unstructured data, and semi-structured data. We also talk about the different forms of data; we call it multimodal. Essentially, what that means is different types of data, whether it be text, image, video, audio, you name it. In an office environment, it could be PDFs and PowerPoints and Excel and what have you. All of that information is actually pretty difficult to bring together. Firstly, it's scattered. It may be scattered on different systems. It may be scattered in different physical locations.
What you need to do in AI, first thing, is to bring that data together so that you've got access to all the data you could possibly need to drive your AI outcomes. Getting that data together, which by the way is a non-trivial issue, tends to involve things like data lakes to actually collect that data and manage it. Then it is all about extracting value out of that data. That is not just about retrieving information from the data; it is starting to understand the nature of the data, right? An example would be something like data classification. What types of data are actually in there? Is it personally identifiable information, for example? Are there other types of classifications? NetApp has hundreds of them, actually, that we apply to data.
Do we understand the quality of the data relative to other sources of data? Can I make sense of this gigantic pool of data so that the data is not just thrown at something? We start to weight the data so that the more important data is considered first, ahead of data that might be of poorer quality. That is the sort of stuff that these guys and gals are starting to deal with. I will point out, Jason, that the storage folks have not really been that involved in AI up to now. That is changing, right? I mean, what we have seen, and we have seen this across so many of our customers, is that it is coming down from the board, sometimes the C-level, but definitely the board level, that there is a net incremental investment being applied to AI projects.
They're typically driven with a view that there's a gold rush, a rush towards these opportunities, where traditional constructs and limitations that exist inside organizations around things like IT and governance and CISOs and CDOs can essentially be ignored for the benefit of being able to react quickly to the opportunity ahead. The problem with that is that's why you get the 85% of projects failing. When you don't include those people, you exclude the people who actually make these projects go live in production, integrated into core business processes.
I think what we are starting to see is that some of our more traditional buyers are now way more involved in that decision-making process than we had seen initially, which obviously is only a good thing for an organization like NetApp. Of course, we have built and fostered relationships with the new personas, the new buying centers, the AI practitioners in particular. And of course, we have continued strength among the more traditional storage buyers due to our existing install base and general knowledge of what we do in the industry.
One distinction that might be interesting for the listeners here is that when data scientists or developers build a proof of concept, they have an idea; they want to prove that AI is going to actually do something valuable for the organization. When they pull that proof of concept together, you better believe that they handpick the data, they scrub it, and they ensure that that data is going to help deliver the outcome they want. Then you take that same proof of concept and move it into a production-scale system, and suddenly you're talking about maybe hundreds of times the amount of data that the POC used. There is no way to scrub that data to the same level as in the proof of concept. The result is a number of failed projects. We see that this transition from building these proofs of concept to production systems is non-trivial, and it requires, as Russell was pointing out, many of the elements needed to make a successful production AI system.
Great. Thank you, Andy. On the unification concept: we have heard one term in storage for many years, probably more than a decade, which is the global namespace. Is that part of the approach with NetApp's unification of storage? And maybe just talk about, for non-technical folks, what that means.
Yeah. I mean, the most important thing for us actually is creating a global metadata space, which we are absolutely focused on. Metadata is essentially an extension of the file data, if you will, of the actual information contained in a file: a bunch of other stuff that sits around the file, more like inferred information about it. Those comments I made before about things like data classification would be a good example of a type of metadata, as would data quality. Data lineage would be another example. We are definitely busy adding a global metadata store to our systems. We are incorporating some elements of a global namespace into our solutions too; that is something we have previously announced. I don't want to overrotate on that, though.
I mean, I think the global namespace is a feature that makes sense in certain use cases. But for the most part, if you look at the totality of a data pipeline, you do not necessarily need to, or should, access all the types of data at all points in that pipeline, right? For us, it is more about the ability to seamlessly access and move data wherever it is needed. That is more than just a namespace. It has a lot more to do with the ability to efficiently and effectively synchronize data between multiple sites, multiple storage systems, and, more importantly, between on-premises and the cloud, particularly the neoclouds, which are these new GPU-as-a-service clouds that we have seen come to the fore recently. I think that is probably a bigger challenge for people to focus on than even the namespace piece of it.
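As a toy illustration of the global metadata idea discussed above, a catalog can sit alongside the file contents and record derived metadata such as classification tags, a quality weight, and lineage. The regexes and scoring below are illustrative stand-ins, not NetApp's actual classifiers:

```python
# Hedged sketch: derive classification, quality weight, and lineage
# metadata for a file, the kind of record a global metadata store holds.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def classify(text):
    """Return the set of PII classification labels found in the text."""
    return {label for label, pat in PII_PATTERNS.items() if pat.search(text)}

def catalog_entry(path, text, source):
    tags = classify(text)
    return {
        "path": path,
        "classifications": sorted(tags),
        # Toy quality weight: de-prioritize PII-bearing data.
        "weight": 0.2 if tags else 1.0,
        "lineage": source,
    }

entry = catalog_entry("/vol/hr/notes.txt",
                      "Contact jane@example.com, SSN 123-45-6789",
                      source="hr-share")
print(entry["classifications"])  # ['email', 'ssn']
print(entry["weight"])           # 0.2
```

In practice such metadata is what lets a pipeline weight trusted data ahead of poorer-quality data and flag PII before it reaches a training or inference corpus.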
Yeah. That's a good segue to my next question. A lot of investors, I think, have been told the story around AI as being basically 100% cloud-based: everything's going to be in the cloud, and therefore certain vendors that are heavy on-prem will be disadvantaged over time because more of the training and the inferencing for AI is going to happen in the cloud. Can you just talk to that concern that some investors might have about a vendor like NetApp, where cloud is maybe 10% of the business and you're still 90% on-prem? If AI shifts more energy and dollars to the cloud, what's the impact to someone like NetApp?
Yeah. I mean, firstly, I think the way we tend to think about it inside NetApp is that we are neutral in terms of our ability to serve customers that want to leverage the cloud or on-prem for AI. It's more than just the underlying storage capabilities that exist in both of those places, Jason. Because of the closeness of our relationship with at least the three main hyperscalers, AWS, Microsoft Azure, and Google Cloud, we have a level of integration with their own PaaS and SaaS services that no one else in the industry has, certainly not in any way that's consistent across all three of the hyperscalers. For our customers, we can take a much more customer-centric approach to this rather than driving them to either on-prem or the cloud.
Now, that being said, to answer your question specifically: we all saw, when cloud first got introduced, a massive focus on driving customers to be all in on cloud, and we saw customers who wanted to be. What we've seen since, of course, is some pullback from that, right? We've seen some repatriation. Actually, I don't like the phrase repatriation; I prefer the phrase rebalancing, because that's really what this is all about. It's about balance. Anything dogmatic is typically the wrong answer. The organizations that are able to find the right way to balance their investments on-premises and in the cloud, and take advantage of the reasons to do both, are the ones that are being the most successful. I'll give you a couple of small examples, right? There is absolutely no doubt that organizations can get started very quickly in the cloud. The hyperscalers have an incredible array of platform- and software-as-a-service offerings in AI; with basically a credit card, you can get started and potentially scale from there, right? It's super impressive stuff. Of course, there are challenges with that. There are cost issues. There are data gravity challenges. There are even regulatory issues, like the Digital Operational Resilience Act, where certain types of organizations, certain verticals, are required to be multi-cloud. You can very quickly find yourself locked into a single-vendor view by being overly dependent on particular software stacks. Now, that's not to say it doesn't make sense sometimes. It absolutely does. We support an inordinate number of customers who are all in on the cloud.
What we see, probably more often than not, is a balance between cloud and on-premises: leveraging the cloud where it makes sense as part of an AI data pipeline or workflow, and using on-prem where it makes sense. It could be due, as I say, to data gravity, or it could be the ability to burst bursty workloads into the cloud. For many organizations that use NetApp, we make that a reality because we have the ability to make that experience seamless, with a singular control plane and a consistent data plane that no one else in the industry has. I think what we have to do, as an education, is to show organizations, particularly where we don't have the footprint today, how we can bring those two worlds together and make AI adoption easier, more effective, and more efficient. I think data has a massive role to play in that. Again, I come back to my initial comment: if you're dogmatic about the way you go after AI, it's probably the wrong answer. You want an adaptable and flexible data infrastructure to be able to best adapt your environment to the changing needs of your business. That's something that NetApp is really focused on.
Anything to add to that, Andy?
I mean, NVIDIA is going gangbusters, and they are selling quite a bit of on-prem gear. I mean, so there's kind of proof in the pudding there that it is not all in the cloud.
Okay. Awesome. We're hearing a lot about demand for kind of fast object storage. I assume that a lot of that is tied to some of the AI opportunity. What does that mean? What does fast object storage mean? Maybe just talk about your portfolio and how you're positioned for some of those opportunities.
Yeah. I mean, I think it's probably both fast file and fast object storage; that's probably how I would describe it. Listen, performance is an important element of the AI experience, for sure, for most customers. I think most folks tend to think about it through the lens of training because that's where all the big iron goes, right? And don't get me wrong, it is important there. Interestingly, the training space has traditionally been driven by file rather than object, by the way, and in that space there has been, I would say, some standardization of what performance should be like. NVIDIA has absolutely taken a leadership position in establishing what good looks like, right? They have a number of certifications in place around a product line called SuperPOD, where they establish basically what performance level they're expecting from storage vendors.
We have gone in and gotten that certification. It's a pass/fail, Jason, right? The reason it's a pass/fail is that you either meet the standard or you don't, because ultimately there's only so much data these GPUs can consume. You need to be able to feed the data at the right speed and maximize the utilization of those GPUs. Frankly, any performance above that doesn't really help, because the GPUs can only consume it at a certain rate, right? The reality is that there's not a lot of differentiation in the market around that, at least at a performance level, because NVIDIA has set the standard. A bunch of vendors have come in, including NetApp, of course, and we've met the standards.
Where we differentiate around the training environment is, as I mentioned earlier, that consistent method of managing data throughout the lifecycle: not just the training part, but also the data preparation part, all the way through to inferencing. It is worth mentioning that folks tend to think about AI as a single workload. It is really many workloads, and the performance characteristics required at each stage of this pipeline are quite different. At scale, at the inferencing level, the performance characteristics for storage are a bit different. We have absolutely been optimizing our portfolio to support all of these different areas, but with a very consistent way of doing it. That is probably where we differentiate quite a lot. Now, I have not spoken a lot about object yet, and it is worth mentioning. If you look at, for example, the plethora of open-source tools out there, many of them consume data primarily through an object interface. NetApp introduced file/object duality into our products some time ago, maybe a couple of years or more ago now. That file/object duality enables us to serve, out of a single data source, all the tools necessary to construct an AI application, whether they want to consume file or object. Fast object is an evolving part of the market. There is at least a concept of a fast object standard, what would be known in technical terms as S3 over RDMA, but it doesn't exist as a standard yet. It is certainly something that NetApp and the industry as a whole are looking at. We believe that fast object serving of model training is a possibility in the future, but it is not mainstream today.
Okay.
Gotcha. So most of the training today is done on file-based?
Correct. Correct.
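To give non-technical readers a feel for the file/object duality mentioned above, here is a minimal sketch of the same dataset being addressable both as a filesystem path and as an S3-style bucket/key pair. The mapping convention (export root, bucket name) is hypothetical, purely to show the concept:

```python
# Illustrative sketch: one dataset, two addresses. A file path under an
# NFS-style export maps to an S3-style (bucket, key), and back again.

def path_to_object(path, export_root="/vol/datasets", bucket="datasets"):
    """Map a file path under the export to an (bucket, key) pair."""
    if not path.startswith(export_root + "/"):
        raise ValueError(f"{path} is outside export {export_root}")
    key = path[len(export_root) + 1:]
    return bucket, key

def object_to_path(bucket, key, export_root="/vol/datasets"):
    """Inverse mapping: an object key back to its file path."""
    return f"{export_root}/{key}"

bucket, key = path_to_object("/vol/datasets/train/shard-0001.tar")
print(bucket, key)  # datasets train/shard-0001.tar
assert object_to_path(bucket, key) == "/vol/datasets/train/shard-0001.tar"
```

The point of duality is that a file-oriented training job and an object-oriented open-source tool can both reach the same bytes without copying the data between two systems.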
Okay. And is it wrong to think, I mean, I guess I've read or heard this for a number of years, that because of how fast the GPUs are, storage is actually a bottleneck? I know for a while it was the network that was the bottleneck, and now it feels like the network has gotten a lot faster. But is storage a bottleneck when you think about some of this data center infrastructure?
It could be. I mean, firstly, the reason you said it the way you said it, Jason, is quite accurate, because it's leapfrogging, right? What happens is we see a marked jump in one of these areas, the requirements go up, and then we all jump up, and it does this all the time. At any one point in time, it could look like one thing is the bottleneck, but then next week it's the other thing. I wouldn't get too wrapped up in that. I will say that those performance requirements do evolve pretty quickly, and it's certainly a job for organizations like NetApp to keep up with those standards.
I don't want to underplay the massive amount of work we do to make sure that we continue to deliver the performance required at the absolute highest levels, by folks like NVIDIA in particular. That would be all I'd say. I don't know, Andy, if you've got anything to add to that.
No, I think you covered it well.
Okay. Okay.
Okay. I want to shift gears to the competitive landscape. I just wanted to address two major topics. One is, can you talk about maybe some of the emerging players, emerging vendors, whose products were purpose-built for kind of high-performance computing and maybe AI use cases, and whether you're seeing more of those types of players now in the market. Beyond that, how do you explain very simply to folks what your competitive differentiation is when it comes to AI workloads?
Yeah. I mean, of course, AI, we mentioned the word gold rush earlier, right? There's a gold rush for AI, and a number of vendors, new entrants into the space, startups, have come in, and it makes sense for valuation reasons to talk about AI a lot. I don't blame them for that. I'm sure they raise really well based on the fact that they're AI. Whether they actually are or not, and how many of their customers are actually consuming the AI components, I couldn't tell you. I have my suspicions, but that's a different issue. Look, I've been in the storage business for a while now, and I can tell you this isn't the first new workload that's popped up, right? We see new workloads pop up every few years. Each one tends to encourage new entrants who want to get into the storage space, because the multiplier in the valuation for a storage company compared to one that's focused on a new workload is very different. Of course they go after these new workloads. It's to their benefit to talk about these new workloads as if they need something that is completely and fundamentally different from what came before. What I can tell you from past experience with previous evolutions of these workloads is that these organizations don't tend to survive very long. In fact, I struggle to think of even one company that survived in the long run having focused only on a single workload.
Even the ones that talk very much about a single workload, I think you're going to see them start to expand beyond that, because there's only so much they can get out of a single workload, right? Frankly, the largest challenge these entrants have, and where NetApp does very well, is that they have to fundamentally convince a customer not only that AI is a completely different workload, but that the customer needs to move all their data. Actually, move is the wrong word. They need to copy, to duplicate, all their data from where it is today into something new and shiny off to the side, with all the cost that involves, firstly, but also the complexity involved in expanding the manageability.
Maybe in the early days of AI, those things were not really the focus: how do I manage it? How do I support it? That concept of enterprise AI that I mentioned earlier, Jason, is becoming much more important now, right? As folks start to think about what happens after we deploy this: who is going to run it? What happens when something goes wrong at 2:00 A.M. on a Saturday? Who is going to fix it? Those are the sorts of things we tend to see here. We believe we are extremely well positioned, and we see the market moving closer towards us. Yes, they are new entrants, but they all have those same issues I mentioned before. We see enterprise AI and that move in the market as extremely positive for NetApp's prospects in this space.
In the long run, ultimately, we believe that the ability for organizations to leverage data that exists, the latent data that exists in their organization without having to fundamentally change the way they manage data will win out.
Your competitive differentiation relative to some of the—forget about the new players—but compared to the Dells, the HPEs, the Pures? How do you explain why you guys are special or different?
Yeah. I mean, I think firstly, against the new guys, it is about where the data sits today, the ability to manage multiple workloads out of single environments, and the ability to do it seamlessly between on-premises and cloud. If you add all those things together, that's huge. We already own the data for many customers, obviously a huge position of strength in the unstructured data market, which is, of course, a lot of the fuel driving the current generative AI and agentic AI waves. So you can understand that we are in a unique position to extract value from that data and make it ready for AI, much better than I think anyone else can. That puts us in a really strong position against the new entrants.
Frankly, it puts us in a really strong position against the existing players in the business as well. Remember, we talked a lot about the importance of having the flexibility to consume cloud services seamlessly as part of an AI data pipeline. No one else has that relationship with the three main hyperscalers, and that closeness allows us to have that integration with their services. There is probably a lot more, Jason, and we could have a long conversation about it, but if I were to put my finger on it, those would be some of the key points.
Okay. Great. If anybody has any questions, please punch those into the Q&A box. I've got a bunch more, so we can go a full hour, but if anybody does have questions, please input those. I wanted to ask about the AI ecosystem, because you guys have had a bunch of partnerships. Andy, you can probably speak to a lot of those. We all see these press releases, but it's hard, I think, for investors in particular to understand the importance and impact of some of these partnerships. Can you talk about where some of your partnerships stand today and which are the most important?
Sure. As you said, the landscape is changing almost weekly. It is probably one of the most dynamic areas in business today. I'm not going to go through an exhaustive list, but let me talk about a couple of examples. As I mentioned, we've been working with NVIDIA for six years now. We've also, much more recently, stepped up our engagement with Intel. We're working on a number of solutions with Intel, particularly in the inferencing space, where we think there are ample opportunities for customers to take advantage of more ready-made tools to do inferencing. In addition to that, we're working with a few other hardware vendors, but notably a couple of interesting ISVs. One we've had a long-standing relationship with is Domino Data Lab. Domino is an operations or orchestration environment for data scientists who are building models.
One of the things we've done is essentially given data scientists access to the storage environment without having to know anything about storage. For instance, as part of the lifecycle of building models, you will oftentimes need to make up to seven copies of the data. Obviously, there's a lot of inefficiency there. NetApp has a unique technology where we can make essentially zero-footprint copies of data instantly. We've given that ability right inside of Domino Data Lab, so they can save a copy of the data and the model that they can refer back to in the case of model drift or a later audit. Let's say you're building a model for driving.
You're going to want to make sure that you have all of that data archived with the model so that if the NTSB is ever interested in researching how something happened, you can show it to them. Tools for making copies, for sharing that data, for keeping auditable copies of that data, are all built into the platform that the data scientists use every day. Those are the kinds of integrations and technology partners we have formed alliances with. There are also a number of open-source projects we have connected to. We believe that customers should have a choice about which technologies they want to adopt; we do not want to lock customers into any particular choice. In most categories, we offer choices where a customer could say, "I do not necessarily want to leverage Domino Data Lab for doing orchestration.
I'd rather use open-source tools." We have a way for customers to take advantage of that as well. The bottom line is that it's a very dynamic environment. We think we're in a unique position being so close to the data, and by bringing in partners, we give customers the ability to do things that would be much harder if they had to cobble together multiple tools on their own.
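[Editor's note: the "zero-footprint copies" Andy describes are a copy-on-write clone technique. Here is a minimal, generic sketch of that idea; the class and block layout are invented for illustration and do not represent NetApp's actual implementation. A clone initially stores no blocks of its own, reads fall through to the parent, and only blocks written after the clone is taken consume new space.]

```python
class CowVolume:
    """Toy copy-on-write volume: a clone shares blocks with its parent
    and stores only the blocks written after the clone was taken.
    Generic illustration only, not any vendor's implementation."""

    def __init__(self, blocks=None, parent=None):
        self.blocks = blocks if blocks is not None else {}
        self.parent = parent

    def write(self, block_id, data):
        self.blocks[block_id] = data

    def read(self, block_id):
        # Reads fall through to the parent for unmodified blocks.
        if block_id in self.blocks:
            return self.blocks[block_id]
        if self.parent is not None:
            return self.parent.read(block_id)
        raise KeyError(block_id)

    def clone(self):
        # "Zero-footprint": the clone starts with no blocks of its own.
        return CowVolume(parent=self)

    def footprint(self):
        return len(self.blocks)

# A training dataset of three blocks...
dataset = CowVolume({0: b"images", 1: b"labels", 2: b"metadata"})
# ...cloned instantly for an experiment, duplicating no data.
experiment = dataset.clone()
assert experiment.footprint() == 0
assert experiment.read(1) == b"labels"
# Only modified blocks consume new space in the clone.
experiment.write(1, b"relabeled")
assert experiment.footprint() == 1
assert dataset.read(1) == b"labels"  # the parent is unchanged
```

This is also why an archived clone works for audits: the parent's blocks are immutable from the clone's point of view, so the saved data-plus-model state can be referred back to later.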
Okay. Excellent. Thank you. A couple of questions from the audience here. One is around the synthetic data market opportunity. Maybe just to broaden the aperture a little on that topic: take an existing NetApp customer. They have a bunch of your storage systems, and they're going down the path of AI. Conceptually, what do they need? They already have NetApp storage; what do they need to buy new, right? I imagine with AI there's just a lot more data created. Some of it's synthetic, some of it's not. Some of it is, as Andy was talking about, needing more copies, even with that zero-footprint cloning concept. Maybe just talk about an existing customer: what are the different opportunities for NetApp to expand the data storage that customer has with respect to AI?
Yeah. There are a couple of things I would say. Firstly, we certainly see data storage expanding with AI. If you look at the investment profile in total, Jason, I'm not saying that every dollar spent on AI is incremental in terms of budget, but it is incremental, right? We definitely see incremental monies coming into IT budgets to support these new AI workloads. That for sure is an expanding opportunity for NetApp. Because these new workloads are coming in, and because of the way the data has to be manipulated and changed, we see new opportunities for expanding the amount of data that is stored on NetApp. On top of that, we announced at our user conference last year that we intended to build out a set of rich data services to help customers extract the maximum value out of AI. We haven't publicly discussed our approach to monetization of these services at this stage. That is something we are still in the process of determining.
What about the synthetic data market opportunity?
I think that's primarily a partnership opportunity for us. Andy did a really good job of articulating our strategy from a partnership perspective. Customers and organizations are going to need to stitch together and work with multiple vendors to deliver optimal AI outcomes. Our job is to make that as easy as possible for our customers, but also for the folks that sell NetApp, which means our channel partners, so that they can offer complete end-to-end solutions in a much simpler and better-supported way than would otherwise be possible. Synthetic data fits most firmly in that category.
Okay. Another question here from the floor. I think you actually just talked about it: will the primary way that NetApp monetizes AI be through procurement of NetApp storage products, or will there be software stack monetization in the future?
Storage products for sure. We haven't announced anything else at this stage. I think at this point, I wouldn't have an answer for you on that front.
Okay. So it's TBD?
It is TBD, but I'll say in general, if you look at the types of services involved in making AI real for customers, it tends to be a combination of infrastructure products and what I would call services with live service elements to them, right? Obviously, the monetization for those live service elements tends to be different from that for outright-purchase products. As an industry, we see certain parts of these data pipelines typically moving more to a subscription basis. Again, NetApp hasn't made a determination as to which way it wants to go at this stage, but as an industry, that's what we see.
Okay. Another question from the group here. The HDD vendors have publicly said that AI adds about eight points to their growth rate, going from a 15% baseline to 23%. I do not know what the timeframe is there, but I will take this client's word for it. How does NetApp think about the incremental growth from AI relative to your baseline?
We have not given any updates around that. We do believe that AI increases our growth opportunity, but we have not sized it yet.
Okay. Good try. We'll keep trying. The last question, just kind of the wrap-up. For both of you, and curious to hear if you're in full agreement here. When do you expect to see an inflection point in AI-related demand for storage? In other words, without putting a number, as Kris said, when do you think we'll sort of know that AI is like a material tailwind to NetApp's business? Do you think this is going to be in a couple of quarters? Do you think it's more like 6 to 12 months? Do you think it's 18 months? I mean, what's your best guess on sort of when the impact will be noticeable?
We have said on our most recent earnings call that we expect to see benefit in the second half of this fiscal year.
You guys are in April fiscal year, correct?
Correct.
Yes. Okay. So within six months, basically. That's kind of what you're saying.
We'll start to see the opportunity really materialize, and I think there's a long runway in there.
Yeah. Okay. Is that mainly because of the growing maturity of these enterprise organizations in terms of understanding the ROI and the different infrastructure elements that are required? Is that a fair statement? Maybe Russell, you want to take that one?
Yeah. I think I started the conversation, Jason, on that basis that we believe we're at this critical juncture, right, where we move past these early adopters into a much broader set of folks who are willing to jump on the opportunity that AI offers. I think that as an industry, we are all challenged to improve the accessibility of AI to organizations. I think as we continue to succeed in doing that, we are absolutely going to see a demand tick up.
Okay. And I guess you're hosting this call, so that's a decent sign. All right. I think we're at the hour. Kris, thank you for putting this together. Russell and Andy, thank you both for your insights. And thank you to everybody for joining. Have a great rest of the day.
Thanks, Jason.
Thank you.
Thank you, everyone.