NetApp, Inc. (NTAP)

NetApp INSIGHT 2023

Oct 23, 2023

Kris Newton
VP of Investor Relations, NetApp

Okay. Can we shut the door? All right. Thank you everyone who's here with us today and on the webcast. We appreciate your time joining the NetApp tech session here at Insight. Just as a reminder, we won't be covering any financial information today. We won't be giving you an update on the quarter. This is all about understanding our technology and our competitive advantage, and how customers use our technology better. So with that, I'm gonna kick it off with the safe harbor. So my apologies, it's a bit long, and I need my glasses to read it. Each of the 2023 Insight Financial Analyst Tech Sessions may contain forward-looking statements and projections about our strategies, products, future results, performance, or achievements, financial and otherwise.

These statements and projections reflect management's current expectations, estimates, and assumptions based on the information currently available to us, and are not guarantees of future performance. Actual results may differ materially from our statements or projections for a variety of reasons, including macroeconomic and market conditions, global political conditions, and matters specific to the company's business, such as changes in customer demand for storage and data management solutions, and acceptance of our products and services.

These and other equally important factors that may affect our future results are described in reports and documents we file from time to time with the SEC, including factors described under the section titled "Risk Factors" in our most recent filings on Form 10-K and 10-Q, available at www.sec.gov. These forward-looking statements made in these projections are being made as of the time and date of the live presentations. If the presentations are reviewed after the time and date of the live presentation, even if subsequently made available by us on our website or otherwise, the presentations may not contain current or accurate information.

We disclaim any obligation to update or revise any forward-looking statement based on new information, future events, or otherwise. So with that out of the way, let me just give a quick recap of the agenda. So George Kurian is gonna kick us off with some brief statements about what you can expect here at Insight. Then we're gonna have a couple of guys from our AI team come in. It's gonna be all open Q&A. You can ask them any questions you want. Then a couple of people will talk about enterprise storage. We'll take a short 20-minute break and reconvene at 11:05 Pacific Time.

Then we'll have someone from product marketing come up. He can answer any questions you have about anything. We'll see, you know, we'll see if you can really stress him on that. Then a couple of our cloud guys will come up, talk about the breadth of our cloud solutions, both storage and otherwise. And then we'll have a customer panel with a couple of real NetApp users. And again, I encourage you, both in the room and on the webcast, to ask questions. We'll be fielding questions through the webcast. It really is your time to talk to the technical experts here at NetApp.

Once the session concludes this afternoon, you're free. Those who are here are free to wander about the Insight experience. The general session keynotes will start at 3:00 P.M. this afternoon. Please go to those if you're still here, and then there'll be a reception that you can attend from five to seven tonight. So hopefully, everyone who's here can do all of that and make it worth their while. Again, really appreciate everyone joining us and coming. And with that, it is my honor to introduce George Kurian.

George Kurian
CEO, NetApp

Thank you, Kris. Thank you, and welcome. To those who are on the live stream, thank you for joining us today. To those of you here in person, thanks for making the trip. This is our first Insight in person since 2019, and the theme of the conference is very germane to the industry context in which we operate. It's about turning disruption into opportunity. What we've seen over the last few years is that the range, impact in terms of scope and scale, and pace of disruption is accelerating. Range: geopolitical, macroeconomic, supply chain. You know, you name it, you've got it, right? The world is transitioning from a relatively stable geopolitical architecture in the post-war era to the dawn of a potentially new global economic and political architecture.

In terms of the pace of disruption, you're seeing the pace in terms of the ongoing impacts, as well as the scale and scope of those impacts accelerating. And what we have seen, as well as others have seen, is that those businesses and organizations that are data-driven are better set up to understand disruption and respond and transform it into opportunity than anyone else. If you look at the research conducted by the Boston Consulting Group and Google, they have shown that organizations that are data-driven were 5% better positioned to drive revenue or cost and productivity improvements or rapid response to disruptive change in 2022. That gap has widened to almost 14% in 2024. So the competitive advantage of being data-driven is wide and accelerating, and that is, of course, before the impact of AI.

AI, you know, organizations that are hybrid cloud and data-driven are even better positioned to capitalize on the AI trend, and we will share more about that tomorrow. What you'll also hear us talking about is that being data-driven is not an easy thing for customers to do. You know, there have been two widely adopted approaches, neither of which has been entirely successful. One, top-down: "Hey, build a monolithic data lake and dump all of my data in it." There was Hadoop before data lakes, and there were data warehouses before Hadoop. Those have not entirely succeeded, and they struggle to become enterprise scale as the nature of data and the need to derive value from those large investments continue to get pressured.

On the other hand, the other approach, which is grassroots, "Hey, let a thousand experiments bloom and let every department try its own approach," has also not scaled. They have typically gotten stuck in proof of concept, where you've got a great proof of concept, but taking it from proof of concept to production is very hard to do. And so what we will talk about is that you need three things to be data-driven. One of them is an integrated data organization, where all of the people deciding your data strategy and doing data engineering and data science and business analysis work together. They don't need to report in one place, but they need to work together. And you need to have a clear-sighted view of what data and data projects have the most important business impact.

That's number one. Second, from an operating model, you need to treat data as a product, independent of the underlying systems. Today, for example, what you find is businesses operate and manage systems and business processes, right? So you go and talk to a customer, they have a CRM system, and they have an ERP system, and they have a supply chain system, and they have a BI system sitting on top of those systems, and then they got a big data system sitting on the side, and on and on and on. What none of those systems have is a complete view of a domain, for example, a customer or an employee or a vendor, and that integrates not only data from your transactional systems but from external data sources.

That's increasingly important because to make good data decisions, especially to power AI models, having good data is a major source of competitive advantage for AI. You know, that's clear from work going on in the industry, work that one of our guest speakers is closely associated with. We will, for example, make the statement that AI runs on data, and data runs on NetApp, especially unstructured data, which is 80% of the world's data and is growing much faster than structured data, right? That's the second: you've got to think about data, as an operating model, as a product, independent of the underlying systems. Then we'll talk about a modern data architecture. What's important there is that we are pragmatists. We say that you cannot evolve and transform every part of your data architecture.

That's just not feasible from a risk, cost, speed standpoint. So our position is that you want to build the right balance of transformation and modernization in some parts of your data architecture, with stability, evolution, integration in other parts of the architecture, because it gives you the right balance of flexibility and speed, risk, and cost. An important foundation of that modern data architecture is what we call an intelligent data infrastructure. That combines hybrid multi-cloud data storage with integrated data services and AI-powered cloud operations, monitoring, optimization, and automation. It builds on what we set out with the data fabric. You know, in 2013, we stood up on stage at Insight and said the world would be hybrid and multi-cloud.

That, you know, you would need to manage data across not only your data center but all the places that you would put your data, meaning all the leading public clouds and software-as-a-service solutions. We have delivered upon that. Today, our technology is a native service in all of the leading public clouds, and that position allows us to now provide much more capability for all the modern applications that run on it. The second is we recognize that the needs of clients have moved beyond just the hybrid multi-cloud infrastructure. For example, in data services, we delivered on the concept of portability and flexibility in terms of data management, but you see increased needs for security and governance of your data.

And so we've made important steps forward to bring what we think is the world's most secure data storage, whether it's in your data center or on the leading public clouds, backed up by, you know, guarantees to make your data always available—that's what we call secure by design, and we will have one of the industry's leading spokespeople for secure by design, Director Easterly, with us today.

We talked about the fact that AI is powered by data, and data runs on NetApp, and so we have an awesome demo that combines some of the world's leading GenAI tool chains in the public cloud with our data management solutions, so that you can version data, and you can integrate your on-premises data with the world's leading large language models in a way that's secure and protected, that allows you to comply with increasing mandates for data lineage and protection of private data. All of those tool chains, we're ready. We've got awesome stories for you.

And then, of course, one of the important things that we will tell our clients is that in a world of constrained resources for IT and talent and the need to move faster, it's even more important to build silo-free architectures. We'll talk about Hadoop, for example, where everybody said, "Hey, let's build a silo that combines applications and infrastructure and storage," and it sounded awesome for a couple of years, and now everybody is going: "Oh, man, I got to re-platform out of Hadoop, which means I got to rip apart my analytics landscape, my computing schedulers, my operating system environments, and of course, my storage." Right? And hyperconvergence is the modern version of Hadoop.

So we'll talk about how you can build a silo-free infrastructure that's hybrid multi-cloud by design, that supports the needs of any app, any data type, any way you want it. And so I'm excited for our product teams to talk to you about the announcements we have. We have a rich innovation pipeline as well, and you'll hear more from us over the next few quarters as well. So stay tuned. Thanks for coming. I'm excited about having you all here, and certainly excited about the real customer problems that we're solving with our technology and the partnerships that we formed over the last several years. Kris, back to you.

Kris Newton
VP of Investor Relations, NetApp

All right. Am I on? Okay, I'm on. All right. Thank you, George. We appreciate your time today. I'm glad we got a few karate chops out of you. It's good to see that energy and impact. All right, so next up, I'm going to invite Russ Fishman and Andy Sayer from our AI group. And you guys are welcome to sit or stand, whichever you're most comfortable with.

Russ Fishman
Senior Director, Field Advocacy & Solutions Technology, NetApp

I think we'll all sit.

Kris Newton
VP of Investor Relations, NetApp

Okay.

Russ Fishman
Senior Director, Field Advocacy & Solutions Technology, NetApp

You guys-

Kris Newton
VP of Investor Relations, NetApp

I'll join you up here.

Russ Fishman
Senior Director, Field Advocacy & Solutions Technology, NetApp

Yeah, perfect.

Kris Newton
VP of Investor Relations, NetApp

All right. And so while you guys are pulling all your AI questions together, we figured we'd get this one off first, because I know it is top of mind for everyone. I'm going to ask Russ and Andy to introduce themselves and say a little bit about what they do, and you guys can start queuing up your questions.

Russ Fishman
Senior Director, Field Advocacy & Solutions Technology, NetApp

Perfect. Thanks, Kris. Well, thanks for having me today. Russ Fishman is the name. I'm responsible for product management at NetApp for our solutions business globally, most importantly, AI, and I'm joined here by Andy. Andy, you want to introduce yourself?

Andy Sayer
Director of AI Partnerships, NetApp

Hi, everybody. My name is Andy Sayer. I run our alliances for our AI go-to-market. I've been at NetApp for about 10 years, and lovely to be here today. Nice to meet you all.

Russ Fishman
Senior Director, Field Advocacy & Solutions Technology, NetApp

Perfect. Well, listen, I think we talked about having five minutes of sort of an overview, and then we'll open it up to questions. So, you know, I'll start by saying, obviously, the market is moving very fast in AI, but it's not a new thing for NetApp, right? NetApp has been focused on the AI market for over five years. We have hundreds of customers involved who have invested hundreds of millions with NetApp in AI. And we're in a sort of, I would argue, a pretty unique position in that, rather than just taking our products and applying them to the problem statement of AI, what we have actually been doing is we've been investing in our products to make them AI-ready, and we've been doing that over the last few years.

So what we have now is this fantastic portfolio that can be applied in lots of different ways, which puts us in a really different position from others, who I think will just put a sheen on their products and say everything's AI-ready. We've really built our products to help our customers adopt AI. You probably heard George talk about, you know, three things that the company's focused on. I'm obviously here primarily to talk about how we help our customers adopt AI, and really, the focus for NetApp is exploitation of data more than anything else, so this concept that customers are sitting on data that has value they are unable to derive without the help of AI. That's what we're really there to help them with.

We're obviously also investing in AI in our products. That means using AI to make our products better, things like anti-ransomware protection, that sort of stuff, and obviously making us much more efficient as a company when it comes to operations and development, et cetera, et cetera. But in terms of helping our customers build, you know, there's a few things that we do that I think no one else really does in the business. Firstly, AI isn't really focused on any one particular deployment methodology, so it's not really about on-prem, it's not really about cloud, it's really about all of it.

NetApp has this very unique position in the market, which is that we have a leading storage operating system, which is available on all the clouds and on-prem. It makes it much easier for our customers, regardless of where their data sits, where it's being generated, where it needs to be used, we bring all those worlds together. That's really important when we're not talking to IT, because typically, when we're talking about AI, we're not talking to IT, we're talking to data scientists, we're talking to lines of business, we're talking to CDOs and data owners.

These are unique and different buying centers from those NetApp has typically addressed in the past, and we've really worked hard to craft our products and our message to go after that, which is why we have been successful. The success, though, is not just about what NetApp does ourselves; it's also about how we work with our partners. And so I was going to ask Andy: would you just speak for a few minutes about what we're doing with our partners and how our partner ecosystem has helped us here?

Andy Sayer
Director of AI Partnerships, NetApp

Yeah, absolutely. As Russell said, part of the success for NetApp in AI has been the co-innovation that we've done with our partners. Our leading AI partnership is with NVIDIA. NVIDIA and NetApp have together delivered five or six unique solutions that we've co-developed, co-innovated, and brought to market, and as Russell said, that have been adopted by hundreds of customers. In addition to NVIDIA, we also have other partners in AI. We work with Domino Data Lab, for instance. Domino has an MLOps platform that is available in both cloud and on-prem and works in a hybrid manner, so it's very complementary to NetApp's offering. In addition to that, we work with Run:ai.

We've worked with several of the compute vendors, including Cisco, Lenovo, and Fujitsu, and of course, we work with the hyperscalers as well. So all of this ecosystem comes together to bring solutions that work for our customers. As Russell was saying, you know, when you build AI for an enterprise, oftentimes you're drawing on data that's stored in one cloud or another, sometimes multiple clouds. Often it's on-prem, perhaps in multiple locations as well. And the key to successful AI for many companies is to be able to bring all that together, to be able to train models and deliver the kinds of solutions that they're building. So that's a quick overview. And when we get to Q&A, we'll be happy to talk more specifically about those.

Kris Newton
VP of Investor Relations, NetApp

All right, well, thanks, guys. I appreciate that overview. I think it was a great level set for what we do in the world of AI. So I will look out to everyone, and we got some questions.

Tim Long
Managing Director, Equity Research, Barclays

Oh, hi. I'm sorry. It's Tim Long with Barclays. Just a two-part question here: could you talk a little bit about, as we move into this AI world, the quantity of storage and maybe the quality of storage? Obviously, some of the largest customers are not using NetApp or Dell. They're using, you know, white box solutions. So can you talk a little bit about how you see this transition to AI affecting, you know, vendors like NetApp? And if you could just touch on the hardware side as well as the software side. I'm sure there's a pretty good software play as well.

Russ Fishman
Senior Director, Field Advocacy & Solutions Technology, NetApp

Yeah. What I would start off by saying is that, you know, NetApp really focuses on the entirety of the life cycle of data. So that's the first thing. So, I think typically people will talk about this concept of speeds and feeds and pushing data to GPUs. You'll hear a lot of folks talk about that. That's really just one part of the life cycle, though. So, NetApp's really thinking about where the data is being generated, how it's being organized, how it's being unified, how it's being prepared, how it's being then stuffed into GPUs for training. Obviously, a very important part of it, but that's just one little bit of it. And then, of course, what happens after that?

So what happens when, you know, the results come back and the validation of that model happens, the iterative nature of that life cycle of AI training. So there isn't really a single answer to what you're asking. I think if you just focused on the bit where you stuff data into GPUs, you'd be thinking about this market very narrowly. But what we're seeing is that the entirety of that life cycle is what's driving revenue for us.

Kris Newton
VP of Investor Relations, NetApp

All right. Sidney, over here.

Sidney Ho
Equity Research Analyst, Deutsche Bank

Thanks. Sidney Ho with Deutsche Bank. Andy, you talked about this co-innovation with NVIDIA, with five or six different products. Can you expand a little bit? How is that being adopted by customers? What's the attach rate? Can customers use other vendors during the process?

Andy Sayer
Director of AI Partnerships, NetApp

Sure, absolutely. So, you know, we started out five years ago with NVIDIA with a reference architecture. Essentially, what we did was we paired our storage technology with NVIDIA's server technology, the DGX platform, and their switching, the Mellanox switching, which they acquired. So, together, that made what we call ONTAP AI, which is essentially our operating system managing AI workloads in this converged infrastructure stack. So that was the basis for many of our customers initially getting started with us.

Over the years, we've made some variations to that based on the way customers are adopting that technology. So, for instance, some people don't want to build that infrastructure on premises. It's expensive. It requires a lot of power and cooling that most corporate data centers are not equipped to deliver, and so, customers are looking for alternatives to that.

So one alternative is what's now called DGX Cloud that NVIDIA offers. It's currently available in OCI, but it will be moving to Azure and GCP and other clouds in coming months. So customers can then consume the same kind of architecture but not have to have it on-prem themselves. They're essentially renting it. Other customers are looking for a model where they can have their own dedicated equipment and, once again, not put it on-prem, but put it in a colo like Equinix, which can then be connected to all the clouds and allows them to control their own equipment while letting Equinix handle the daily operations of that equipment.

So those are just three variations of a delivery model for our reference architecture. In addition to that, we work with NVIDIA on their SuperPOD, which is, of course, for large-scale AI training, often focused on large language models and other very large training situations. And between all of these, as we said earlier, we have literally hundreds of customers running on those today.

Kris Newton
VP of Investor Relations, NetApp

All right. Thank you. I think, Gloria, there's a question from the webcast? Okay. Never mind, then. Oh, all right. [crosstalk]

Victor Chiu
Equity Research Associate, Data Infrastructure and AI Semiconductors, Raymond James

Hello. Victor Chiu from Raymond James. Could you remind us which platforms specifically are strategically positioned to target, you know, the AI/ML workloads? Is it a combination of the A-Series and the StorageGRID unstructured solutions? And then, you know, in general, where do you see storage falling in the spending priority relative to compute and, you know, GPU hardware acceleration when it comes to building out AI solutions?

Russ Fishman
Senior Director, Field Advocacy & Solutions Technology, NetApp

Yeah, you're probably going to hate the answer, but it's everything. But it really is, right? And again, back to that whole concept of the life cycle. So everything from... Honestly, we still see hybrid disk, we see Flash, and that's on the sort of unstructured NFS side. We see Object. We obviously have a file-object duality on our ONTAP systems. We also have the StorageGRID solution as well. And then we see a lot of uptake of our 1P cloud solutions as well. So it's really all of it. And, you know, over time, I don't see any one of them really pulling ahead. I think maybe we'll see more capacity Flash over time.

You know, that would be my guess at an industry level, as a focus, because a lot of that pipeline isn't stuffing high-speed data into GPUs. That's part of it, but I kind of think about that as the last mile.

Andy Sayer
Director of AI Partnerships, NetApp

I would offer one other piece here that I find very interesting, which is, you know, AI is all about data. I mean, that's obvious. Everybody understands that when you bring data together and are able to apply machine learning to it, you can get insights that haven't been available before. And yet, when companies go and say, "I want to do AI," what's the first thing they do? They go out and buy GPUs, and GPU servers are generally the first thing that customers think about. They begin to deploy those GPUs, and they realize, "Oh, my goodness, we can't keep these machines utilized."

So we need to have a data infrastructure that's going to allow us to keep those machines utilized, so it's worth the $500,000 per box that it's going to cost them to acquire those. So what happens very quickly in many of our sales cycles is we get brought in almost immediately following the purchase of GPUs, GPU servers, and then we help them build out a data strategy that works for AI for those companies.

Russ Fishman
Senior Director, Field Advocacy & Solutions Technology, NetApp

It's just worth mentioning, I think someone said, oh, you know, white boxes and that sort of stuff. The reason that people come to NetApp is for the data management. That's why we win, and we win consistently because of that.

Andy Sayer
Director of AI Partnerships, NetApp

Yeah.

Kris Newton
VP of Investor Relations, NetApp

Just as a quick translator, Russell mentioned 1P cloud services. For those of you who aren't fully versed in the NetApp lingo that is the first-party cloud services, so the Azure NetApp Files, AWS FSx for NetApp ONTAP, and then most recently, Google-

Russ Fishman
Senior Director, Field Advocacy & Solutions Technology, NetApp

Cloud

Kris Newton
VP of Investor Relations, NetApp

-Cloud.

Russ Fishman
Senior Director, Field Advocacy & Solutions Technology, NetApp

NetApp-

Kris Newton
VP of Investor Relations, NetApp

NetApp Volumes. All right.

Andy Sayer
Director of AI Partnerships, NetApp

And ZMB.

Kris Newton
VP of Investor Relations, NetApp

Let's get David here because he was-

David Vogt
Managing Director and Senior Equity Analyst, UBS

Yeah. Thanks, guys, for doing this. David Vogt at UBS. Just maybe as a follow-up, can you kind of explain sort of what the revenue opportunity or the software mix looks like as we move from traditional storage use cases to, you know, multi-cloud AI use cases, you know, higher GPU utilization, to your point about adding on a, you know, half-a-million-dollar box? Should we expect a much more robust software story going forward as this permeates to multiple different end customers and use cases, relative to where we were over the last five to 10 years?

Russ Fishman
Senior Director, Field Advocacy & Solutions Technology, NetApp

Well, I mean, so back to that point about data management, right? If you look at NetApp's portfolio, and you think about BlueXP, for example, as our unified control plane, if you think about our essentially software-defined storage products, if you think about other things like Instaclustr, for example, and what was known as Cloud Data Sense but is now known as BlueXP Data Classification, we're starting to see a lot more pickup in those areas, right? And that makes sense because, you know, when AI started, it was very much a data science conversation, right? So LOBs, lines of business, data scientists, they're the ones we were really engaging with. What you started to see was, you know, CDOs and data owners getting very concerned about how their data is being used, right?

Almost like a moderator in a nuclear reactor, right? They're coming in with their carbon rods. They're slowing everything down. And there's this sort of push and pull between the data scientists, who want to get out there and move quickly, and the data owners, who want to slow everything down. What's interesting, of course, is that NetApp has this amazing portfolio of capabilities that can address both sides of that, right? So what we're starting to see is a lot more engagement on the data owner side as well. So, you know, my expectation is that more of our data management capabilities will be consumed over time.

Andy Sayer
Director of AI Partnerships, NetApp

Yeah, and I think the other factor to consider here is regulation. We fully anticipate that regulation is going to hit the industry, and customers are going to need to comply with that regulation. We're encouraging customers to start looking at that today and using tools, as Russell mentioned, that, you know, can automatically filter out personally identifiable information, for instance, so that it never makes it into the models. Or to create auditable models, so that if a customer or the government, for instance, wants to go back and look at how your AI came up with a particular answer, you have a stored image of that model with its data at the time it was created, which we think is going to be very important moving forward.

Russ Fishman
Senior Director, Field Advocacy & Solutions Technology, NetApp

Yeah, and there's... Last thing I'll just say is that, regulation compliance, I guess you hear that all the time. What we hear more from customers than anything else is commercial concerns about the commercial sensitivity of data and the reputational risk of having that data leak. It wouldn't necessarily be a legal issue in some cases, but it would be, obviously, a reputational issue.

Andy Sayer
Director of AI Partnerships, NetApp

Absolutely.

Kris Newton
VP of Investor Relations, NetApp

All right, let's go to Gloria with a question from the webcast.

Operator

Hi, this is a question from Aaron Rakers from Wells Fargo: Does the DGX Cloud solution utilize NetApp ONTAP AI as primary storage back-end versus other alternatives?

Andy Sayer
Director of AI Partnerships, NetApp

So I'll take that. So, currently, no. Currently, DGX Cloud is leveraging alternative storage for its scratch space. This is the area that the GPUs use as extra space beyond their internal storage. What NetApp is doing is working adjacent to that to help connect the data from multiple sources, multiple clouds, and on-prem, to bring that data together to be able to train those models.

Russ Fishman
Senior Director, Field Advocacy & Solutions Technology, NetApp

Yeah, back to the life cycle point again, right? Which is that, you know, scratch space, in general, is just the last mile, right? It's ephemeral data, which means that it's not stored long term, it's not protected. The data management capabilities are generally not included there. What we're finding from customers is that that's not a complete solution. Yeah.

Kris Newton
VP of Investor Relations, NetApp

All right, how about Meta in the back?

Meta Marshall
Executive Director and Senior Equity Analyst, Morgan Stanley

A couple of questions. Maybe first, you know, you guys talked about, okay, a customer is getting an NVIDIA server and then figuring out that they need to, you know, have a data management solution behind that. But in many cases, they haven't even gotten the NVIDIA server yet. They're still waiting. And, you know, same with kind of the regulatory conversation. So I guess, just where are your customers in their conversations, in thinking about this? And then maybe just as a follow-up: over the next five years, is the greater opportunity for you guys on the data management to prepare for training, or is it eventually on inference?

Russ Fishman
Senior Director and Field Advocacy & Solutions Technology, NetApp

That's some good questions there. Well, I mean, I'll take the second part first there, yeah, and we'll go back—we'll bounce back from there. If you think about the life cycle of AI, right, inferencing is the runtime, essentially, right? It's a little iterative, so maybe I'm oversimplifying it, but you know, training is development. There's been a lot of focus on training in the last few years because everyone's been working out how to exploit AI. But we're moving rapidly into a new phase, and that's the operationalization of AI, right? And operationalization is all about inferencing. It's interesting because it's a completely different buying center. So in some of our more mature customers, especially in financial services, for example, who have already heavily adopted AI, AI is becoming part of their core business processes.

When something's part of your core business processes, you're worried about all the same things that you would be with any other enterprise service. So that's manageability, observability, operationalization, supportability, right? And that's a different buying center, but it's very well aligned with NetApp's traditional messaging, right? Our products are designed to be easy to manage. They're readily accepted by IT departments. They're well situated in data centers, et cetera, et cetera. So we do see a huge opportunity on the inferencing side. Inferencing is going to be 85% of the runtime, if you will, the life cycle, but not necessarily 85% of the revenue. I'm not saying that. Before Kris slaps me. I'm not saying that. Obviously, it's a very different set of performance characteristics and storage requirements around that. You want to take the first-

Andy Sayer
Director of AI Partnerships, NetApp

Yeah, so, the other thing I would say is that, you know, we mentioned we've been doing this for five years, and there are a lot of customers doing AI and machine learning across a wide variety of use cases where there are DGXs and other systems deployed, and data management has been an important piece of those deliverables. So we could talk about use cases in manufacturing for defect detection. We can talk about computer vision cases.

We can talk about a whole bunch of different cases across healthcare and life sciences and financial services, where customers have deployed these solutions. Remember that these large language models have really only captured our imagination here for the last 10 months or so. While there is a lot of interest and a lot of noise about that, AI's been around for a while, and there's been a lot of customers that have deployed these systems.

Russ Fishman
Senior Director and Field Advocacy & Solutions Technology, NetApp

I would just say one other thing, you know, in terms of specifically DGX, right? So DGX is a relatively small part of the training market. Mostly, you know, NVIDIA's been quite open that they are selling into OEMs, like the Dells and the HPEs and what have you. And we sell successfully into all of those environments. We're not really tied to NVIDIA servers. We work with all of the big manufacturers. Last thing I'll just say is that, you know, some of NVIDIA's most advanced training GPUs have... You know, it's well understood that they have a significant supply chain issue right now relating to the advanced process node they're using through TSMC.

There are a bunch of other GPUs out there that are much more readily available that, you know, NVIDIA's been openly pushing people towards, including things like the L40S and what have you. So we don't see the market gummed up at all, if that's what you're asking. Not at all.

Kris Newton
VP of Investor Relations, NetApp

All right. Irvin, right down here.

Irvin Liu
VP, Evercore ISI

Hi, thank you. This is Irvin Liu with Evercore ISI. So do you see share gain potential, you know, an opportunity presented by AI, or are most organizations going to stick with their incumbent vendors and avoid a major upgrade or a major transformation prior to jumping into the AI journey?

Russ Fishman
Senior Director and Field Advocacy & Solutions Technology, NetApp

I think I'll start off by saying I think we're extremely well positioned. Again, back to that comment that we've been building for AI for five years, or five and a half years. So I think we have a portfolio that makes us competitive in non-traditional NetApp accounts, in non-NetApp customers. That's how I would describe it. Again, because of our portfolio, because of our capabilities. So yeah, I mean, I think there's, you know, always the opportunity out there.

Andy Sayer
Director of AI Partnerships, NetApp

I would just add to that. I think, you know, you're getting more specific. Customers tell us they want to be able to aggregate their data from multiple sources. It's difficult for many of our competitors to do that. It's something that we have put a lot of investment into and are able to bring that data together for customers. So we think we're very well positioned to help in this hybrid world.

Russ Fishman
Senior Director and Field Advocacy & Solutions Technology, NetApp

And lastly, I'll just say, it's also the engagement strategy, right? So, you know, firstly, the ability to go and converse with a data scientist, understand what their life is like, and be useful to them. And understand that, intrinsically, data scientists, for example, don't care about infrastructure. That's not an interesting thing to them, right? So, you know, the first question is, who are you and why are you here? But we've become very good at connecting the challenges that a data scientist has in accelerating AI adoption and development to our underlying value. That's how I'd describe it.

Kris Newton
VP of Investor Relations, NetApp

All right, why don't we grab Steve here in the middle?

Steven Fox
Founder and CEO, Fox Advisors

Thanks. Steven Fox with Fox Advisors. I think on the last conference call, George talked about how the real uplift in AI comes with the industrialization of the applications, and you touched on it a little bit with healthcare and defect detection, stuff like that. Can you just sorta talk about how you think that develops, 'cause that's really when the S-curve takes off? What do you envision, like, over the next few years, where those applications are most likely to develop and develop quickly?

Russ Fishman
Senior Director and Field Advocacy & Solutions Technology, NetApp

Yes. I'll talk from an industry perspective, primarily. So what's really interesting about our experience is that, it's not really tied to any particular industry vertical. And actually, there hasn't been... We haven't seen a lot of commonality in use cases. The engagement model with customers has typically been around, let's understand what data you have and how we can exploit it, and then find, you know, the right application that would actually be useful, is kind of how I'd describe it. That has started to change at an industry level. So we're starting to see a number of horizontal use cases appear, come to the fore.

Probably the most obvious one is based on generative AI and chatbots, specifically customer service chatbots, which, you know, I think what's really happened is that it's moved from a, "This is a way for customers to innovate," to a, "This is required for us just to stay competitive." And so if you talk about the S-curve, that's where I think the S-curve is hitting, because it's hitting every single customer. They want to talk to us about generative AI, and they wanna talk to us about those sorts of use cases.

And those become very repeatable, mostly because of the use of these pre-trained models, what they call foundational models. So we're starting to see a lot of that. So yeah, I think, yeah, from an industry perspective, I think we're just at that point, that inflection point right now.

Andy Sayer
Director of AI Partnerships, NetApp

I could also add that I was fortunate to attend the TED AI Conference in San Francisco last week. Did anybody get to go out to that? It was a fascinating conference, speakers ranging from Andrew Ng to Stephen Wolfram. One of the things that I learned there was that there are 2,700 funded startups for large language models right now. I was blown away. About 35 of them have done, you know, foundational models. The rest of them are all building on top of OpenAI and Llama 2 and a bunch of other models, and going very specific with the particular area of focus they're going after. So we're gonna see an explosion of large language models over the next couple of years, that are gonna be very tailored to specific industries and specific use cases.

Kris Newton
VP of Investor Relations, NetApp

All right, we have time for one last question from Mehdi. It's gotta be a quick one, though.

Mehdi Hosseini
Senior Equity Research Analyst, Susquehanna International Group

Make it quick. Thank you. Mehdi Hosseini, Susquehanna International Group. Just as a follow-up on the decision-makers and implementation: you talk about data scientists, and I think a lot of the points you're making are referring to the folks that are involved in training the model. What I wanna learn from you, in the next 15 seconds, is: Who is actually going to deploy this at the enterprise level? Who is actually gonna be working with CIOs and CFOs in deployment of these trained models?

It's great that there are 200 startups, but to go from a startup to actual use, to actual deployment, to actual realization of improved productivity is gonna be the key. And to me, quite frankly, a chatbot has been the most frustrating experience, and I compare it to Alexa. Alexa didn't really lead to significant growth in storage, but deploying AI for real productivity improvement, I think, could be key. I just wanna learn from you, how are we gonna go through that journey? Who are these decision-makers gonna employ to do that?

Russ Fishman
Senior Director and Field Advocacy & Solutions Technology, NetApp

Yeah, there's a lot there to unpack, so I'll try and make this quick 'cause I'm gonna get in trouble otherwise. So firstly, yeah, IT. IT are the folks that are operationalizing and deploying this stuff, right? Ultimately, they're the ones that are tasked with waking up at two in the morning if something goes wrong and getting it fixed. So there's a lot of focus on getting IT ready to do this stuff. But you know, the data scientists are helping make the decisions, but the lines of business have the checkbooks, right? And as I said, the CDOs and data owners are the ones that are kind of holding it all back a little bit because they're concerned about regulatory compliance, commercial concerns, et cetera, et cetera. So, yeah.

Kris Newton
VP of Investor Relations, NetApp

All right.

Russ Fishman
Senior Director and Field Advocacy & Solutions Technology, NetApp

I know-

Kris Newton
VP of Investor Relations, NetApp

Well, I have to give you the hook now.

Russ Fishman
Senior Director and Field Advocacy & Solutions Technology, NetApp

Yes, okay.

Kris Newton
VP of Investor Relations, NetApp

Thank you, guys, so much.

Russ Fishman
Senior Director and Field Advocacy & Solutions Technology, NetApp

Thank you.

Kris Newton
VP of Investor Relations, NetApp

I really appreciate you coming. I know there are more AI questions. Just reach out to IR, we can help hook you up.

Russ Fishman
Senior Director and Field Advocacy & Solutions Technology, NetApp

Yes, thank you.

Kris Newton
VP of Investor Relations, NetApp

After we announce earnings next quarter.

Andy Sayer
Director of AI Partnerships, NetApp

Thanks, Veronica.

Kris Newton
VP of Investor Relations, NetApp

All right. Now, I am super excited to introduce Octavian Tanase and Sandeep Singh, who are going to talk about enterprise storage. So guys, come on up. You are welcome to sit or stand, whatever you're most comfortable with. It's entirely up to you. Whatever you want. All right, standing. Okay. So... Oh, sitting. Okay.

Sandeep Singh
SVP and GM Enterprise Storage, NetApp

Sitting.

Kris Newton
VP of Investor Relations, NetApp

Peer pressure.

Sandeep Singh
SVP and GM Enterprise Storage, NetApp

Sitting it is.

Kris Newton
VP of Investor Relations, NetApp

Okay. All right. Well, why don't we kick it off with each of you introducing who you are and what you do at NetApp?

Sandeep Singh
SVP and GM Enterprise Storage, NetApp

Hi, everybody, I'm Sandeep Singh. I'm the Senior Vice President and General Manager for Enterprise Storage. I've been with NetApp now coming up on 11 months. Why I'm super excited about being here at NetApp is that we are enabling unique outcomes for our customers, across the spectrum of helping customers save money with the lowest cost of storage over the data life cycle.

Helping them simplify at scale, and through that lens, increase productivity, lower risk, and helping them become more secure and protect against ransomware and cybersecurity attacks, and through that lens of having the most secure and protected storage infrastructure, helping them become more sustainable as well, and then ultimately harness the power of cloud and AI as and when they're ready. That's why I'm super excited to be here at NetApp, and to be here with you.

Kris Newton
VP of Investor Relations, NetApp

All right, Octavian?

Octavian Tanase
SVP of Hybrid Cloud Engineering, NetApp

Good morning. My name is Octavian Tanase. I'm the engineering guy, so I'm ready to answer your question, and your question, and your question again, you know, from the perspective of somebody in engineering.

Kris Newton
VP of Investor Relations, NetApp

Perfect. Some of you might recognize Octavian. He's presented at these events for us in the past. He's been at NetApp, not as long as me, but a good piece.

Octavian Tanase
SVP of Hybrid Cloud Engineering, NetApp

Well, you know, you guys could have bought more, you know, I would have been retired, but here I am back here.

Kris Newton
VP of Investor Relations, NetApp

All right, so with that, we're going to open it up to questions. Otherwise, I'm going to have Octavian re-answer the questions that were asked and-

Octavian Tanase
SVP of Hybrid Cloud Engineering, NetApp

But there was an awesome question. I'm sorry, I didn't catch your name.

Kris Newton
VP of Investor Relations, NetApp

Mehdi Hosseini.

Octavian Tanase
SVP of Hybrid Cloud Engineering, NetApp

Mehdi. Okay. So,

Kris Newton
VP of Investor Relations, NetApp

Just behind, in the back.

Octavian Tanase
SVP of Hybrid Cloud Engineering, NetApp

Right. So-

Mehdi Hosseini
Senior Equity Research Analyst, Susquehanna International Group

Can I ask my question?

Octavian Tanase
SVP of Hybrid Cloud Engineering, NetApp

Please.

Kris Newton
VP of Investor Relations, NetApp

Sure.

Mehdi Hosseini
Senior Equity Research Analyst, Susquehanna International Group

Oh. I just want to know, like, if enterprises are going to go hire consultants, or if Dell is going to build out their consulting, how is that going to play out? Because I think, ultimately, enterprises are going to limit their IT staff and rely on somebody else to come in and tell them how to deploy it. And tell me if you disagree or how is that going to play out.

Octavian Tanase
SVP of Hybrid Cloud Engineering, NetApp

I actually don't know who hires consultants at NetApp. I think it's mostly the CEO or the board. So let me tell you what I think is happening in engineering, right? You know, in any large organization, your ROI comes from making your engineers more productive, right? And generative AI, it's a great technique, you know, for that, right? So there's a concept of a copilot. If you're somehow able to introduce an AI assist into the edit-compile-debug, you know, process that an engineer goes through, that engineer will be able to write better code, you know, more secure code, more effective code. Right?

So I believe that any large enterprise that has a pool of engineers will want to use generative AI. The question is, can that be done in a secure, less risky way, right? So let's say you have some proprietary information, you don't want to lose copyright, right? Because all of a sudden there's a generative AI engine, you know, that uses that. So, we believe that there are safe ways to do that, right? I think, first of all, you can create a taxonomy for your products and code and say, what is core versus context? Let's say, for your context code, where, you know, you may not necessarily care about your IP, you can use, you know, straight up an OpenAI, you know, public LLM.

You know, for something that is more sensitive, perhaps you can use, I think Russ talked about, a pre-trained, you know, LLM that you can deploy within your enterprise, and you can augment that with some interesting, you know, proprietary information. I think that process is called fine-tuning, right? So we expect a lot of these enterprises to come in, take petabytes of data and information, and augment the pre-trained LLM with that data, so it could be perhaps a good copilot for somebody in engineering. Make sense? And I don't know if this is going to be done through, you know, consultants or not. I think just the way you've seen an explosion of consultants in the cloud space, probably there will be an explosion of consultants in AI.
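The core-versus-context split Octavian describes could be sketched as a small routing policy. This is purely a hypothetical illustration, assuming per-file routing; the path prefixes and backend names below are invented, not a NetApp design:

```python
# Hypothetical sketch: route code-assist requests by sensitivity.
# "Context" code can go to a public LLM endpoint; "core" (IP-sensitive)
# code stays with an internally hosted, fine-tuned model.
# All names and paths here are invented for illustration.

CORE_PATHS = ("src/ontap/", "src/wafl/")  # example proprietary areas

def classify(path: str) -> str:
    """Label a file as 'core' (IP-sensitive) or 'context'."""
    return "core" if path.startswith(CORE_PATHS) else "context"

def route(path: str) -> str:
    """Pick the copilot backend for a given file."""
    if classify(path) == "core":
        return "internal-finetuned-llm"  # private, fine-tuned deployment
    return "public-llm-api"             # e.g. a hosted general model

print(route("src/ontap/raid.c"))   # -> internal-finetuned-llm
print(route("tools/build/ci.py"))  # -> public-llm-api
```

In practice the taxonomy would come from the product teams, and the sensitive path would be served by the fine-tuned, in-house model Octavian mentions.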

Mehdi Hosseini
Senior Equity Research Analyst, Susquehanna International Group

So why do the-

Kris Newton
VP of Investor Relations, NetApp

On the mic, please, so the webcast can hear you.

Mehdi Hosseini
Senior Equity Research Analyst, Susquehanna International Group

May I have a follow-up?

Octavian Tanase
SVP of Hybrid Cloud Engineering, NetApp

Please.

Kris Newton
VP of Investor Relations, NetApp

You got a mic, so...

Mehdi Hosseini
Senior Equity Research Analyst, Susquehanna International Group

So-

Octavian Tanase
SVP of Hybrid Cloud Engineering, NetApp

I have time. I mean, we can do-

Mehdi Hosseini
Senior Equity Research Analyst, Susquehanna International Group

Sure.

Octavian Tanase
SVP of Hybrid Cloud Engineering, NetApp

We can do.

Mehdi Hosseini
Senior Equity Research Analyst, Susquehanna International Group

So, is there a business opportunity for NetApp to help customers to do this? Is there a business to build around this in terms of deploying it? As you sell terabytes of storage, but then who's going to be at the other end to deploy it?

Kris Newton
VP of Investor Relations, NetApp

So we're not talking about future business endeavors nor financial things at the event, but I will say we are helping customers plan and decide their AI data management journey, already today, right? We're working with hundreds of customers who are currently deploying AI, and we're helping them figure out what that data management behind it is. So, I mean, generally, yes, there is opportunity for us to participate at a higher level with our customers. Do you agree?

Sandeep Singh
SVP and GM Enterprise Storage, NetApp

100%. What you just articulated, Kris, is exactly the pattern that we're seeing out there. All of the learnings that we're getting in working with hundreds of customers, on their journey to AI or GenAI, we're taking those and then sharing those best practices with the customers.

Kris Newton
VP of Investor Relations, NetApp

All right, David, in the front.

David Vogt
Analyst, UBS

I just want to follow up on that point. So as it becomes more pervasive in terms of training models, inference, data sovereignty, what is the differentiation that all of these, you know, all of these sort of applications bring to an enterprise? Doesn't it become ultimately somewhat table stakes in the sense that there's multiple offerings, they tend to be somewhat similar, and sort of, you know, you have to do it, but there's really no revenue uplift for the corporate, not for NetApp, but I'm saying for the company, and it's more of a cost-saving tool at this point?

Octavian Tanase
SVP of Hybrid Cloud Engineering, NetApp

I think it's hard to think of a company that has such a complete, end-to-end capability to address all the phases of machine learning and AI, right? So, I think you guys have, you know, talked a little bit about inferencing, and it was, you know, emphasized as very important. But most people will probably have to acquire data. And there is a phase where, you know, people take data and put it in a data lake and try to cleanse it. That's a process that takes, you know, time. It's automation-intensive and so forth, and you need that large, practically infinite, you know, storage container to do that.

That storage container needs to support heterogeneous data sources, not just unstructured data, but sometimes structured data as well. So you can, you know, have this data lake that has that simple interface to read and write data into. We got that. Then there is a phase where you're training your model on that data with some algorithms that you've chosen. You need a lot of throughput, right? So you want to be able to take some of this data from the data lake, you know, in a very simple, cost-effective way, and move it to the place where you train the model. That has to be very close to compute, right? To those, you know, NVIDIA DGX systems or GPUs that they've talked about, right?

And then, you know, after you've trained the model, then you're going into the inference phase, where latency, not throughput, is the most important thing, right? Because you, you know, you are, you're gonna ask me a question, you want a quick answer there. So I feel that our, you know, flash systems, you know, running the ONTAP Data Platform are uniquely positioned to, in one system, support that whole data pipeline from data acquisition in the data lake, to the model training, to the inference. What do you think?
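A back-of-envelope way to see why the training phase is throughput-bound while inference is latency-bound: the read bandwidth needed to keep GPUs fed during training is roughly GPUs × samples per second per GPU × bytes per sample. The numbers below are invented placeholders, not NetApp or NVIDIA figures:

```python
# Rough sizing sketch for the training-phase throughput point above.
# All inputs are illustrative placeholders.

def required_read_gbps(num_gpus: int, samples_per_sec: float,
                       mb_per_sample: float) -> float:
    """Aggregate storage read bandwidth (Gbit/s) needed to keep GPUs fed."""
    bytes_per_sec = num_gpus * samples_per_sec * mb_per_sample * 1e6
    return bytes_per_sec * 8 / 1e9  # bytes -> bits -> Gbit

# e.g. 8 GPUs, each consuming 500 samples/s at 0.5 MB per sample:
print(required_read_gbps(8, 500, 0.5))  # -> 16.0 Gbit/s
```

Inference flips the constraint: a single small query doesn't need that aggregate bandwidth, but each request wants its answer in milliseconds.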

Sandeep Singh
SVP and GM Enterprise Storage, NetApp

I think what Octavian is sharing is basically that we provide customers that flexibility end-to-end with, really, you know, infrastructure that is ready for AI. To that prior comment that was made: AI works on data, and data runs on NetApp. And as we look at generative AI, it's tied much more to unstructured data. That is where we are looking at how we can enable unique use cases for customers. And at a very high level, in terms of what you mentioned, I see kind of three ways it becomes important for organizations to think about it. One is the productivity enhancements that Octavian was just talking about. Developer productivity becomes incredibly important from an AI perspective.

The second area is all of the supporting functions and how AI can bring a productivity boost there. And then the third key area is really: how can AI help design new customer experiences or net new overall business and, you know, revenue models or product models? So that's where I think AI becomes both table stakes and an opportunity for customers to innovate with their data.

Kris Newton
VP of Investor Relations, NetApp

All right, I think we have a question on the webcast.

Operator

This is from Samik at JP Morgan. He's alluding to the move of AI to on-premise, to leverage data. His question is: If that mix shift does happen, how can you increase your differentiation to some of the other large storage vendors? Is adding more products to the portfolio or simplifying the portfolio the way to go?

Sandeep Singh
SVP and GM Enterprise Storage, NetApp

So with AI running on data, and a lot of the data sitting on-prem, you know, NetApp gets the opportunity now to help customers leverage their data. Certainly, a lot more of the AI and generative AI is working on unstructured data sets. So first, basically, with the data that's already resident on the NetApp storage infrastructure, we can make it seamlessly accessible to the data engineers, to the data scientists. That's through the integration with the MLOps platforms. So that's one of the areas. The second area that was talked about in the prior session is really enterprises, as they're looking at adopting and deploying predictive AI and generative AI. Certainly, data privacy becomes important.

The model traceability, and the associated data sets tied to those models, become incredibly important. This is where NetApp's data management capabilities become pivotal in enabling customers with overall model traceability and model versioning use cases. That's where we see a tremendous opportunity of helping responsible AI deployments in the enterprise for customers.

Octavian Tanase
SVP of Hybrid Cloud Engineering, NetApp

That, that's awesome. Can I ask something about Kubernetes? Okay.

Kris Newton
VP of Investor Relations, NetApp

You absolutely may ask something about Kubernetes.

Octavian Tanase
SVP of Hybrid Cloud Engineering, NetApp

Okay. Well, because... So most of these AI applications, ML applications, are modern applications, right? So the lingua franca for application scale and virtualization right now is Kubernetes, right? It has been born in the cloud, it's pervasive in the enterprise. So we believe that, our investments that we made in building a Kubernetes middleware for applications to simplify the life cycle of deployment, you know, and protecting these applications will come in handy. Because, we believe that many customers will go back and forth, you know, between, you know, the on-premises estates and the cloud, you know, depending on the type of services that they want to take advantage of, right? Moreover, we have some interesting technology. It's called FlexCache.

You're not gonna remember, but it's a caching technology that, you know, kind of takes the data from its source and makes it available to, you know, compute. But there is more. You know, I thought that they were gonna talk about GPU Direct.

GPU Direct, it's an interesting technology, you know, by NVIDIA, that's trying to simplify data access from, let's say, the server or the network storage device that has the data, all the way to the GPU itself. So the way that works is, right now, let's say you would be talking, say, over NFS. So that would be me talking to you: you're the CPU, you're the GPU, and I'm telling you something, and you're telling her something, right? Then we invested in a technology called RDMA, which is, now I'm not going to say something, but the brain, right, the memory, will be connected to his memory.

And then, you know, back into Kris's memory. GPU Direct, basically, it's memory to memory between, you know, the data and Kris's brain, which is the GPU. So that's a technology that we've implemented recently, and we have tremendous performance results, you know, like 170 Gbps in getting the data from network storage directly into the brain of the GPU. Did that... Was that okay?
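Octavian's analogy boils down to counting buffer copies on each data path; fewer copies means lower latency and less CPU involvement. A conceptual sketch only, with deliberately simplified hop lists that are not NVIDIA's actual stack:

```python
# Toy model of the three data paths described above. Each list is the
# sequence of buffers data passes through; copies = hops between them.
# The hop lists are simplifications for illustration only.

PATHS = {
    "nfs_buffered": ["storage", "host page cache", "user buffer", "gpu memory"],
    "rdma":         ["storage", "host memory", "gpu memory"],
    "gpudirect":    ["storage", "gpu memory"],  # storage straight to GPU
}

def copies(path_name: str) -> int:
    """Number of copies a path makes between buffers."""
    return len(PATHS[path_name]) - 1

for name, hops in PATHS.items():
    print(f"{name}: {copies(name)} copies via {' -> '.join(hops)}")
```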

Kris Newton
VP of Investor Relations, NetApp

I think that was good, although it's frightening to think that we would have a direct memory connection. All right, I know some people in the audience have questions about QLC technology. I'm sure of it, or we can keep talking about AI.

Victor Chiu
Equity Research Associate and Data Infrastructure and AI Semiconductors, Raymond James

Just one last quick question. Victor Chiu from Raymond James. In the intermediate term, do you envision the growth in demand for AI solutions potentially cannibalizing, you know, traditional storage orders, or do you see it as, you know, a purely incremental opportunity? Maybe elaborate on how that plays out.

Sandeep Singh
SVP and GM Enterprise Storage, NetApp

I think that will continue to bear out in the market. The reality is that data is essential to every organization, and so data continues to grow. Data is the underlying infrastructure tied to all of the application workloads, so customers absolutely need to continue to service the need for the underlying data infrastructure, whether it's for their high-performance file, or their virtualized environments, or their containerized Kubernetes environments, right? In addition to looking at, basically, how do they harness the power of AI and make it a competitive advantage before it just becomes table stakes across the board. So I think there's an opportunity, and a need, more importantly, for customers to do both, versus one or the other.

Kris Newton
VP of Investor Relations, NetApp

Okay. I will go with Tim, and then we'll get you.

Tim Long
Managing Director, Barclays

All right, I guess I'll ask the QLC then.

Kris Newton
VP of Investor Relations, NetApp

Thank you.

Tim Long
Managing Director, Barclays

Yeah. Just outside of AI, that's obviously one of the trends is, you guys have been several quarters now with the QLC-based product. Can you talk a little bit about, kind of what that's doing to segmentation, within the market, and, you know, how... Where are we in the continuum? Is this something that's gonna, you know, permeate more through different applications, use cases, and in different parts of your portfolio? So if you could just kind of give us a sense as where we are and where this is going, and kind of your differentiation with it as well. Thanks.

Sandeep Singh
SVP and GM Enterprise Storage, NetApp

Yeah. So I'll touch upon just a short history of what we introduced earlier this year and where we are. Earlier this year, we announced and introduced Capacity Flash through the lens of the NetApp AFF C-Series. And that was designed to provide, basically, a value proposition of near the speed of flash at hybrid economics for customers, and really targeted at three key use cases. One is basically customers who have hybrid flash or hybrid storage systems with 10K-based hard drives, making it, you know, affordable for them to transition over to all-flash and be able to get faster, denser, more sustainable. That has continued to happen.

Secondly, for the application workloads, whether it's virtualized applications or database applications or other file environments, where roughly 2-4 milliseconds of latency is more than enough for those application workloads, capacity flash provides the best price performance for the customers. And then the third use case is targeted at the disaster recovery and secondary storage use cases. Since our announcement and the launch of C-Series, we announced it in February, we started shipping in March, we're seeing that capacity flash, the C-Series, has become the fastest-ramping product in NetApp's history. So that's where we are at this point.

Now, in our portfolio, when you look at it, basically, we have our AFF A-Series for the best performance for the performance-intensive application workloads. We have capacity flash providing that price-performance, you know, proposition of near the speed of flash with hybrid economics. And then we've got FAS with the lowest cost overall. All of these design centers are fully interoperable. So when I mention that we're able to provide customers with the lowest cost: data, we know, is dynamic.

A lot of the data is cold, and so customers are able to leverage C-Series and seamlessly, you know, leverage either our FAS systems or our StorageGRID systems and have automated and granular tiering, as well as tiering all the way into the public cloud, providing that lowest cost of data over the data life cycle. The other thing I'll mention is, NetApp is unique in enabling all of these use cases, across NAS and unified, as well as block and object, with an underlying single storage OS. That's NetApp ONTAP, thanks to Octavian and team.

And what that is enabling for customers, whether it's serving their different application workloads, structured or unstructured datasets, across on-premises or with the first-party native cloud storage services in the public cloud, is simplicity at scale, not just within silos. And that's enabling customers to remove the complexity of bespoke infrastructure silos. It's enabling them to increase productivity and lower risk by having consistent management and automation, consistent and comprehensive data security, consistent and comprehensive data protection across the board, and an overall consistent support and vendor experience.

Kris Newton
VP of Investor Relations, NetApp

All right. Thanks. I know Sidney had a question here.

Sidney Ho
Equity Research Analyst, Deutsche Bank

Thanks. Sidney Ho with Deutsche Bank. Just want to follow up on the C-Series. Right now, it sounds like you guys are targeting the 10K hard drive. If you look at the roadmap for the company, could that potentially cannibalize 7,200 RPM drives as well in the future?

Sandeep Singh
SVP and GM Enterprise Storage, NetApp

I won't be able to comment on anything roadmap-related.

Kris Newton
VP of Investor Relations, NetApp

You can opine about technology trends and when flash starts to erode that.

Sandeep Singh
SVP and GM Enterprise Storage, NetApp

Yeah. So when we look at the future, we're continuing to look at opportunities to help customers get to all-flash and, through that lens, accelerate and get more efficient, denser, and more sustainable. We will continue to look for those opportunities into the future. When you think about all-flash, QLC technology has certainly matured, such that storage operating systems that are flash-optimized, that can write to flash in a flash-friendly manner, can keep the endurance levels such that customers get a seven- to 10-year lifecycle out of those systems. That is what's enabling customers to shift over from their existing 10K hard drive-based systems.
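The endurance point can be made concrete with back-of-the-envelope math. All figures below (capacity, P/E cycles, write rate, write-amplification factors) are illustrative assumptions, not NetApp or drive-vendor specifications; the sketch only shows why a flash-friendly write path stretches the life of low-endurance QLC media:

```python
# Illustrative endurance arithmetic: a flash-optimized OS that lowers write
# amplification extends the life of QLC media. All numbers are made up for
# the example.

def drive_lifetime_years(capacity_tb, rated_pe_cycles, host_writes_tb_per_day,
                         write_amplification):
    """Years until the rated program/erase cycles are exhausted."""
    total_writable_tb = capacity_tb * rated_pe_cycles       # raw endurance budget
    nand_writes_per_day = host_writes_tb_per_day * write_amplification
    return total_writable_tb / nand_writes_per_day / 365

# Same hypothetical 30 TB QLC drive, 1,500 P/E cycles, 4 TB/day of host writes:
naive = drive_lifetime_years(30, 1500, 4, write_amplification=4.0)
optimized = drive_lifetime_years(30, 1500, 4, write_amplification=1.2)
print(f"{naive:.1f} years vs {optimized:.1f} years")  # 7.7 years vs 25.7 years
```

With these assumed numbers, cutting write amplification from 4.0 to 1.2 takes the same drive from roughly 7.7 to roughly 25.7 years of rated endurance, which is the mechanism behind the seven- to ten-year lifecycle claim.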

With the 7.2K nearline SAS systems, that is still further out in time. Customers are able to get the lowest cost through that lens and, especially from a hybrid flash perspective, still get really good performance as well as high availability from those systems. From a raw flash dollar-per-gigabyte perspective, that crossover is still further out in time.

Kris Newton
VP of Investor Relations, NetApp

Just in case it wasn't obvious, ONTAP is one of those flash-optimized-

Sandeep Singh
SVP and GM Enterprise Storage, NetApp

Yes.

Kris Newton
VP of Investor Relations, NetApp

-operating systems.

Octavian Tanase
SVP of Hybrid Cloud Engineering, NetApp

QLC is a cool technology. We like it. Even though, you know, initially we thought, man, four bits on a cell, flipping them on and off, it's gonna be hard. Apparently, they have their own personality, and they're quite resilient, right? I'm making a joke, because initially, when that technology was introduced, people weren't really sure about the write cycles and all that stuff. And what we've proven in the last few years is that QLC technology is awesome, right?

Kris Newton
VP of Investor Relations, NetApp

Yep.

Sandeep Singh
SVP and GM Enterprise Storage, NetApp

Yes.

Kris Newton
VP of Investor Relations, NetApp

All right. Additional questions? You guys pepper me with QLC market opportunity, customer use cases all the time. I can't believe you're so quiet right now. All right, well, to wrap it up, I know you guys spend a lot of time talking to customers. You know, what are some of the cool things that you're seeing customers doing with NetApp technology and the unique reasons that they tell you they're choosing us, that what NetApp can do that no one else can do?

Octavian Tanase
SVP of Hybrid Cloud Engineering, NetApp

They're very excited about QLC.

Kris Newton
VP of Investor Relations, NetApp

I shouldn't have shut down the AI conversation, apparently.

Sandeep Singh
SVP and GM Enterprise Storage, NetApp

So I spend roughly 50% of my time traveling and meeting with customers and partners. In terms of the unique things they're able to do with NetApp, one pattern I've seen is customers harnessing the power of the public cloud to deploy and run their mission-critical application workloads. NetApp is enabling that for them. At the same time, as those customers consolidate data centers and look to bring more of that agility and a cloud-like experience on-premises, we're seeing a combination where customers leverage NetApp Keystone on-premises and the NetApp first-party cloud storage services to deploy their application workloads in the cloud. That is one unique area.

The second unique area, you know, Octavian touched upon FlexCache. For customers with EDA or high-performance file use cases, where they have a large presence on-prem but developers or other personas within the organization want to leverage subsets of that data in the cloud, FlexCache becomes an integral piece of the technology they use to enable that hybrid workflow. We're enabling that for customers. The other area is virtualized estates and VMware environments. Customers are going through a massive upgrade and, tied to that, a refresh cycle.

As those customers look at how to optimize their on-premises environment today and get full flexibility and future-proofing for hybrid multi-cloud environments tomorrow, NetApp is enabling that unique use case: NetApp is the only certified and supported enterprise storage for VMware hybrid cloud across all three major public clouds. So let me pause there and invite Octavian to share some thoughts as well.

Octavian Tanase
SVP of Hybrid Cloud Engineering, NetApp

Customers love us. But mostly, beyond the technology, which I love, they love the fact that we're integrating with the ecosystem, right? So at the end of the day, when you deploy a data management and storage system from NetApp, you want to make sure that that works well in whatever landscape that you're deploying, right? It works well with a Cisco environment, it works well with a VMware environment, it works well with a Kubernetes environment.

It has the right APIs to enable a data protection vendor to build, I don't know, forever incremental, skinny, you know, replications of the data. We are, you know, we're learning a lot and doing a lot of, you know, development with our AWS, GCP, and Azure cloud partners. So I feel that the competitive advantage that we're building and why many of our customers appreciate us, it's the investment that we have in the ecosystem.

Kris Newton
VP of Investor Relations, NetApp

All right. Well, thank you. Last call for questions. Anyone, anyone?

Octavian Tanase
SVP of Hybrid Cloud Engineering, NetApp

Unless you have more AI questions.

Kris Newton
VP of Investor Relations, NetApp

Yeah.

Octavian Tanase
SVP of Hybrid Cloud Engineering, NetApp

We'll be here for-

Kris Newton
VP of Investor Relations, NetApp

I don't mean to stop your AI questions.

Octavian Tanase
SVP of Hybrid Cloud Engineering, NetApp

Okay.

Kris Newton
VP of Investor Relations, NetApp

Okay. Well, thank you guys very much. I really appreciate your, your time, and I'm sure we'll make sure that you're seen by this audience more often. So thank you.

Sandeep Singh
SVP and GM Enterprise Storage, NetApp

Thank you.

Octavian Tanase
SVP of Hybrid Cloud Engineering, NetApp

Thank you, everybody.

Kris Newton
VP of Investor Relations, NetApp

All right. And so now we're going to take about a 20-minute break. We will reconvene at 11:05 Pacific Time. Thank you very much. All right. Welcome back, everyone, to the NetApp Financial Analyst Tech Session. We've got a few more sessions this afternoon, and kicking that off... Or I guess it's still morning, it just seems like afternoon. Kicking that off, we've got Jeff Baxter, who's VP of Product Marketing here at NetApp. Hey, Jeff.

Jeff Baxter
VP of Product Marketing, NetApp

Thank you.

Kris Newton
VP of Investor Relations, NetApp

Have a seat.

Jeff Baxter
VP of Product Marketing, NetApp

Thanks.

Kris Newton
VP of Investor Relations, NetApp

To get it started, why don't you say a little bit about who you are and what it is you do at NetApp?

Jeff Baxter
VP of Product Marketing, NetApp

Sure. So, hi, everyone. Thanks again for joining all of us. I'm happy to be here. I also feel like it's afternoon. So I run Product Marketing at NetApp. I've been at NetApp for 15 years, so I'm a longtime believer and veteran here. I started out in our field organization, worked in our product management organization for a while, and I've had the privilege for the last few years to lead our product marketing organization.

Kris Newton
VP of Investor Relations, NetApp

Well, great. Okay. And again, like the other sessions, we're going to open it up for Q&A. And I'll just warn you, if you guys don't come up with questions, then I'm going to ask questions, so that should be a good encouragement. This... Jeff, as Head of Product Marketing, you've got the purview over kind of all the enterprise storage platforms and really have been helping define our Hybrid Multi-Cloud strategy.

Jeff Baxter
VP of Product Marketing, NetApp

Right.

Kris Newton
VP of Investor Relations, NetApp

Why don't you say a few words about what NetApp is uniquely doing around hybrid multi-cloud, and how what we've done with the cloud vendors positions us and truly differentiates us from other, storage players?

Jeff Baxter
VP of Product Marketing, NetApp

Sure. So, you know, I think you heard from George this morning around intelligent data infrastructure and our positioning there. And really what's underlying it is the fact that we're the only enterprise storage vendor out there that has a single operating system, a single platform that can work for any data, any workload, any application from a technical level, right? File, block, NVMe protocols, object protocols, all across the board. And I think that's interesting from a technology perspective, and it makes life easier for our customers, but what that really lets us do is be sort of a force multiplier as we expand that out to the cloud. So, Ronen and Pete will be coming up here in a little bit, and Ronen runs our first-party cloud business, and we'll talk about how we're able to extend that out into the cloud.

And the nice thing for us is, as really the only vendor available as a native first-party cloud service, we can do that on a single operating system. It really helps center the R&D that all of Octavian's team, the gentleman who was just up here, does into that single focus point, and then we can expand it out to reach the entire market. And so the really interesting opportunity for us, which I think we've exploited so far and can continue to exploit, is not just on-prem customers, not just in-the-cloud customers, but customers that span both. I think that's one of the few places NetApp is truly uniquely positioned in the market.

Kris Newton
VP of Investor Relations, NetApp

All right. Questions? Okay, we have some questions. David, then we'll get to you, Meta.

David Vogt
Managing Director and Senior Equity Analyst, UBS

Great. David from UBS. Thanks, Jeff, for doing this. When you think of your go-to-market from an enterprise customer perspective, given that you can deploy across multiple different platforms, how has that changed over the years as customers look at other vendors that could potentially do something similar down the road? I know you're first to market with a lot of your offerings, but how has the competitive landscape changed over the last, call it, five to eight years, from your perspective?

Jeff Baxter
VP of Product Marketing, NetApp

So it's a great question. In the competitive landscape, obviously, everyone's moved to all-flash, right? Sandeep discussed that, and some of the others discussed that. I think a lot of our competitors are still trying to rationalize their portfolio offerings. To their credit, some of the major ones have probably gone from having seven different storage operating systems down to three or four, right? We still think we have a lead and a pretty substantial moat there, quite frankly, competitively. They've also started to, and we take this as a validation of our strategy, especially over the last year, tiptoe into having their offerings on the public cloud, typically as marketplace offerings.

And as you're probably aware, we first had a marketplace offering of ONTAP in 2014. We called it ONTAP Cloud at the time, and we put it on AWS. So I still think there's a very substantial moat there, and there's really two parts to the moat. Can you technically make all of your storage operating systems, especially if you have multiple ones, work on every different cloud? Is there any technical impediment to all of our competitors doing that? No. Most likely, they've all indicated that eventually they'll get there. So maybe that's a five-year moat we've built, maybe a ten-year moat. The other part is from a business-partnership perspective, where we're the only ones actually provided as services by Microsoft, by Google, by Amazon.

That is harder to put a year on, right? That's, that's harder to say, you know, will they cross that barrier? Will they be able to say, "Hey, we're an important enough partner that Amazon is willing to invest in the co-engineering to build Amazon FSx for NetApp ONTAP?" Same thing with Azure NetApp Files. For a lot of those cloud vendors, they look at our 30-year track record in building what we believe to be the absolute best enterprise file system on the file side, and that's something that we don't think is replicated by any of our competitors.

So once you have us, especially for a file system, do you really need to add a second or third competitor to have an enterprise-class file system? I think the answer is no. Even if the answer turns out to be yes, it's several years' worth of co-engineering to do that.

Kris Newton
VP of Investor Relations, NetApp

Okay, great. I'm glad everyone's back. So Meta actually was, yeah, early in the-

Meta Marshall
Executive Director and Senior Equity Analyst, Morgan Stanley

Meta Marshall from Morgan Stanley. You know, in the past, maybe a lot of the cloud customers were new to NetApp, versus customers who had been on-premises. Over time, how have you adapted marketing to bring customers along on that journey from on-premises to cloud?

Jeff Baxter
VP of Product Marketing, NetApp

Yeah, it's a great question. So just even in my organization, right? We used to have them separate; now my product marketing organization covers both cloud and on-prem. Octavian, who you saw before, has a unified organization. We have a general manager for enterprise storage, Sandeep, and a general manager for cloud storage, Ronen, who you'll meet shortly, so we treat them as separate businesses from that perspective. But in terms of going out to customers, every one of our gold pitches, when we do a one-on-one conversation with a customer, always covers what we do on-prem, what we do in the cloud, and how we link them with the hybrid cloud data services.

'Cause that's the other important piece is, it's not just about, yeah, you can store your data here, and you can store your data there, but being able to link them together, right? And so increasingly, that's where we're seeing a lot of crossover from customers. We have those cloud-first customers that, in some cases, they're gonna be cloud native forever, right? They, they may have been cloud first and cloud only, and they'll stay cloud, and that's 100% we're on board with that. Some of them may have discovered us first in the cloud, and then when they go to do their next tech refresh of their on-prem environment, they discover that there's compelling advantages to refreshing to NetApp, and so we gain a competitive advantage on-prem as well, so.

And we definitely try to exploit that in the market where possible, because it gives us an in to that customer, an introduction, and it also gives us a compelling technical differentiator as to why they should refresh their competitive on-prem gear to our on-prem gear.

Kris Newton
VP of Investor Relations, NetApp

All right, Matt, and then Steve.

Matt Sheerin
Managing Director and Senior Analyst, Stifel

Yeah, thank you. Matthew Sheerin from Stifel. I'm hoping you can talk about your marketing strategy by customer segment. You've got enterprise customers, and you've got thousands of partners that you work with across markets. Could you maybe differentiate those sectors, and also how you leverage your partners, whether it be the MSPs or VARs that you work with?

Jeff Baxter
VP of Product Marketing, NetApp

Yeah. So I'll do a little bit of that. I'll also note, I tend to be more on the product marketing side of things, so I don't want to speak for our CMO, but I'll give you a little bit there, and then we can always do some follow-up as needed. We do tend to segment, so the larger-scale enterprise customers get higher touch, as you would expect, and we work with them directly. We partner with and run our marketing there on almost a customer-by-customer basis, right? So it's a surround, go-directly-to-where-the-customer-is approach. The commercial and the... So, are you talking marketing or go-to-market, more particularly? Either one. Okay, pick my poison.

So I think, you know, the go-to-market is obviously the sort of higher touch marketing for the enterprise and our big global customers, right? And we tend to align there with, and partner with our hyperscaler partners, right? So we go to market directly with AWS, Azure, Google, in calling on those accounts, as well as handling the on-prem side of the business. The commercial, as you mentioned, a huge part of our business is through VARs, through our partner network, and, we've continued to expand that.

We released the new Partner Sphere program and have revamped a lot of what we do with our partner ecosystem. So a lot of that commercial business, a lot of that go-to-market, is predominantly driven by the channel for us, and I don't expect to see that change. That's sort of the basic segmentation, I would say.

Kris Newton
VP of Investor Relations, NetApp

Yep. Okay.

Steven Fox
Founder and CEO, Fox Advisors

Steven Fox with Fox Advisors. In your opening remarks, you said that as a native vendor first-party cloud service provider-

Jeff Baxter
VP of Product Marketing, NetApp

Yeah.

Steven Fox
Founder and CEO, Fox Advisors

You can expand into the entire market. So how does that play out? What should we think about that meaning in, like, one to two years versus three to five years? What would be the outcomes we should be looking for?

Jeff Baxter
VP of Product Marketing, NetApp

I don't want to get into-

Kris Newton
VP of Investor Relations, NetApp

No roadmap ideas.

Jeff Baxter
VP of Product Marketing, NetApp

Yeah.

Kris Newton
VP of Investor Relations, NetApp

But you can talk big concepts.

Jeff Baxter
VP of Product Marketing, NetApp

Yeah. I think what I meant by that is it doesn't limit us to the existing NetApp install base on-prem. It allows basically any Azure customer, any Amazon customer, any Google customer to take advantage of our services natively within those clouds. So it dramatically lowers the barrier to entry, right? The barrier to entry for on-prem can be measured in weeks or months or even longer, just in a typical tech evaluation cycle: kicking the tires, getting something into the data center, deciding if you want to do a wholesale tech refresh, and then you typically stick with that for three years, five years, or longer.

In the cloud, right now, someone could be logging into the Azure portal, spinning up an instance of Azure NetApp Files without any interaction from NetApp whatsoever, and be up and running, decide if they like it or not, and make a decision within 30 minutes to become a NetApp customer. And so that's what I think the immense opportunity is for us going forward, is to really expand that, you know, service addressable market to every Azure customer, AWS customer, and Google customer.

Kris Newton
VP of Investor Relations, NetApp

Yep. All right. Oh, oh, here, then I'll get you, Gloria.

Irvin Liu
VP, Evercore ISI

Hi, Irvin Liu with Evercore ISI. So I wanted to ask about the Keystone bare metal offering. You know, is the use case of this offering meant to target the, you know, transformation and modernization of current on-prem workloads, or is this more meant to help customers repatriate certain workloads? ... back from the public cloud?

Jeff Baxter
VP of Product Marketing, NetApp

Yes. Not to be trite, but it's honestly meant for both, right? There are some customers who are looking to just continue their modernization journey. In some cases, they want to reduce their data center footprint, so it's a standard colo model, right? Nothing new in that regard. What Equinix Metal allows people to do is have that colo model where they're colocated next to the different clouds, right? So it plays perfectly into our strength. If we think of our strength as being around hybrid cloud and hybrid multi-cloud, then if you're placing bare metal NetApp storage near every major cloud, you eliminate a lot of the latency or distance considerations.

And so that's really what allows the Equinix Metal with NetApp storage to have a lot of interest for those customers. And then, obviously, for repatriation customers, I mean, from a NetApp perspective, we're willing to support customers wherever they want to be. So we are, in a lot of ways, very neutral to that sort of discussion, right? So we provide TCO calculators, we provide all the information to customers. If they are on a given cloud, we'll help them optimize in that cloud with our Spot portfolio that Pete Lilley will be up here to talk about. We'll help them optimize their storage spend by moving to Azure NetApp Files or AWS FSx. If they still find that their cloud spend is in excess of what they think they could spend if they repatriated, then we make it incredibly easy for them to repatriate.

In a lot of cases, if they've decommissioned their on-prem data centers, Equinix Metal would be a perfect location for them to repatriate to. So it really, for us, is about giving customers the freedom of choice to operate wherever they want, and building that up. The nice thing about Equinix Metal is that, you know, NetApp Keystone is all about removing friction from the buyer experience, right? Turning it into a storage as a service. Now, you remove the friction of where is it located, actually installing it on-prem. One of the challenges with storage as a service compared to the public cloud is, even if you decide to do it and procure it, it still takes time to ship the box into the data center to stand it up, right?

The latency is still measured in days or even weeks, compared to cloud services, where it's instant gratification. Using something like Equinix Metal, because it's already staged there, customers get a very cloud-like experience, but they're still operating on bare metal storage within a data center, since it's pre-provisioned there and we're running it for them on Keystone. So it's a bit of a best of both worlds: immediate access to bare metal storage, but with more of the cost economics of running it on-prem.

Kris Newton
VP of Investor Relations, NetApp

All right, I think we had a question from the webcast.

Operator

This is a question from Wamsi Mohan from BAML. In NetApp's view, does HCI matter anymore? Why did NetApp emphasize HCI a few years ago, but doesn't talk about it anymore? Hyperconverged infrastructure, that is.

Jeff Baxter
VP of Product Marketing, NetApp

So I think the HCI market remains where it is. I think NetApp decided to divest from being a part of that market. A large part of that is we saw some of the workloads that were very common for HCI deployments moving into hybrid cloud deployments. When you look at VMware Cloud and virtual desktops moving into cloud-delivered models, and at software being delivered as a service, like Office 365, as opposed to being delivered on individual desktops, it just started to become clear to us, and clear to a large part of the industry. I don't think this is a NetApp-specific phenomenon; I think we've seen the entire industry- Information. Siri would like to say-

Kris Newton
VP of Investor Relations, NetApp

Siri wanted to help.

Jeff Baxter
VP of Product Marketing, NetApp

Yeah, Siri wants to... Siri. Siri-converged infrastructure, everyone. A new market segment. So I think, yes, we've generally disengaged there. For most customers, we have converged infrastructure stacks. We continue to invest in our FlexPod partnership with Cisco; we think that meets the needs of customers for simplified on-prem infrastructure. And then, for the most part, some of these new buying models, things like Equinix Metal and the public cloud, serve the need for simplified infrastructure that HCI was trying to solve in what I think, to be honest, was a more complicated way.

Kris Newton
VP of Investor Relations, NetApp

Yeah. All right.

Mehdi Hosseini
Senior Equity Research Analyst, Susquehanna International Group

Thank you. Just going back to ONTAP, where are we with the ONTAP evolution? Is there one version of ONTAP that is close to end-of-life, and if so, would that create opportunity for NetApp to go through an upgrade cycle?

Jeff Baxter
VP of Product Marketing, NetApp

It's a good question. So ONTAP itself isn't end-of-life. Our general track record has been to ship a new major version of ONTAP every six months, and that continues to be our plan. We have a hardware roadmap; I'm not going to go into the details on it. Obviously, any time we release a new generation of hardware, that creates an upgrade cycle. One of the things that NetApp does as a cultural principle is we try not to force upgrade cycles through hardware or software obsolescence, right? It's a choice.

It's something that we think has gained us loyal customers over time by not saying: Hey, there's a new version of ONTAP, and you need to buy the new box that just came out this month in order to run it. I think that would be a short-term gain for us, perhaps in terms of creating a bump, but in terms of creating customer dissatisfaction, that's not the way generally the industry has evolved. It's not the way NetApp has done business for as long as I've been at NetApp.

And so I think from a customer satisfaction and trying to do the right thing by the customer, there will always be things that new hardware can do for them in terms of performance, in terms of efficiency, in terms of density, and we're always going to continue to innovate there. But if we can offer software innovation on customers' existing platforms that they have under support, we're going to continue to do that. We think that's the right thing to do for customers.

Kris Newton
VP of Investor Relations, NetApp

All right, before we get to Tim, I'm going to inject a question.

Jeff Baxter
VP of Product Marketing, NetApp

Yes.

Kris Newton
VP of Investor Relations, NetApp

How easy is it for a customer to upgrade from one version of ONTAP to the next?

Jeff Baxter
VP of Product Marketing, NetApp

It's totally non-disruptive. We'll even automate the entire rollout across an entire cluster. We've been doing non-disruptive upgrades for a long time: we'll have customers who have been on 10, 15 years' worth of ONTAP that just sit there and roll forward, not only non-disruptively upgrading their software, but non-disruptively upgrading their hardware within a cluster, right? We've made that incredibly easy, and we've introduced new programs, like the storage lifecycle program we introduced over the last year, that let people essentially subscribe to hardware replacement as a service. It's a financial engineering model, right? But we've backed it up with the engineering to non-disruptively replace controllers. So I tell people it's basically the free-iPhone-every-two-years plan, right?

You're paying in advance for the controller, but that allows it to be a standardized part of your ongoing OpEx or support budget, as opposed to a CapEx bump every three to five years. We think that, over time, will allow customers to stabilize their spend, as well as create more guaranteed refresh opportunities for us.
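The smoothing effect described here is simple arithmetic. A minimal sketch, with made-up dollar figures that are not NetApp pricing:

```python
# Toy illustration: a periodic controller refresh shows up as a CapEx spike
# every few years, while a hardware-replacement-as-a-service plan spreads
# the same total cost into a flat annual charge. All figures are invented.

REFRESH_COST = 300_000   # hypothetical controller refresh price
REFRESH_EVERY = 5        # years between refreshes
HORIZON = 10             # years modeled

capex_by_year = [REFRESH_COST if year % REFRESH_EVERY == 0 else 0
                 for year in range(HORIZON)]
flat_subscription = sum(capex_by_year) / HORIZON  # same total, smoothed

print(capex_by_year)      # spiky: [300000, 0, 0, 0, 0, 300000, 0, 0, 0, 0]
print(flat_subscription)  # flat:  60000.0
```

The customer pays the same total either way; the subscription simply converts the spikes into a predictable line item, which is the "financial engineering" being described.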

Kris Newton
VP of Investor Relations, NetApp

Okay, now to Tim.

Tim Long
Managing Director, Barclays

Thank you. I wanted to go back to go-to-market. I think it was maybe close to a year ago, there was talk about increasing focus and penetration on small and mid-size businesses. And I think you mentioned something in one of the other answers about changing the partner program or ecosystem. So can you talk a little bit about what changes you're making there, and what kind of success you're seeing so far as you try to increase the breadth of the go-to-market?

Jeff Baxter
VP of Product Marketing, NetApp

Okay, I may need to defer that question.

Kris Newton
VP of Investor Relations, NetApp

I can, I can help out with this one. Right. So we did look at how do we diversify, right? NetApp is really strong. We sell to every Fortune 500 company. But there's big opportunity to continue to push down market. So we did launch a new partner program that makes it easier for our partners to earn money by leading with NetApp. We also introduced entry products to the product portfolio to better address that market.

So, definitely, it's a conscious push for us to make sure that we continue to broaden and work with a broader swath of customers. You know, once you sell to 500 of the Fortune 500, where do you go? So that's what we're focused on. I would say, so far, it's going well. Good initial feedback on the partner program, and then the new entry products are also performing well. All right, any other questions? Yep.

David Vogt
Managing Director and Senior Equity Analyst, UBS

Thanks. Since you sell to every Fortune 500 company, can you discuss the go-to-market sales motion from cloud-native customers to enterprise on-prem solutions? How involved is the cloud partner in that discussion and in bringing them on board? And what's the sales motion in terms of timeline, effectively? I know obviously someone could just spin up, you know, an instance immediately, but a more complicated or more robust sale, how long does that generally take?

Jeff Baxter
VP of Product Marketing, NetApp

Okay. For the latter part of that question, I'd ask you to defer to Ronen, our GM for that business, when he's up here. I think he probably will have a better answer to that. I'm not trying to skip the question. I just want to get you the right expert for it, right?

Kris Newton
VP of Investor Relations, NetApp

He'll be on shortly, so.

Jeff Baxter
VP of Product Marketing, NetApp

Yeah, he's next. I think he's next. So, you won't have to wait long to ask that question. From the—how we engage from a go-to-market perspective, can you restate... That was- So-

Kris Newton
VP of Investor Relations, NetApp

All right.

Jeff Baxter
VP of Product Marketing, NetApp

Yes, sorry.

Kris Newton
VP of Investor Relations, NetApp

Let's get the mic.

David Vogt
Managing Director and Senior Equity Analyst, UBS

You know, from going from an on-prem customer to a, you know, a cloud customer sounds pretty

Jeff Baxter
VP of Product Marketing, NetApp

Yeah.

David Vogt
Managing Director and Senior Equity Analyst, UBS

straightforward from a

Jeff Baxter
VP of Product Marketing, NetApp

Yeah

David Vogt
Managing Director and Senior Equity Analyst, UBS

... extolling the virtues of moving in that direction. But a customer that maybe spun up an instance that's cloud native. Working backwards to a more on-prem solution if they want a hybrid solution.

Jeff Baxter
VP of Product Marketing, NetApp

Yep.

David Vogt
Managing Director and Senior Equity Analyst, UBS

What does that sort of motion look like, and how involved are the public cloud partners in sort of that process?

Jeff Baxter
VP of Product Marketing, NetApp

Yeah. It's a good question. So I mean, the public cloud partners, as you'd imagine, right, aren't the most enthusiastic about pushing stuff out of their own clouds, right? With that said, it's a symbiotic and realistic partnership. So, especially for those large Fortune 500 companies, we have dedicated teams, and they know, for each one of these organizations, which major partners they're engaged with. They engage directly with their partner at AWS, right? So we tend to align our cloud-selling organization; it's actually aligned with the hyperscalers' individual regions. So if Google Cloud has a selling region, we'll have a regional director or a DM assigned to it, and so we organize the same way they organize. They're kind of joined at the hip.

The idea is, if something comes up as they're both co-engaging with the customer, where the customer says, "Hey, I'd like to have an on-prem presence for this as well," then that's a lead the NetApp rep will take and run with. We wouldn't expect the Amazon rep to try and help push that deal to close; that's our responsibility. But we've built a pretty successful partnership, and they're very realistic about the fact that these hybrid cloud architectures exist. I mean, that's the reason for some of the things like AWS FSx for NetApp ONTAP. It's not just the on-cloud opportunity; they recognize there's such a large ONTAP install base, and they recognize that the reverse will happen.

Kris Newton
VP of Investor Relations, NetApp

All right. Other questions? Back to me then. Okay, so,

Jeff Baxter
VP of Product Marketing, NetApp

Only softballs, right? That's the rule.

Kris Newton
VP of Investor Relations, NetApp

That's right. Tell me why NetApp is so great?

Jeff Baxter
VP of Product Marketing, NetApp

Tell me why NetApp. Yeah, exactly. Where did you get your shoes? Lovely.

Kris Newton
VP of Investor Relations, NetApp

For this question, right, at the beginning or the end of last year, we announced a whole set of new products in the portfolio. We introduced the C-Series, which is the QLC-based technology.

Jeff Baxter
VP of Product Marketing, NetApp

Yeah.

Kris Newton
VP of Investor Relations, NetApp

We also introduced the All SAN Array.

Jeff Baxter
VP of Product Marketing, NetApp

Yes.

Kris Newton
VP of Investor Relations, NetApp

When Sandeep and Octavian were here, we talked a little bit about C-Series, but we haven't touched on the ASA yet.

Jeff Baxter
VP of Product Marketing, NetApp

Yep.

Kris Newton
VP of Investor Relations, NetApp

Maybe you could say a few words about why we introduced that product. Because ONTAP does block

Jeff Baxter
VP of Product Marketing, NetApp

Yes.

Kris Newton
VP of Investor Relations, NetApp

So why is there an ASA in the family?

Jeff Baxter
VP of Product Marketing, NetApp

Yeah, I said softballs. So I think there are actually two parts to the ASA answer: one is around market opportunity and really market presence, and the other is a technology answer. The technology answer, I think, is simpler. By having a block-optimized, simple solution with the ASA, we're able to make it really dead easy for our block customers, simplifying the interface, simplifying the setup. More importantly, we're able to do things like symmetric active-active technology, which allows for much faster failover times, which tend to be more important for block-critical workloads. And that capability, being symmetric active-active, has typically been limited to sort of legacy frame arrays. Think of EMC Symmetrix, right?

If you think about where that name came from 30 years ago, something like that, right, it was all about being a symmetric, active-active architecture. So being able to do that at a very affordable, modular, all-NVMe price point, that's where we really focused the ASA. That's the technical answer, and that's where it differentiates. And by the way, it's still the same exact ONTAP: still able to be managed in the same way, same APIs, ability to replicate between the two. So we're not changing the operating environment, and we're not splitting off; there's not a separate code stream for Octavian to have to manage. We just have these optimized features that we're able to basically flip a switch and turn on in a SAN-only implementation. The market side of things is, sometimes...

And this is actually my job on a daily basis, right? NetApp often doesn't get credit as a block storage vendor in the industry, because 30 years ago we introduced network-attached storage, and 20 years ago we introduced unified storage. We get a lot of credit on that side of the fence. But what's not recognized is that we have 20,000 customers that run SAN storage, right, across 50,000 storage arrays. And 5,000 of those 20,000 customers run nothing but SAN block workloads on their NetApp storage arrays. So that tells us, A, there's a market. It tells us, B, that we perhaps don't get the credit or the coverage, and so perhaps we're not getting that automatic consideration in opportunities.

If a customer is building a short list and they're a customer we don't have a touch point with, are we always making their shortlist for block storage? I think that's an open and good question, and an opportunity for us. Putting the ASA out there gives us a focal point for our marketing and our go-to-market team to go out and aggressively say, "Yes, we are in the block storage market." It also gives us some pricing flexibility to go after the block storage market in a way that doesn't necessarily arbitrage our unified storage business. So those three reasons, I think, are really why we went into the ASA market.

Kris Newton
VP of Investor Relations, NetApp

Okay, great. Still no hands. All right. So we also introduced an entry-level product in the A-Series family, the A150. Someone earlier today asked me why there seemed to be a big flurry of entry-level products, not only from NetApp but from other vendors.

Jeff Baxter
VP of Product Marketing, NetApp

Yeah.

Kris Newton
VP of Investor Relations, NetApp

Why did we introduce a lower-end product into the portfolio?

Jeff Baxter
VP of Product Marketing, NetApp

I think there are a couple of reasons. One is the desire to... and just to repeat what you said, right? If you've already taken the Fortune 500, where do you go, right? The other important point, I think, is that the cost of flash has finally come down enough economically, because it doesn't make much sense to build an entry-level system if the cost of the storage on the system still blows out an entry-level customer's budget. With the advances in bringing down the cost of not just TLC flash but QLC flash and others, it starts to get to the price point where it just makes sense for customers that are on hard drives in the entry-level space to adopt entry-level all-flash technologies.

So I think the reason NetApp did it, and probably the reason a lot of the market did it, is because we finally reached that inflection point on pricing, where you would get down to a price point where a mid-sized business or smaller commercial business could get into all-flash technologies.
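The inflection-point economics Jeff describes can be sketched with a toy calculation. Every dollar figure and the 4:1 data-reduction ratio below are hypothetical, illustrative assumptions rather than NetApp or market pricing; the only point is that effective cost per terabyte is raw media cost divided by the data-reduction ratio, which is what lets QLC flash approach hard-drive economics:

```python
# Toy model of the flash-vs-HDD price inflection point.
# All prices and ratios are hypothetical assumptions for illustration only.

def effective_cost_per_tb(raw_cost_per_tb: float, reduction_ratio: float) -> float:
    """Effective $/TB after dedupe/compression (ratio = logical TB / physical TB)."""
    return raw_cost_per_tb / reduction_ratio

# Hypothetical inputs: HDD gains little from data reduction in this model,
# while flash arrays typically run inline dedupe/compression.
hdd = effective_cost_per_tb(raw_cost_per_tb=20.0, reduction_ratio=1.0)
qlc = effective_cost_per_tb(raw_cost_per_tb=60.0, reduction_ratio=4.0)
tlc = effective_cost_per_tb(raw_cost_per_tb=100.0, reduction_ratio=4.0)

print(f"HDD ${hdd:.2f}/TB  QLC ${qlc:.2f}/TB  TLC ${tlc:.2f}/TB")
```

With these assumed inputs, QLC's effective cost lands below HDD's; that kind of crossover is what makes an entry-level all-flash system viable for a budget-constrained buyer.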

Kris Newton
VP of Investor Relations, NetApp

All right. And then I think one of the things that NetApp offers that's probably not well understood is BlueXP.

Jeff Baxter
VP of Product Marketing, NetApp

Okay.

Kris Newton
VP of Investor Relations, NetApp

It would be great if you could explain a little bit about what BlueXP is, and how it differentiates us and helps our customers.

Jeff Baxter
VP of Product Marketing, NetApp

Yeah, so BlueXP is our unified control plane. We rolled it out about a year ago, based on a lot of the technology we had started to build for our public cloud instantiations. It allows our customers, in a single pane of glass, to manage all of their on-prem storage as well as all of their cloud instantiations: not just the ones bought through the marketplace, but also things like Azure NetApp Files or AWS FSx for NetApp ONTAP. And it really allows our customers to go through two different models. If they're primarily Azure-centric, for example, they can manage their Azure NetApp Files entirely through the Azure portal, right? They're essentially a Microsoft customer. They are a Microsoft customer. They're thinking of it through that lens, right, and everything integrates there.

On the other hand, if they're hybrid multi-cloud, and they're more, say, a storage customer, right, that just happens to use multiple different clouds, they can go through our unified control plane and have the same single experience on Azure NetApp Files as they do on AWS FSx for NetApp ONTAP, as they do on-prem with our AFF line. And so that allows us to do very cool things in terms of data services built on top of it. So it's one thing to just say, "Hey, I can provision software," but if I can drag and drop from an on-prem system to a cloud system and set up replication, in a couple of clicks, it's something that basically none of our competitors can do. And you can see from then on, we can add additional services, tiering, caching.

All these services we've built over the past few decades, we're now able to expose through that single pane of glass.
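The drag-and-drop replication Jeff describes rides on ONTAP replication (SnapMirror), and because it is the same ONTAP everywhere, the equivalent relationship can in principle be driven programmatically. As a rough sketch only: ONTAP 9 exposes SnapMirror over REST at `POST /api/snapmirror/relationships`, taking source and destination endpoint paths. The SVM and volume names below are hypothetical, and this builds only the request body rather than calling a live cluster:

```python
# Sketch of the request body for creating a SnapMirror relationship via the
# ONTAP 9 REST API (POST /api/snapmirror/relationships). SVM and volume names
# are hypothetical; BlueXP drives equivalent operations behind its UI.

def snapmirror_body(src_svm: str, src_vol: str, dst_svm: str, dst_vol: str) -> dict:
    """Build the minimal JSON body: each endpoint is a '<svm>:<volume>' path."""
    return {
        "source": {"path": f"{src_svm}:{src_vol}"},
        "destination": {"path": f"{dst_svm}:{dst_vol}"},
    }

# Hypothetical on-prem source replicating to a cloud (FSx for ONTAP) destination.
body = snapmirror_body("onprem_svm", "vol_finance", "fsxn_svm", "vol_finance_dr")
print(body["source"]["path"], "->", body["destination"]["path"])
```

In practice this body would be POSTed to the cluster's management endpoint with authentication; the point is that the same API shape applies whether the destination is another on-prem array or a cloud instantiation.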

Kris Newton
VP of Investor Relations, NetApp

All right. Well, I think our time is up. I have at least one of our next speakers, so I will set you free.

Jeff Baxter
VP of Product Marketing, NetApp

Thank you.

Kris Newton
VP of Investor Relations, NetApp

Thank you very much for your time. I appreciate it.

Jeff Baxter
VP of Product Marketing, NetApp

Thank you all for your time. Appreciate it.

Kris Newton
VP of Investor Relations, NetApp

All right, so now we're gonna get into the world of cloud, specifically. So since Ronen's here, I'm gonna invite him to come on up. Here's Ronen Schwartz, who's the head of our first-party cloud storage services.

Ronen Schwartz
SVP and General Manager, NetApp

Hi, everybody.

Kris Newton
VP of Investor Relations, NetApp

Hey, Ronen.

Ronen Schwartz
SVP and General Manager, NetApp

Good to be here. Yes.

Kris Newton
VP of Investor Relations, NetApp

Thanks for coming. Someone else is gonna join us, but I think he's getting mic'd outside. So let's start. Have a seat.

Ronen Schwartz
SVP and General Manager, NetApp

Thank you.

Kris Newton
VP of Investor Relations, NetApp

Why don't you tell us who you are and what you do at NetApp in a better way than I just did?

Ronen Schwartz
SVP and General Manager, NetApp

Good afternoon, everybody. I can't believe I'm only 24 hours here, because based on my voice, it sounds like I've been here a little bit longer. Hey, Pete. My name is Ronen Schwartz. I joined NetApp about three and a half years ago, basically to lead our first-party cloud journey. And from a leadership perspective, this includes leading the engineering team, the product management team, and the strategic alliances that we have with the three hyperscalers at this stage.

Kris Newton
VP of Investor Relations, NetApp

Okay.

Ronen Schwartz
SVP and General Manager, NetApp

Short, short description, yes.

Kris Newton
VP of Investor Relations, NetApp

That's great. And then, Pete Lilley also just joined us. Pete Lilley comes from the Instaclustr acquisition, so you might notice a bit of an accent when he talks.

Pete Lilley
VP and GM of Instaclustr, NetApp

That's right.

Kris Newton
VP of Investor Relations, NetApp

Pete, why don't you-

Pete Lilley
VP and GM of Instaclustr, NetApp

The au-

Kris Newton
VP of Investor Relations, NetApp

Introduce yourself and what it is you do?

Pete Lilley
VP and GM of Instaclustr, NetApp

No worries. He also comes from Australia, as you say, and he's traveled a long way.

Kris Newton
VP of Investor Relations, NetApp

Yeah.

Pete Lilley
VP and GM of Instaclustr, NetApp

So while I was waiting in the chair... No. Thanks very much. Take my glasses off. I'm Pete Lilley. I'm the VP and GM of the Instaclustr business. I'm actually one of the cofounders of Instaclustr, and I was the CEO of the business leading up to its acquisition by NetApp in May 2022. I'm responsible for all of Instaclustr's business as part of the Cloud Ops portfolio. There are three businesses in my group under Haiyan Song: Cloud Insights, Spot, and the Instaclustr business. So all of the product development around what we do in platform and enterprise open source is really part of my business.

Kris Newton
VP of Investor Relations, NetApp

All right. Well, great. So you can see we can cover all things cloud here, and I'm sure you guys have questions, otherwise you're gonna have to listen to me ask more softballs. So all right, we got Meta in the back.

Meta Marshall
Executive Director and Senior Equity Analyst, Morgan Stanley

Maybe just on the Cloud Ops portfolio: clearly bringing together all of those different acquisitions has been kind of a journey for you guys. Where have the synergies come from across all of those different acquisitions as you pull them together, and where is work still being done there?

Kris Newton
VP of Investor Relations, NetApp

Certainly talk about how we're using Instaclustr and attaching NetApp cloud storage.

Pete Lilley
VP and GM of Instaclustr, NetApp

Yeah, absolutely. So from a Cloud Ops perspective, you've got Cloud Insights, Spot, and Instaclustr. I think there are tremendous advantages between the three products themselves in terms of being able to leverage each other's capabilities, to drive more automation and more capability, and to make the Cloud Ops portfolio effectively more intelligent, as in intelligent data infrastructure. The other part, which I think is really, really interesting from my perspective as the GM of the Instaclustr business, is the bridge that Instaclustr can help make between the cloud-native part of the business and NetApp's traditional storage business.

So, we're able to leverage first-party cloud storage, which I'm sure Ronen will be very happy with. We've just done our first integration of first-party cloud storage with a solution for Postgres: Postgres on Azure NetApp Files. Up to a 300% performance increase from leveraging Postgres on ANF, which is amazing compared to what's available in the hyperscalers' Postgres deployments and what traditional open-core and other competitors can do with that technology. And the other fantastic part is that it's not just about the price performance; it's about being able to bring some advanced features to the usage of that technology, which is pretty amazing.

So data tiering, advanced replication, quick snapshots, disaster recovery: these are problems that enterprise users of Postgres have had for a long time, and the ability to deploy that technology in Azure with one click or an API call is amazing. And I think at the same time, with Instaclustr and what we're doing with hybrid cloud customers and hybrid cloud environments, being able to offer the same as-a-service cloud experience in any cloud, whether it's Azure, AWS, GCP, or on-prem, offers customers a pretty unique experience in using and leveraging this very powerful enterprise open source software.

Ronen Schwartz
SVP and General Manager, NetApp

Maybe I'll add two things to what Pete was just mentioning. The first one is, when you look into database optimization, there is a layer of storage that you can optimize for the databases, and this is something the hyperscalers have done to a certain degree for some of their databases. Through this partnership, we are basically pushing the performance, the efficiencies, and other capabilities to the best that you can get. I also really appreciate the knowledge and the depth of implementation the team is bringing, and this is pushing us to build overall better storage. I'll give one more example, which is Cloud Insights.

Basically, today, a lot of the AWS field is using Cloud Insights as a way to demonstrate and show customers that are looking into migration use cases: this is how your existing environment looks, and this is how it will look inside AWS. It's basically helping us, or helping AWS in this case, accelerate the migration to the cloud. So I think there are many points of synergy. We gave two examples that are, I would say, already implemented at good scale.

Kris Newton
VP of Investor Relations, NetApp

All right. Gloria, I think you had a question from the webcast?

Operator

Yes. Aaron Rakers from Wells Fargo. As we think about NetApp's multi-cloud integration, do you have any color on how many of your traditional on-prem customers are leveraging NetApp's native cloud offerings? How has this progressed?

Ronen Schwartz
SVP and General Manager, NetApp

Obviously, we do know.

Kris Newton
VP of Investor Relations, NetApp

You can talk about what customers are doing, the deployments, the reasons why.

Ronen Schwartz
SVP and General Manager, NetApp

Yes. So-

Kris Newton
VP of Investor Relations, NetApp

Sorry, I've apparently put the fear of God in everyone.

Ronen Schwartz
SVP and General Manager, NetApp

Extremely well-trained. One of the advantages of having a cloud solution is you actually do know what customers are doing and how, at least at a high level. But, very specifically to the question, we're seeing, across many, many verticals, customers implementing in the cloud in one of three patterns. The first one is basically expanding their data center into the cloud. They do it when they are pressed on short-term storage availability but have long-term plans of managing data centers, et cetera. In this case, what they're doing is tiering and shifting backup and DR into the cloud. This is one pattern, definitely very well embedded and implemented.

The second one is basically migrating, or building the same or similar workloads in the cloud instead of on-premises. We've seen massive growth in SAP in the cloud and Epic in the cloud; we're seeing databases moving to the cloud; and there is a lot of push in VMware moving to the cloud. For customers like that, it's not that they are leaving their on-prem, but they are choosing, for different workloads, which should be on-premises and which should be in the cloud. Again, multiple years of customers adopting it in a very big way. And I think the third pattern is customers that are innovating in new workloads, and sometimes they are doing it with a cloud-first approach.

I think the most common one, and if you have the chance, I recommend you see the demo, is a lot of the AI workloads are innovated or starting from the cloud. Not always, but that's a very common pattern. I think others are implementing Kubernetes as a platform for their applications in the cloud. I think in all three of them, we're seeing really good adoption. And not just good adoption in the last six months, but actually good adoption in the last few years.

Kris Newton
VP of Investor Relations, NetApp

All right.

Ronen Schwartz
SVP and General Manager, NetApp

But maybe I will say one thing: we are also using the cloud as a way to acquire new customers. Obviously, AWS, Azure, and GCP have a very broad market reach. There are a lot of customers that NetApp does not necessarily have on-premises that will have their first NetApp ONTAP experience in the cloud. I think it's true also for the rest of the portfolio; it's definitely not limited to the existing customer base.

Kris Newton
VP of Investor Relations, NetApp

All right. Mehdi.

Mehdi Hosseini
Senior Equity Research Analyst, Susquehanna International Group

Just one follow-up, and I may have missed this. If you're focused on expanding business in the cloud, how do you prioritize AI projects? And I'm asking this because, to me, AI is more of an on-prem deployment. Do you find yourself competing with the other parts of the company? That's question A. And question B is, if I'm right that AI is more on-prem, then what are your thoughts about the future of NetApp's cloud business model?

Ronen Schwartz
SVP and General Manager, NetApp

I'm trying to see how I answer the part about whether AI is on-prem or cloud. Customers are making choices about where to design, where to innovate, where to deploy, and in many cases these choices involve both on-premises and the cloud. I have customers that I know developed in the cloud and deployed on-premises. I have customers that started development on-premises and deployed it at scale in the cloud. So I definitely don't see it as one or the other. I also don't think that NetApp should decide for the customer where they're gonna do their AI innovation.

Our goal is to support the customer with best-of-breed storage, optimized for their AI goals. I do think there is an interesting change in the market with GenAI, which is basically giving more value to unstructured data, where previously machine learning and so on brought more value, I think, to semi-structured and structured data. The patterns there are a little bit different. We are gonna support our customers in both of these journeys. With GenAI, just naturally, from OpenAI to Google Vertex AI, et cetera, a lot of it is done cloud first. As I said, I think our goal is basically to support the customers wherever they are.

I do wanna call out that, as part of Google Next, together with a bunch of announcements that were made, you could have seen basically one storage partner, NetApp, that has actually already done the full integration with Vertex AI, and that is supporting customers in how to augment the LLMs or the data models with proprietary data in a secured way, and how to bridge on-prem and cloud data. This is NetApp.

If you're here at the event, you'll be able to see similar demos, I think one on the main stage later today and two others in the sessions, of how tightly we're integrated into GCP AI and how tightly we're integrated into AWS, both SageMaker as well as the GenAI technology, and the same for Microsoft. So you'll see us doing that across the board. And we're doing it fabulously on-premises as well.

Kris Newton
VP of Investor Relations, NetApp

All right. Tim?

Tim Long
Managing Director, Barclays

Thank you. Two, if I could; the first one might be a quick one. One, as things evolve in the hyperscale or public cloud, do you think there's going to be any hardware play for NetApp at all, or is this gonna be predominantly software? And then second, on the software side, could you talk a little bit about talent and resources, and competing with other high-profile tech companies? Because I think with some of the acquisitions, maybe there had been some departures. So if you could just talk more broadly about how you're building the internal engine here and keeping it fueled with software talent. Thanks.

Ronen Schwartz
SVP and General Manager, NetApp

You'll start?

Pete Lilley
VP and GM of Instaclustr, NetApp

Two parts. You wanna do part one, and I'll do part two, or no, no? Maybe we just get... What was the first question? Could you repeat the first question again?

Tim Long
Managing Director, Barclays

Is there any hardware play?

Ronen Schwartz
SVP and General Manager, NetApp

Yeah. I know what-

Tim Long
Managing Director, Barclays

-that you're gonna pull in onto that question?

Ronen Schwartz
SVP and General Manager, NetApp

I'll start with that. You'll start with the second one. So definitely there are specific workloads that will be best supported by a single-tenant solution, and there are customers for whom that's what they want. How does the customer consume it, from a subscription and so on? I think there is a lot of flexibility there. So I do think there are hardware opportunities as well. In some places you will get full visibility because it will be basically public; in other places, it's just behind a service that is running, and it remains anonymous from that perspective. So I definitely see that, as larger workloads are moving, single tenant will make a lot of sense in some places.

Pete Lilley
VP and GM of Instaclustr, NetApp

On your second question, and thank you, it's a really good question. I'll talk from an enterprise open source perspective. One of the interesting value propositions that Instaclustr brings to customers is that finding resources to run these types of technologies at scale, with deep open source knowledge, is actually really, really difficult. It's highly competitive; there isn't a sufficient level of expertise out there for all of the industry to consume from the available talent pool. And so as a business, it's really critical to have your own internal programs to develop the engineering talent you need to sustain the capability and the competitive advantage that you've got.

We realized from day one, when we founded the company, and this is a story long before NetApp, that that would be a constant challenge, and we had to build a robust program to create, train, raise, and sustain exceptional open-source software engineers, both from a DevOps perspective and a development perspective. We continue to do that today.

Kris Newton
VP of Investor Relations, NetApp

And then, just to add, Tim, I think you were asking about NetApp internally. How do we attract and retain key engineering talent? I think, not to speak for Ronen-

Ronen Schwartz
SVP and General Manager, NetApp

You want to-

Kris Newton
VP of Investor Relations, NetApp

... but I will. You know, one of the key advantages we have is ONTAP, right? We are the only company with a single primary storage operating system, and that enables our R&D to leverage really broadly, right? So we get a lot of leverage, and maybe you wanna actually add some color to that.

Ronen Schwartz
SVP and General Manager, NetApp

Yeah. So I think when you look at it, and I think the first part, which you're absolutely right, is we are basically building on the foundation of ONTAP with a lot of talent and skill set that we have and we have built through the years, and basically optimizing that into the three clouds. So we're building on a very, very strong foundation, unlike, you know, you build some new storage offering from scratch, right? We're also leveraging at scale, and especially in storage, it's really, really important, the testing framework, the scale framework, the resiliency, and all of that, we're leveraging it at scale.

I think for the cloud-specific talent that we have: like any other company, we are based on people, and that's the most critical asset we have. I sometimes tell my team: if you want a front seat for the cloud journey and the AI journey, we are actually that front seat, right? You see three cloud vendors firsthand, as the first party working with them, and you get to see a lot of innovation as it comes to the market, even ahead of the time it comes to market, through this tight partnership. So I think it's a very, very exciting place to be in general, and definitely for developers.

Kris Newton
VP of Investor Relations, NetApp

All right. Got a question in the back.

Frederick Gooding
Equity Research Associate, William Blair

Hi, how are you doing? Frederick Gooding with William Blair. Just wondering if you could expand a little bit on the go-to-market motion; you guys talked a lot about on-prem and cloud. I'm wondering a little more about how you're telling sales reps to think about that. And more specifically, I believe Jeff mentioned that, in terms of your relationship with partners, you're making it easier for them to make money. I wonder if you could provide a little more color on that.

Kris Newton
VP of Investor Relations, NetApp

So probably not the right team to talk about the VAR partner program that we just launched, but certainly talking about how we go to market with our cloud partners is a great one, and I can follow up with you on the other one.

Ronen Schwartz
SVP and General Manager, NetApp

Yeah. So I think the word I would use is aligned, especially since the beginning of the year. We aligned our specialist cloud sellers, and we have teams focused separately on AWS, on Azure, and on GCP. We basically aligned them to the go-to-market structure of the hyperscalers themselves, with the goal that the group behind them creates the design wins per workload, and then the sales team focuses on enabling the hyperscaler sellers and, even more important, enabling the hyperscalers' workload specialists. In some places they're called black belts; in other places they're called workload heroes.

Every hyperscaler has a little bit different terminology, but we're basically aligning ourselves with them so that we are empowering them in parallel to empowering the end customer. This actually gives you very, very good leverage, right? If you're a hyperscaler seller, you have, I don't know, 200 or 300 things to potentially sell. We are basically helping them identify the best solution for the customer for the different workloads, and that's kind of our focus. We do it systematically, meaning there are design wins, there are published calculators and so on, and then we're basically helping the customers and the hyperscaler sellers prove the value, leveraging these tools.
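The published calculators Ronen refers to are, at their core, per-workload total-cost-of-ownership comparisons. A deliberately simplified sketch, with entirely hypothetical rates rather than NetApp's or any hyperscaler's actual pricing:

```python
# Toy per-workload TCO comparison of a managed file service versus a
# self-managed build-out. Every rate below is a hypothetical assumption.

def monthly_tco(capacity_tb: float, price_per_tb_month: float,
                ops_hours_month: float, hourly_rate: float) -> float:
    """Capacity cost plus operational labor, per month."""
    return capacity_tb * price_per_tb_month + ops_hours_month * hourly_rate

managed = monthly_tco(100, 120.0, ops_hours_month=5, hourly_rate=90.0)
diy = monthly_tco(100, 80.0, ops_hours_month=60, hourly_rate=90.0)

print(f"managed: ${managed:,.0f}/mo  self-managed: ${diy:,.0f}/mo")
```

With these assumed numbers, the cheaper raw capacity of the self-managed option is outweighed by the operational hours, which is exactly the kind of trade-off a workload calculator makes explicit to the seller and the customer.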

So that's basically the main motion. There is also a motion for the hybrid customers. I think Jeff was just describing it toward the end: hybrid customers that are using the cloud for resiliency, using the cloud to augment their on-prem. NetApp supports that through the direct sellers that work with the customers and through the regular pre-sales organization.

Kris Newton
VP of Investor Relations, NetApp

David.

David Vogt
Managing Director and Senior Equity Analyst, UBS

Great, thanks. I just wanna go back to the comment you made about working with your customers that are designing or developing applications, either on-prem and moving to a public cloud, or innovating in the public cloud and moving back, and it's not your position to tell them how to run their business. But can you talk to how you bring CloudOps to that conversation, to help them manage the potentially incrementally higher costs or complexities or technical challenges that they may face going back and forth in either direction? And should that ultimately be sort of an incremental service that most customers take, because you're trying to optimize and solve for a potentially complex solution?

Ronen Schwartz
SVP and General Manager, NetApp

Yeah. You want me to start, or you start?

Pete Lilley
VP and GM of Instaclustr, NetApp

Off you go. You look like you're about to.

Ronen Schwartz
SVP and General Manager, NetApp

Yeah. I'll start a little bit, and then you'll continue. I think that you said it already: especially in the last six to nine months, cost is an important thing for each and every customer, right? So as we are bringing, I mentioned the TCO calculators and all of that, right? Basically, part of giving the customer the right recommendation for a workload does include the technology best practices, but it also includes the total cost of ownership recommendation. And this is where the CloudOps portfolio comes into play, right? And it comes into play on multiple levels. Is that a compute-heavy workload? Can we help you with compute optimization? Is this a database workload?

Can we help you with a full service of the database, delivering the database as a service to you as a customer? Can we help you with insights that help you find the bottlenecks across your entire environment so that the workload is optimized? So I think if you go from the workload, then storage is a critical component, but the entire optimization, where our CloudOps portfolio is focused, is a natural next step.
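
The TCO-based workload recommendation described here can be sketched as a simple cost comparison. This is an illustrative sketch only; the function, rates, and workload sizes below are assumptions for the sake of example, not NetApp's actual calculators.

```python
# Hypothetical TCO comparison: lift-and-shift versus a right-sized
# ("optimize, then migrate") deployment. All rates and sizes are
# illustrative assumptions, not NetApp figures.

def monthly_cost(compute_vcpus, storage_tib, vcpu_rate=30.0, storage_rate=100.0):
    """Simple cost model: $/vCPU-month plus $/TiB-month."""
    return compute_vcpus * vcpu_rate + storage_tib * storage_rate

# As-is migration: peak-sized on-prem footprint carried into the cloud.
lift_and_shift = monthly_cost(compute_vcpus=64, storage_tib=50)

# Optimized: right-sized compute plus storage efficiencies (dedupe, tiering).
optimized = monthly_cost(compute_vcpus=40, storage_tib=30)

savings = lift_and_shift - optimized
print(f"lift-and-shift: ${lift_and_shift:,.0f}/mo")
print(f"optimized:      ${optimized:,.0f}/mo")
print(f"savings:        ${savings:,.0f}/mo")
```

A real calculator would fold in egress, licensing, and operations labor; the point is only that the recommendation compares total cost, not just the storage line item.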

Pete Lilley
VP and GM of Instaclustr, NetApp

It is, it is. And, you know, we're able to bring to bear a complete team of technical account managers, the sales and account management teams, and the customer success engineers, to get engaged with the customer that's looking to deploy in multiple contexts, and to look for and position the right kind of solutions at the right time that really bring benefit to the customer at whatever stage of the cloud journey they're in, whether it's going one way, going the other, or going both ways at the same time.

And then, talking about my part of the business, which is the Instaclustr business around enterprise open source, the experience that we offer the customer is really an as-a-service experience, both on-prem and in the cloud. So it's natively the same, and it will get more the same as we continue developing those capabilities. So really, the customer has maximum flexibility, but they just need access to the right tools and capabilities at the right time to help them make those TCO-based decisions.

Ronen Schwartz
SVP and General Manager, NetApp

I think the-

Kris Newton
VP of Investor Relations, NetApp

Yeah, can you run the mic back to David?

Ronen Schwartz
SVP and General Manager, NetApp

Yeah. Just to finish one last point on this. Customers that only do an as-is migration to the cloud get very limited benefits. The idea is that you migrate and then optimize, and/or optimize and then migrate, and this is where it's a very natural fit, and we're able to guide the customers on that journey.

David Vogt
Managing Director and Senior Equity Analyst, UBS

That was gonna be my follow-up. So I know it's not your purview in terms of helping them design or innovate their applications, but when are you brought into a discussion from the CloudOps side? Is it as the innovation is happening, or is it a customer saying to you, "Hey, we've got this great application we're working on, this workload. Do you think we should develop it in the cloud? Do we develop it on-prem?" Or is it more of a secondary consideration at some point in the journey, effectively, from the customer's perspective?

Pete Lilley
VP and GM of Instaclustr, NetApp

So we see customers, I think, at all stages. It can be, "We're doing something new, help us understand what the deployment challenges might be." They may have a strategy in mind. Thinking about a customer that we're working with at the moment that is looking to make a move from a hyperscaler-deployed environment back to an in-house one, they're giving all of that consideration around TCO. And so there's an engaged process in that: having a discussion with the customer about all of the benefits of doing so, what that cost will be and how that falls out, and what the opportunities are to optimize.

And my experience working with our customers has been that, as Ronen was saying, optimization doesn't necessarily happen out of the box either. It can often be a peak, followed by some optimization, followed by another peak, followed by some future optimization as features and capabilities evolve. But the whole trend over time is towards optimization.

Ronen Schwartz
SVP and General Manager, NetApp

I think that a workload implemented in the cloud is not a one-time opportunity.

Pete Lilley
VP and GM of Instaclustr, NetApp

Yeah.

Ronen Schwartz
SVP and General Manager, NetApp

Even if you arrived at a late stage of the current workload, the next one is just around the corner, right? So, as you say, if you show and explain the value, you'll be early in the next one.

Pete Lilley
VP and GM of Instaclustr, NetApp

Exactly. I mean, the other types of customers that we experience range from the planned, doing-a-project kind through to, "My cluster is on fire, please help us now, because it's a critical application." That's another example of a customer where you're almost in rescue-type optimization to get that customer stable, and then you bring them into the optimization discussion off the back of that.

Kris Newton
VP of Investor Relations, NetApp

All right, Sidney.

Sidney Ho
Equity Research Analyst, Deutsche Bank

Thanks. Sidney with Deutsche Bank. I'm gonna toe the line about financials, but on the last earnings call, you did talk about the shift towards first-party storage services versus subscription. Is that simply the cyclical nature of the subscription business going up and down, or is it a more strategic move towards first-party storage, whether from NetApp's or the customer's perspective? Thanks.

Kris Newton
VP of Investor Relations, NetApp

I'll clarify our statements and then hand it over to Ronen. Basically, we said we believe our emphasis and our biggest opportunity is around those first-party storage services. We see that as an absolute unique differentiator for the company, a massive opportunity, and that's where we're really putting the wood behind the arrow. Ronen.

Ronen Schwartz
SVP and General Manager, NetApp

Yeah, I'll second what you said. I think if you look into any customer at any stage, whether they were or were not NetApp customers, a very high percentage of them have an existing, relatively large commitment to the hyperscalers. Being a 1P, or first-party, offering means there is no need for a new contract. There is no need for a new engagement. There is only a need for the workload, or the team that works on the workload, to make the right choices when it comes to the infrastructure for this workload. It's a massive advantage. This advantage translates, from a financial perspective, to consumption, 'cause they don't need to sign a new agreement. They don't need to have a new commitment.

They just need to start using this environment, and that usage translates into consumption, and that consumption is what you see eventually in the financial report. I think what we're saying is that now that we have agreements like that with all three hyperscalers, we'll see that consumption really translating into revenue.
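
The first-party consumption dynamic described here can be illustrated with a toy drawdown model: first-party storage usage bills against a customer's existing hyperscaler commitment instead of requiring a new contract. The commit size and monthly figures below are invented for illustration, not NetApp or hyperscaler data.

```python
# Sketch of the 1P/first-party dynamic: usage draws down an existing
# hyperscaler commit, so adoption needs no new agreement. All numbers
# are illustrative assumptions.

commitment = 1_000_000                              # existing annual commit, $
monthly_usage = [20_000, 30_000, 45_000, 60_000]    # 1P storage consumption, $

remaining = commitment
for month, spend in enumerate(monthly_usage, start=1):
    remaining -= spend   # no new contract: usage simply bills against the commit
    print(f"month {month}: spent ${spend:,}, commit remaining ${remaining:,}")
```

The growth in the monthly figures is the consumption ramp that eventually shows up as recognized revenue.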

Kris Newton
VP of Investor Relations, NetApp

All right. Well, thanks, guys. I really appreciate your time. Thanks everyone for your great questions. I'll set you free to go talk to customers.

Ronen Schwartz
SVP and General Manager, NetApp

Thank you. Yes.

Pete Lilley
VP and GM of Instaclustr, NetApp

Thank you.

Anthony Lloyd
VP of Technology Services, OpenText

Thanks, everybody.

Kris Newton
VP of Investor Relations, NetApp

All right. Thank you. Thank you. All right. Well, now I think it's the part of the event that you guys have all been waiting for: some actual, real customers that you can hear from about what they're doing and what their big challenges are. So with that, I'd like to invite Anthony from OpenText and Phil from Lawrence Livermore National Labs to come up on stage. And I did that from memory, so I greatly apologize. It's been a long day already. Hey, thank you so much. Hey, Phil, thank you so much.

Philip Adams
CTO, Lawrence Livermore National Laboratory

Thanks.

Kris Newton
VP of Investor Relations, NetApp

All right, why don't you guys have a seat?

Anthony Lloyd
VP of Technology Services, OpenText

Thank you.

Kris Newton
VP of Investor Relations, NetApp

So we're going to hopefully get most of our questions from the audience, but I'm going to kick it off by just asking each of you to introduce yourselves, where you're from, and kind of what your IT challenges and environment look like. That might take a while, but it'll give them some time to queue up some questions.

Anthony Lloyd
VP of Technology Services, OpenText

Sounds good. I'm Anthony Lloyd. I'm VP of Technology Services at OpenText. I manage all of the infrastructure and operations for corporate IT. That covers everything from data centers, cloud, network, telecom, storage and compute, end user services, site support. Gosh, and there's more: the service desk, the operations center, and pretty much anything that touches an application or an end user. I'm responsible for corporate IT, and we have a line of demarcation between corporate IT and the commercial side of the business, because we protect all of the back-end systems, the HR systems, financial systems, things of that nature. The commercial side of the house really handles all of the customer-facing applications that we sell for revenue.

Kris Newton
VP of Investor Relations, NetApp

All right, great. Phil, a little bit about you?

Philip Adams
CTO, Lawrence Livermore National Laboratory

I'm Philip Adams. I'm CTO for the National Ignition Facility at Lawrence Livermore National Laboratory. I'm responsible for all of the infrastructure, everything from the underpinnings that run the control system, to how we analyze and process data, to making sure that we have a 30-year scientific archive available for our researchers and our visiting scientists. It's quite a bit of a task and challenge to manage something that broad, that vast, for requirements that are always changing.

Kris Newton
VP of Investor Relations, NetApp

Right. Well, you guys are also doing some really cutting-edge innovation and technology, so that's got to be a pretty data-intensive environment. How do you think about setting up your IT environment to deal with the massive quantities of data that you must face?

Philip Adams
CTO, Lawrence Livermore National Laboratory

You know, we tried very hard to make sure that we looked at ourselves not as a unique entity in the environment. It's very easy, especially for a national lab, to say: "Okay, we're going to go off. We're so different and varied in our needs that we're going to go build something unique." And what we ended up doing is saying: "Look, we've got the same Lego blocks that are available to almost everybody else.

Let's take that innovation that has been done in industry and assemble it in a unique way, to be able to do low-latency operations for a control system, in a way that provides life cycle management of data over time, and append that to our databases." You know, being able to leverage technology as it is makes sure that my team can spend more time helping the scientists rather than trying to uniquely innovate things that industry has already figured out how to do.

Kris Newton
VP of Investor Relations, NetApp

All right. Well, and Anthony, when we were talking last night, you were telling me about the massive M&A pipeline that you have to deal with and integrating all these different companies. Why don't you say a few words about some of those challenges?

Anthony Lloyd
VP of Technology Services, OpenText

Certainly. So at OpenText, we grow by acquisition, and because we grow by acquisition, we have the opportunity, and I'll use that term, of trying to figure out what the best solution is to integrate things that we may not always know about. We go through a due diligence exercise, but you don't always get the full picture until you get under the covers. So part of this is having the flexibility to integrate different technologies, different solutions from different sources around the globe, and to do that in a seamless manner that allows us to do it in a short period of time, in a very secure manner, and not have to recreate the wheel every time we do it.

NetApp gives us a lot of those capabilities because we operate in all of the hyperscalers. We have the ability to acquire and integrate anywhere in the world, and one of the beauties that we have is no matter what that environment is, we can have a solution from NetApp that allows us to get that done successfully and quickly.

Kris Newton
VP of Investor Relations, NetApp

All right. Phil, how do you use NetApp?

Philip Adams
CTO, Lawrence Livermore National Laboratory

We use a lot of NetApp technologies in our environment. We leverage FlexPod and AFF in order to get low-latency compute in our environment and low-latency access to data. We've leveraged FabricPool and StorageGRID behind our Oracle databases, to give us transparent data life cycle management on that data set. We leverage SnapManager and SnapVault in order to do a comprehensive backup environment. So, pretty much quite a bit of NetApp's portfolio of applications in the suite.

Kris Newton
VP of Investor Relations, NetApp

Definitely. And I see you nodding along, so it sounds like you're also using a pretty broad swath of our technologies.

Anthony Lloyd
VP of Technology Services, OpenText

Basically the same technology, plus a few more. We really rely a lot on ONTAP and Google Cloud Volumes. We use a lot of that technology because it allows us to quickly integrate different solutions and not have to go out of the box to figure out a different way of doing things every time we encounter a different acquisition model.

Kris Newton
VP of Investor Relations, NetApp

I'm loving this because I'm seeing you nod. So definitely, you guys are using so much of the NetApp technology, you're forgetting the different names of the products. That's kind of exciting to me. All right. Well, let me turn to our audience and see if there are any questions. If you guys have questions about how these guys are utilizing technology, the challenges they face, it's your chance to ask real customers real things. Steve?

Steven Fox
Founder and CEO, Fox Advisors

One of the hardest things to figure out is-

Kris Newton
VP of Investor Relations, NetApp

Oh, say who you are for the podcast.

Steven Fox
Founder and CEO, Fox Advisors

Oh, sorry. Steven Fox with Fox Advisors. So one of the hardest things to figure out as an outsider is how you become more efficient with storage. It's generally thought to be a consumable, but it seems like every cycle you're able to consume more with less. So can you talk about how you use NetApp to do that, and what that means for your infrastructure purchases now versus maybe three years ago?

Anthony Lloyd
VP of Technology Services, OpenText

Certainly. Well, what we've been doing, primarily, if it's in the cloud, obviously, is using ONTAP or Cloud Volumes, but if it's on-prem, we have various solutions depending on whether it's a filer or an A700, depending upon the performance and everything else associated with it. What we're doing now is really trying to move to more of a consumption-based model so that we don't have to make those CapEx investments.

So we're really looking at Keystone, and at how we can use that as capacity on demand, whether we use it on-prem or in the cloud. That allows us to quickly deploy what we need: not just buying additional capacity, which you end up doing when you deploy on-prem, but buying what you need, and then you can quickly spin up additional capacity or downsize it based on your needs. That gives us a great deal of flexibility, and it really helps us manage our financials much more efficiently.
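
The CapEx-versus-consumption trade-off described here can be sketched numerically: buying peak capacity up front versus paying monthly for what is actually used. The growth pattern and prices below are illustrative assumptions, not Keystone pricing.

```python
# Sketch of CapEx (buy for peak, carry unused headroom) versus a
# consumption model (pay per TiB actually used). All figures are
# illustrative assumptions.

usage_tib = [100, 110, 125, 125, 140, 160]   # monthly used capacity, TiB

# CapEx: purchase projected peak plus headroom on day one.
capex_capacity = 200                          # TiB bought up front
capex_cost = capex_capacity * 600             # $/TiB purchase price

# Consumption: pay only for the TiB consumed each month.
consumption_rate = 90                         # $/TiB-month
consumption_cost = sum(u * consumption_rate for u in usage_tib)

utilization = sum(usage_tib) / (capex_capacity * len(usage_tib))
print(f"CapEx outlay: ${capex_cost:,}, average utilization {utilization:.0%}")
print(f"Consumption cost over 6 months: ${consumption_cost:,}")
```

The low average utilization is the oversubscription Anthony mentions: on-prem CapEx pays for headroom whether or not it is used.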

Kris Newton
VP of Investor Relations, NetApp

Phil, anything you-

Philip Adams
CTO, Lawrence Livermore National Laboratory

Yeah, I'd say that, to first order, the National Ignition Facility is a research project where we're trying to study the phenomena of high energy density science. So we capture all the data, and we store all the data for 30 years, because of just how hard it was to get the data in the first place. And there are a couple of ways we've tried to address what you were asking about, which is how we're thinking about reducing costs. One was the life cycle management that we did on the data, to reduce the total cost of ownership of storing it.

Once we get a better understanding of the types of data that we need to fuel our machine learning algorithms, to understand exactly the types of data that you really want to store, we can get a little bit more efficient in terms of doing localized processing nearest to the diagnostic endpoints. Maybe then we can be a little bit more intelligent at that point and only feed up the data that we really want to store long term.

But we're in a cycle right now where everything's new, everything's amazing. You don't really know where the breakthroughs are going to be, so you've got to keep it all for right now. I think, as time goes on, we're going to be a lot more efficient in the way we do analysis and data. But for the time being, it's all about how to reduce the cost to maximize taxpayer money.
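
The data life-cycle economics described here, aging cold archive data off the performance tier the way FabricPool tiers to StorageGRID, can be sketched as a two-tier cost model. The rates and capacities below are invented for illustration, not lab or NetApp figures.

```python
# Sketch of tiered-archive cost: a 30-year archive where most data ages
# off the performance (flash) tier to a cheaper object tier. Rates are
# illustrative assumptions.

HOT_RATE = 250.0    # $/TiB-month, performance tier
COLD_RATE = 20.0    # $/TiB-month, object/archive tier

def archive_cost(total_tib, cold_fraction):
    """Monthly cost when `cold_fraction` of the archive sits on the cold tier."""
    cold = total_tib * cold_fraction
    hot = total_tib - cold
    return hot * HOT_RATE + cold * COLD_RATE

no_tiering = archive_cost(1000, cold_fraction=0.0)   # everything on flash
tiered = archive_cost(1000, cold_fraction=0.9)       # 90% aged off to object

print(f"all-flash: ${no_tiering:,.0f}/mo, tiered: ${tiered:,.0f}/mo")
```

The transparency matters as much as the arithmetic: the application still sees one volume while the cold blocks live on the cheaper tier.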

Kris Newton
VP of Investor Relations, NetApp

Are you guys using the full suite of NetApp efficiency tools, so that you're effectively storing more data than you have space for?

Philip Adams
CTO, Lawrence Livermore National Laboratory

Yes.

Kris Newton
VP of Investor Relations, NetApp

All right. Tim?

Tim Long
Managing Director, Barclays

Hi. Anthony, if I could just follow up: you talked about the move to more consumption-based, as-a-service. Is that unique to storage for you, or are you looking across some of the other silos of infrastructure? And is there anything that's different when you look at storage compared to the others? And then maybe for both of you: when you look at your whole storage environment, is it all NetApp? Do you have other vendors, competitors of NetApp? You don't have to name who they are, but maybe you can just talk a little bit about how you choose a certain vendor for a certain application or use case. That'd be great. Thank you.

Anthony Lloyd
VP of Technology Services, OpenText

So to answer your first question, yes, we are looking to move to a consumption model across all of the infrastructure. We've already started that process with the compute platforms, and it's worked out well. Now we're starting to go down that path with the storage platforms. One of the challenges, of course, you face anytime you change from a CapEx to an OpEx conversation is, what's the cost construct? And we have to make sure that it makes sense financially in order to do that.

Now, the price points are becoming very competitive, so because of that, we can start to exercise that option. To answer your second question, we grow by acquisition, so we may get one of everything depending upon what the acquisition is. But at the end of the day, NetApp is our standard, and so we move from whatever that alternative storage device may be to NetApp over time as we integrate.

Kris Newton
VP of Investor Relations, NetApp

I assume you have a lot of stuff in your environment.

Philip Adams
CTO, Lawrence Livermore National Laboratory

Well, you know, we looked very long and hard at the requirements we had. Like I said, there are a lot of varied requirements, from a control system to a 30-year archive, and the needs of a storage device really change, in terms of everything from subsecond response to, okay, how do I store this big chunk, petabytes of data, for the long term? So, in that case, once we started putting everything down on the list, we said: Okay, how is it going to be available? How is it going to be reliable? How is it going to be manageable? And more importantly, how do you back it up? If something happens to the data, like a ransomware issue, God forbid, how do you deal with that?

And so once we looked at all of our requirements and analyzed them, we went through a pretty long decision path, and we chose NetApp. And then once we did, it was very easy for us to go, "Okay, let's keep implementing the technologies that make sense for us." And they built on top of each other to give us a very comprehensive method for handling our data.

Kris Newton
VP of Investor Relations, NetApp

Question in the back.

Frederick Gooding
Equity Research Associate, William Blair

Frederick Gooding with William Blair. Just curious: we talked about AI earlier in this session, so just wondering, A, are you guys looking to implement more AI capabilities within your storage environments? And, B, do you see that as a whole other budget in your minds for IT infrastructure, or is it part of your existing budget for storage in general?

Anthony Lloyd
VP of Technology Services, OpenText

So we are looking to implement AI where it makes sense for us to do it. Obviously, we have the concerns that everybody else has. AI has tremendous capability, but you also have to weigh the security concerns, as well as ensuring that you have the ability to separate your environment in the event that you're using a capacity-based solution, or you're operating in a shared environment, such as Office 365, where Microsoft has implemented AI in their solution. So one of the first questions I asked was: "Okay, how are you making sure that if somebody else's tenant becomes compromised, you're protecting me?" Those are things you have to really focus on and ensure that you've got that validation. So yes, we do want to take advantage of it.

We are putting it into our own products, but it's also the understanding and awareness that you don't want to let the genie out of the bottle until you know what the bottle can do. You can get good things, but good things can turn bad very quickly if you don't have the right security, the right ethical controls, and the right governance model around it.

Philip Adams
CTO, Lawrence Livermore National Laboratory

You know, we live in this time where you're always wondering if you're getting the right news, the right information. There's big buzz around fake news, and AI is certainly subject to that, right? The data that comes in, how you train your models, and the decisions it's gonna make from that are really key. You saw with the SolarWinds exploit, for example, that that was a way of poisoning the well, and it impacted a lot of companies. I see AI as having a lot more positive benefits than negative ones, but governance is gonna have to be key in terms of how we address those things.

Making sure that we have a very tight handle on how we're feeding that AI tool, and then on the decisions that are being made. Is there a check on the process that's making the ultimate call? If you just automatically go with it and there's no counter-check, that may be problematic. At Livermore, we've been experimenting with AI for quite some time. In our particular environment, I think we're still at the machine learning phase, again, because of where we are right now, trying to train the models and making sure that things are right. As they say, you know, some models are useful. So it's an iterative process until you get to the point of something that is gonna be interesting in our environment.

Kris Newton
VP of Investor Relations, NetApp

Well, I think that iterative modeling part of AI is really interesting, because I think so many people just wanna think, well, you train the model and you're done.

Philip Adams
CTO, Lawrence Livermore National Laboratory

Right.

Kris Newton
VP of Investor Relations, NetApp

Right? But I would imagine you're having to constantly fine-tune your pre-trained models with new information as it comes in. How do you think about keeping things current and moving them along?

Philip Adams
CTO, Lawrence Livermore National Laboratory

You know, I'll give you an example. There are two areas in our program where we spend a lot of time trying to get very deep, and one is optics inspection, 'cause we're looking for defects in our optics. As you put fluence through glass, you can get these little inclusions, and after a while, we have this optical loop process that recycles the optics. So it's always a question of, well, when do you land the plane? Or, in the case of NIF, when do you bring it down for maintenance to pull those optics out and refurbish them? That means less time for doing experiments, less time for science, so we wanna make sure that we are very smart about when we do that type of event.

The other issue that we spend time looking at is the small little BBs that we fill with hydrogen, which are actually our main target. We're pushing the edge of manufacturing as we know it, so there are always the pits and little inclusions that we need to deal with to make sure we machine what's possibly the most smooth surface that we know of on the planet. And we're using machine learning to find out: is that a defect, or is that a piece of dust or something that we need to deal with?

And so those algorithms that we have to actually do detection are where we're spending a lot of time right now. Eventually, those things that are now saying, "Okay, it's a defect, it's not a defect," are gonna get smarter as our scientists get more involved, to make sure they're making more right decisions than wrong decisions.
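
The defect-versus-dust classification described here can be illustrated with a toy nearest-centroid classifier over two simple image features. The features, values, and labels below are invented for the sake of example; NIF's actual inspection algorithms are far more sophisticated.

```python
# Toy defect-vs-dust classifier: assign each observation the label whose
# class centroid is nearest in feature space. All data is hypothetical.

import math

# (size_um, circularity) -> label, from hypothetical labeled inspections.
training = [
    ((120.0, 0.30), "defect"),   # inclusions: larger, irregular
    ((140.0, 0.25), "defect"),
    ((15.0, 0.90), "dust"),      # dust: small, near-circular
    ((20.0, 0.85), "dust"),
]

def centroid(label):
    """Mean feature vector of all training points with this label."""
    pts = [f for f, l in training if l == label]
    return tuple(sum(c) / len(pts) for c in zip(*pts))

def classify(features):
    """Label of the nearest class centroid (Euclidean distance)."""
    return min(("defect", "dust"),
               key=lambda l: math.dist(features, centroid(l)))

print(classify((130.0, 0.28)))   # near the defect centroid
print(classify((18.0, 0.88)))    # near the dust centroid
```

The iterative part Phil describes corresponds to scientists relabeling borderline cases and re-deriving the model, which shifts the decision boundary over time.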

Kris Newton
VP of Investor Relations, NetApp

Is it a hot dog? Is it not a hot dog?

Philip Adams
CTO, Lawrence Livermore National Laboratory

Right.

Kris Newton
VP of Investor Relations, NetApp

All right, Meta in the back.

Meta Marshall
Executive Director and Senior Equity Analyst, Morgan Stanley

Maybe a question for Anthony. It sounds like you guys have a pretty standard playbook for bringing a company in, maybe starting with moving to Keystone and figuring things out from there. I guess I'm just wondering how that playbook has changed over time; I'm sure the economics of the scale that you've gotten have changed some of the weighting. So has it been more towards subscription? Has it been to optimize on-premises? Just how has that playbook changed over time?

Anthony Lloyd
VP of Technology Services, OpenText

Initially, you know, we didn't have the technology available to really consider off-prem, and so we invested heavily in on-prem solutions. As the ability came along for us to get things in the cloud, and NetApp brought those capabilities along, we started to move to those models much more expeditiously, for a couple of reasons. Number one, it enabled us to spin up that capacity much more quickly; but the other thing was we were able to get what we needed versus having to oversubscribe, and that was extremely beneficial to us. Now that we have the on-demand capacity available from solutions like Keystone, it gives us even greater flexibility, because we grow by acquisition. We may close a location, we may close data centers, we may do all of these things.

I cannot be reliant upon having something in a site that may go away at some point in time. So we're moving everything, not just our storage, but all components of our infrastructure where it makes sense, into the cloud. We've moved to an SD-WAN solution that gets us off-prem, so we're not reliant upon those things. So many other components of our environment are in the cloud when it's financially and functionally feasible and worthwhile. But if it doesn't fall into those categories, then we remain on-prem, with a limited footprint, because we know the likelihood of those locations being impacted is very low.

Kris Newton
VP of Investor Relations, NetApp

All right, I saw some more hands. Sidney.

Sidney Ho
Equity Research Analyst, Deutsche Bank

Thanks. Sidney with Deutsche Bank. I want to follow up on the AI question from earlier. One of the speakers earlier talked about how, when you build this AI infrastructure, you buy the GPUs first, and then the next thing, you buy storage. From your experience, it sounds like there is a lag between them; you're talking about the iterations of evaluating the need. What is that lead time gonna be like? And with the consumption model that you guys are going after, does that mitigate the need for buying storage in a certain period of time?

Anthony Lloyd
VP of Technology Services, OpenText

So there are pros and cons to everything. I would say that one of the biggest challenges we face is making sure that the technology has the right capabilities, particularly from an AI perspective. For me, and my environment is a little bit different than yours, it's: how do I get automation, self-healing, all of those things that reduce any possibility of us having downtime, or of us having to have manual intervention to remediate anything that may occur? Moving to that on-demand solution, we don't have to worry about doing upgrades. We don't have to worry about maintaining it. We don't have to worry about that; that's on our partner.

So from that perspective, depending upon their adoption and integration of AI into their environments, those things may come faster or slower. Now, if you're building it yourself and deploying it in your own environment, like Phil is, that's a little different. In my world, I opt not to do that. I'm pretty much a standard infrastructure and operations person: I've got to be able to scale and get performance and reliability as cost-effectively and efficiently as possible. In his world, he's doing a lot of very custom things, so his challenge is much more difficult than mine.

Kris Newton
VP of Investor Relations, NetApp

All right, Phil, let's hear about that difficult challenge.

Philip Adams
CTO, Lawrence Livermore National Laboratory

Well, if I understood your question, I think it's a little cart-before-the-horse, because for me, it's all about the data first. You start collecting data on things that you find interesting, that you've instrumented in your environment, and then you start having questions: okay, what do I want to learn out of that? What deeper meaning is in there? And that's where you need more compute. You start doing things like data wrangling and figuring out how you're going to make sense of this, and you start organizing that data, with everything from data provenance to the systems you're gonna use to organize it and get deeper analytics from it.

Then you find out, wow, I don't have enough compute to be able to answer that question, and you start limiting yourself: well, I can answer these types of questions, but not these. I may not be able to forecast fully for where I want to be. If you're lucky, you can get more hardware, more equipment, and go further and answer those things, all the way up until you go, "Man, I really could use a GPU."

So I don't see going down that path of wanting GPUs until you've started that journey as a data scientist, looking through everything, saying: okay, I'm at that point now where I can't build those neural models until I have enough cores to really chew through this. It doesn't allow you to escape the hard work. The hard work is the data wrangling, the cleaning of the data, the organizing of the data. All that stuff is time-consuming.

Kris Newton
VP of Investor Relations, NetApp

Yep. David, down here in the front.

David Vogt
Managing Director and Senior Equity Analyst, UBS

Sorry. Thanks again, guys, for doing this. So maybe for Phil: I would assume, given your status as a national laboratory, nothing that you do is in a public cloud. It's all self-contained within the laboratory as an effectively on-prem solution. So does that change your thought process? You mentioned at some point down the road you need a GPU. Does that change your thought process in terms of what your ultimate infrastructure looks like down the road? Is it definitively that you have to go down an NVIDIA GPU-type solution, or are there other interconnect-based solutions, like Slingshot, that are more than adequate to meet the needs that you have? So I'm just curious about how you're thinking about that.

Philip Adams
CTO, Lawrence Livermore National Laboratory

That's a very good question. Let me think about that. So I'll say, we have GovCloud available to us. Anything that is a FedRAMP cloud, we can use, where we have an ATO in place to use it. And we have leveraged it for certain things. Certainly, we have outside vendors that we have as trusted partners that do things for us, from creating diagnostics to recalibrating things, and sometimes they send data back to us. So we leverage AWS and S3 so that they can upload data into that, and we download from that, for example. So there is some footprint that the National Ignition Facility has in the cloud.

The greater lab has been using GovCloud quite a bit and pushing a lot of other things out to the institution, but I personally can't speak to that. We have, from the NIF, had some requirements where we did need some GPU capability, and there was a certain set of work that we could not farm over to supercomputing because they're too busy. The pattern's full, sorry. And we said, "Okay, let's see if we can spin up some stuff on AWS, get it going, and answer this question." I think it's very powerful for bridging the gap between the time to order, the time to deliver, and the time to implement in our data center, versus: okay, team, we've got some time to rent somewhere else and be able to do that.

So those are powerful. But the other thing that we look at in our environment, since we have an on-prem building, is some of the safety and failure modes and effects analysis. There will always be an on-prem presence, because you want safety interlock systems and those kinds of things to be locally controlled, and those are good things. So we are very careful about where we put things and how we compartmentalize things. Sometime back in the past, we consolidated until it hurt, and then we found out we couldn't take certain parts of our infrastructure down for maintenance and patching and those kinds of things.

So cloud fits into our strategy in a way of saying: how do we best leverage this in a way that is going to make sense? I think you and I were talking about some of this yesterday, that some of the challenges of wanting to lift and shift a legacy application and just stick it out in the cloud don't make fiscal sense. You really have to collapse that whole thing and make a truly cloud-native application. So there are things that we're still working through. I wouldn't say we've got all the answers on it, but we are dabbling in it.

Kris Newton
VP of Investor Relations, NetApp

Great. Mehdi?

Mehdi Hosseini
Senior Equity Research Analyst, Susquehanna International Group

Thank you. Just one quick follow-up clarification. You said the consumption model is helping you with OpEx versus CapEx, and also helps you with TCO. Can you clarify for me, are you comparing apples to apples, like buying NetApp? In the past, it was transactional; now it's consumption. Or are you comparing the NetApp consumption model to just renting storage in a cloud?

Anthony Lloyd
VP of Technology Services, OpenText

No, I'm talking about comparing apples to apples. So let's say I'm buying another on-prem flash storage solution from NetApp.

Mehdi Hosseini
Senior Equity Research Analyst, Susquehanna International Group

Right.

Anthony Lloyd
VP of Technology Services, OpenText

I'm going to get that same type of solution in a consumption model-

Mehdi Hosseini
Senior Equity Research Analyst, Susquehanna International Group

Right

Anthony Lloyd
VP of Technology Services, OpenText

... from Keystone.

Mehdi Hosseini
Senior Equity Research Analyst, Susquehanna International Group

Sure.

Anthony Lloyd
VP of Technology Services, OpenText

You know, what we had been looking at previously, running the numbers, is what's our cost per gigabyte?

Mehdi Hosseini
Senior Equity Research Analyst, Susquehanna International Group

Right.

Anthony Lloyd
VP of Technology Services, OpenText

You know, that cost at one point in time was not competitive. That cost now has become very competitive.

Mehdi Hosseini
Senior Equity Research Analyst, Susquehanna International Group

Sure.

Anthony Lloyd
VP of Technology Services, OpenText

It has allowed us to have the flexibility to make this decision.

Mehdi Hosseini
Senior Equity Research Analyst, Susquehanna International Group

Okay. And how about the depreciation schedule? Are you now able to, like, assume a ten-year life? Especially since it's OpEx, then you don't even have to-

Anthony Lloyd
VP of Technology Services, OpenText

We don't have to deal with that. But there's other benefits that-

Mehdi Hosseini
Senior Equity Research Analyst, Susquehanna International Group

Right

Anthony Lloyd
VP of Technology Services, OpenText

... that we also gain. You know, we get carbon credits.

Mehdi Hosseini
Senior Equity Research Analyst, Susquehanna International Group

Right.

Anthony Lloyd
VP of Technology Services, OpenText

We don't have to worry about disposing of the asset at the end of its usable life, which we end up paying to do. That goes away. So we reduce our carbon footprint, which goes towards getting to a zero-emissions model by 2035.

Mehdi Hosseini
Senior Equity Research Analyst, Susquehanna International Group

Right.

Anthony Lloyd
VP of Technology Services, OpenText

All of these things come into play as part of these decisions that give you benefits by moving away from this CapEx investment into an OpEx model.

Mehdi Hosseini
Senior Equity Research Analyst, Susquehanna International Group

Right. And just two quick follow-ups. This consumption model, I'm sure it includes services and any kind of upgrade, right?

Anthony Lloyd
VP of Technology Services, OpenText

Yes.

Mehdi Hosseini
Senior Equity Research Analyst, Susquehanna International Group

That makes flash more durable compared to the past. Like, 10 years ago, we didn't know if the flash would still be good after four or five years. Now, there is a vendor that guarantees it as part of the consumption model and services, right?

Anthony Lloyd
VP of Technology Services, OpenText

That's correct, and it takes the workload off of your staff from an operational perspective. You don't have to worry about upgrades, you don't have to worry about patching, you don't have to do any of those things. If there's any type of an issue, our partner takes care of it.

Mehdi Hosseini
Senior Equity Research Analyst, Susquehanna International Group

And then one last item: in terms of backup and cold storage, how does the consumption model impact the cost structure associated with cold storage or backup?

Anthony Lloyd
VP of Technology Services, OpenText

It really doesn't, unless you go to a different solution. For us, at the end of the day, our backup costs are our backup costs. Whether we're backing up storage that's on-prem or storage that's sitting somewhere else, it's still that cost. That cost is a set cost. So at the end of the day, it doesn't really impact us from a backup perspective. Now, if we need to do something different, obviously, then there's a cost impact. But we are able to utilize our standard products and not have to have a deviation, which is another one of the advantages of using a consistent solution from a single vendor, because you have that consistency across the tools.

Mehdi Hosseini
Senior Equity Research Analyst, Susquehanna International Group

Gotcha. Thank you.

Anthony Lloyd
VP of Technology Services, OpenText

You're welcome.

Kris Newton
VP of Investor Relations, NetApp

Okay, thank you. Questions? All right, down in the front.

David Vogt
Managing Director and Senior Equity Analyst, UBS

Thank you. So Anthony, I know you mentioned earlier that your business model, or OpenText's business model, is one of frequent acquisition. What have you learned in terms of how much of your infrastructure over time becomes more consumption-based versus what is stranded capital, in terms of, "Hey, we can't shut this down, it doesn't make any sense," either from a flexibility or a cost perspective? Is there a good general rule of thumb, in terms of, for every $1 of IT spend, we try to move 75 cents of it to some sort of consumption solution versus stranded capital that's more fixed on-prem, or however you want to define it?

Anthony Lloyd
VP of Technology Services, OpenText

So I can't say that we've gotten to that level of granularity yet, but I can say this: as the technology has continued to get better, our goal has been to continue to move towards this consumption-based model because of the other things I was just talking about. It reduces the workload on my operational staff. It reduces the need for us to be involved in maintenance, and patching, and upgrades, which take a great deal of time, and in addition to the time factor, there's a cost factor associated with it, because I don't have resources doing other work when they're focused on these activities. Over time, as the technology continues to improve and the price per gigabyte gets lower, it will continue to reinforce that value proposition, as well as the carbon footprint benefits that we gain.

David Vogt
Managing Director and Senior Equity Analyst, UBS

Gotcha.

Kris Newton
VP of Investor Relations, NetApp

Scanning for questions in the back.

Jake Wilhelm
VP and Chartered Financial Analyst, Wells Fargo

Hi, Jake Wilhelm with Wells Fargo. Could you talk a little bit about how you see the move towards continued disaggregation in the data center with technologies like CXL, affecting your storage and memory architectures over the next several years?

Anthony Lloyd
VP of Technology Services, OpenText

I think a lot of that really depends upon the use cases. My... How do I say this? In the world that I operate in, we have a pretty well predefined set of parameters around the performance characteristics of our applications, our databases, because we're primarily talking about HR, financial applications, those really back-end systems that really run the business. So those things are a little bit different than Phil's world, where, you know, he's the mad scientist that's doing all of these out-of-the-box things that are one-offs, and they're trying to figure it out as they go along.

My world is a lot more predefined. So I would say the impact to me is a lot different than it may be for other companies that have different use cases. I don't really have a lot of exotic applications and things like that, that we're supporting, so it, it's a little bit different for me.

Philip Adams
CTO, Lawrence Livermore National Laboratory

Well, I'll say that we've tried really hard to make a lot of our environment predefined, which is why we've leveraged FlexPod, and we've got stovepipes in certain environments. So where things may seem disaggregated, what we have is pools of aggregation instead of one big aggregated pool. And as I mentioned before, that gave us some benefits: being able to take down different environments, being able to patch, being able to manage the unique workloads so that we're a little bit more fiscally responsible when it comes down to expanding our infrastructure. We know: okay, this environment here needs more compute; this environment over here needs more RAM.

Especially with our databases and some of the analytics, they're asking for a lot more RAM as we're loading things more into memory and doing a lot more in-memory compute. Like I said, we're trying very hard not to be this mad-science unique little beast out here. Again, we're using the same Lego blocks that everybody else is. It may be a unique application of how we've assembled them, but, you know.

Anthony Lloyd
VP of Technology Services, OpenText

We all have the same problems.

Philip Adams
CTO, Lawrence Livermore National Laboratory

Yeah.

Kris Newton
VP of Investor Relations, NetApp

Well, actually, that was gonna... Oh, well, Eddie's got a question. Yeah.

Mehdi Hosseini
Senior Equity Research Analyst, Susquehanna International Group

Sorry, this is for Phil, actually, non-storage. Phil, as you look into the future, how do you see the GPU and ASIC paths crossing? Do you see a day where there will be less of a GPU and more of an ASIC solution for AI applications?

Philip Adams
CTO, Lawrence Livermore National Laboratory

I hope there's both. Yeah, I'm looking forward to a time when there's more code on chips. I live in a world where there's a lot of industrial Internet of Things, so I'd love for the data, as it comes out of those things, to be bagged, and tagged, and labeled at the central point, and, as it's flowing through my environment, to be able to get a complete manifest of where it's been and what's accessed it. Because this is where, as I mentioned, from an AI perspective, you need to know the provenance of where the data's flowing. You need to know if somebody poisoned that well of information, if you're gonna hope that this AI brain is going to make a good decision for you.

Mehdi Hosseini
Senior Equity Research Analyst, Susquehanna International Group

Is that a wish list, or is that actually a realistic target on the horizon?

Philip Adams
CTO, Lawrence Livermore National Laboratory

Well, I do see some things that are happening now, and even some releases, I think, that you guys are making-

Kris Newton
VP of Investor Relations, NetApp

That might come later this week.

Philip Adams
CTO, Lawrence Livermore National Laboratory

Yeah. You might see some things that may be able to do localized or on-prem processing.

Mehdi Hosseini
Senior Equity Research Analyst, Susquehanna International Group

Okay. Thank you.

Kris Newton
VP of Investor Relations, NetApp

All right, Irvin.

Irvin Liu
VP, Evercore ISI

... Thank you. Irvin Liu with Evercore ISI. So, Anthony, you talked about several of the advantages of spinning up storage in Keystone, such as OpEx versus CapEx and the flexible consumption model. But is there a connectivity or low-latency advantage from being in a colo data center, or from peering, that's worth mentioning as well?

Anthony Lloyd
VP of Technology Services, OpenText

So for us, we basically provide all of our storage where our compute is. If we're using compute in the cloud, let's say at GCP, our storage will be at GCP. If we're using it on-prem in one of our colos or in one of our data centers, it's going to be there. When you start trying to connect things from on-prem to the cloud is when you get into major trouble with latency and application performance and cost, because every time you take data out, dollars go up.

Right? So we don't do that. It's really important that you ensure you don't start going down those paths, where, first of all, your applications are gonna be severely impacted, but then every time things get chatty, it's gonna cost you a lot of money.
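The "dollars go up" dynamic Anthony describes is easy to make concrete with a back-of-the-envelope calculation. The sketch below is illustrative only; the per-gigabyte rate is a hypothetical placeholder, not a quote from any actual cloud provider.

```python
def monthly_egress_cost(gb_per_month: float, rate_per_gb: float = 0.09) -> float:
    """Rough estimate of monthly cloud data-egress charges.

    rate_per_gb is a hypothetical placeholder; real rates vary by
    provider, region, volume tier, and negotiated discounts.
    """
    return gb_per_month * rate_per_gb

# A chatty application pulling 50 TB (~50,000 GB) back on-prem each month:
cost = monthly_egress_cost(50_000)
print(f"${cost:,.0f}/month")
```

Even at a modest assumed rate, the charge recurs every month the traffic pattern persists, which is why both panelists stress keeping compute co-located with the data.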

Kris Newton
VP of Investor Relations, NetApp

All right. Questions? All right. Well, we're getting close to the end of the show, so, I'll ask kinda one question to wrap it up. It'll probably take the full time. You guys are in really different industries. Sounds like you're both aiming for a more off-the-shelf approach to your technologies to make it a little bit easier for you. I was gonna ask what the challenges you face are, but let's make it a bit of a positive, and what are the opportunities that you see ahead of you for your IT environments?

Anthony Lloyd
VP of Technology Services, OpenText

There are so many. The ability to deploy faster, to have greater reliability, to have self-healing, to minimize the need for more support personnel and resources, which reduces our overall cost. They're exponential. The whole conversation about shift left, shift left, shift left, that's what everybody wants to do. The better the technology is, the more it reduces the need for us to have level-one, level-two people. The technology will take care of that. So that's nirvana, right? You can spin it up as you need to, you can deploy it where you need to. The time is minimal. All you have to worry about is getting the connectivity to where it needs to be, and away you go.

Kris Newton
VP of Investor Relations, NetApp

What are you excited about as you look forward into the... Your technology crystal ball?

Philip Adams
CTO, Lawrence Livermore National Laboratory

Well, it starts off, like I said, with the data and being able to access more of that data. Our scientists are just ruthless when it comes down to accessing data and analyzing the data, and that's also one of the reasons why I was smiling as you were saying the cost to pull things back from the cloud. You don't want to do that, right?

Anthony Lloyd
VP of Technology Services, OpenText

No, you don't.

Philip Adams
CTO, Lawrence Livermore National Laboratory

But just being able to enable those discoveries, I think, is a huge part of saying we've done the right thing. Looking forward, as I mentioned, I'm enthused about AI, or I'd say cautiously enthused about AI. There's a lot more good that's gonna come out of it than bad. I'm looking forward to its prospects as it relates to cybersecurity. Almost every day you hear about some place getting hacked into, and I'd like to be able to take this on the offensive rather than the defensive. You know, there are 1,000 rocks coming at you, and even if you batted off 999 of them, that one comes through.

But if AI can be that other layer, as I'm going through my layers of security, something else there saying, "Okay, we're seeing something else," that would be outstanding.

Kris Newton
VP of Investor Relations, NetApp

Are there any technologies on the horizon that you think are particularly exciting? Or just stuff getting cheaper and faster?

Philip Adams
CTO, Lawrence Livermore National Laboratory

I'll start off. Cheaper and faster will help, because the volumes of data are increasing. There are more fiducials that you want to grab in datasets, more aspects that you want to look at, in order to be faster, better, and more efficient going into the future. So those things are naturally gonna increase data sizes. We talk a lot about how we can be more intelligent about what we store and not store everything. As you do that, it hopefully will help the Moore's Law curve, in terms of the demand for more space versus trying to be responsible when it comes to keeping everything green and sustainable. So those things come to mind.

Kris Newton
VP of Investor Relations, NetApp

Anything on your horizon that you're thinking about?

Anthony Lloyd
VP of Technology Services, OpenText

I won't say specific technologies, but I will say, to echo Phil's points: cheaper, faster, and more secure. Those things are really important. The amount of data that we have to maintain and support is growing exponentially, and that doesn't go away. So, to your point, how you manage it more efficiently, how you reduce the amount that you have to keep, keeping only what you need, but also ensuring that you can access it, based on the use case, in the manner that you need to from a performance perspective, those things are all critical. So those are the things that, for me, are really going to be important as we continue to go down this road.

Kris Newton
VP of Investor Relations, NetApp

All right. All right, final call for questions. One in the back. No, it's great. It's better that it closes out with a question from you than from me.

Meta Marshall
Executive Director and Senior Equity Analyst, Morgan Stanley

Yeah, no. So both of you guys kind of laughed about egress fees. So I guess I was just kind of wondering, from your perspective, have you brought stuff back and paid those fees, or do those end up being kind of stranded islands of stuff in the cloud?

Anthony Lloyd
VP of Technology Services, OpenText

So the answer is yes. Which really brings to the forefront the criticality of doing the homework up front. Everybody seems to think that everything in the cloud is free, which it is not. And you have to do the analysis to understand a few things. First of all, is the application a good candidate to go to the cloud? You have to do the homework and understand what the cost is to remediate the application, because you don't want to do lift and shifts. What you're gonna do is rack up a bill that you will not be able to afford, and then you're either gonna have to fix it when it's in the cloud, or you're gonna have to bring it back. And you don't wanna do that.

But you've got to do the homework up front: understand the cost to remediate the application; understand the cost to your application teams, because you're gonna pull those folks off other work to remediate these applications, or you're gonna pay a third party to come in and consult and do that work; then, the cost associated with the application running in the cloud, once it's optimized. You've got all three of those numbers, and you've got to compare them to what your on-prem costs are. Then you can make a fair analysis and assumption of whether this is a worthy application to move to the cloud or not. Because if you don't do that, you're asking for trouble.
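The three-cost comparison Anthony walks through (remediation, application-team time, and steady-state cloud run cost, weighed against staying on-prem) can be sketched as a simple decision helper. All figures and field names below are illustrative assumptions for the sake of the example, not OpenText's actual model or numbers.

```python
from dataclasses import dataclass

@dataclass
class MigrationCase:
    remediation: float       # one-time cost to refactor the app for cloud
    team_time: float         # one-time opportunity cost of pulling app teams off other work
    cloud_run_annual: float  # steady-state annual run cost once optimized
    onprem_annual: float     # current annual on-prem cost

    def worth_moving(self, horizon_years: int = 3) -> bool:
        """Compare total cloud cost vs. on-prem cost over a planning horizon."""
        cloud_total = self.remediation + self.team_time + self.cloud_run_annual * horizon_years
        onprem_total = self.onprem_annual * horizon_years
        return cloud_total < onprem_total

# Hypothetical candidate application:
case = MigrationCase(remediation=120_000, team_time=40_000,
                     cloud_run_annual=60_000, onprem_annual=140_000)
print(case.worth_moving())  # the one-time costs amortize over the horizon
```

The design choice here mirrors the point of the answer: the one-time remediation and team costs dominate over a short horizon, so the same application can fail the test at one year and pass it at three.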

Philip Adams
CTO, Lawrence Livermore National Laboratory

That resonates with me quite a bit. The only thing I've got to add to that, and it's kind of obvious, right, is that in most cases, where your data sets are is where you want to do the compute. The problem comes in where you find out that you've got some data on-prem, or some data that's out in the cloud, or some data split across multiple clouds. Then what are you gonna do when you need to do that deeper analysis, to pull everything in and actually manipulate that data? At that point in time, you have no choice but to pay those egress fees as you're pulling from all of those and slamming it into your machine learning algorithm.

Kris Newton
VP of Investor Relations, NetApp

All right. Well, Phil, Anthony, thank you guys so much for spending some time with us. I really appreciate it. I appreciate you being NetApp customers. Thanks for talking to everyone.

Anthony Lloyd
VP of Technology Services, OpenText

Well, thank you.

Philip Adams
CTO, Lawrence Livermore National Laboratory

Thank you.

Anthony Lloyd
VP of Technology Services, OpenText

Thank you for having us.

Kris Newton
VP of Investor Relations, NetApp

Thanks.

Philip Adams
CTO, Lawrence Livermore National Laboratory

You're welcome.

Kris Newton
VP of Investor Relations, NetApp

Thank you so much.

Philip Adams
CTO, Lawrence Livermore National Laboratory

All right.

Kris Newton
VP of Investor Relations, NetApp

It was great. I really appreciate it. All right, you are free to enjoy Insight. I know you're in high demand, so thank you. All right, well, that concludes our program today. At 3:00 P.M., the keynotes kick off, so hopefully we'll see you all there. Then, for those who are here physically, there's a cocktail reception at 5:00 P.M., where you can mix and mingle with all the speakers you saw here on stage today and more, as well as the entire Insight attendee community. So thank you very much. Have a great day. Oh, and there are boxed lunches outside.

Moderator

Give it up for them. Come on! Yes, they are killing it, as always. I love Paul and the Factor and that whole crew. They really get it together. Look, we're really looking forward to watching these guys rock out over the next few days, and there's a lot more to look forward to here at the MGM Grand, because NetApp hasn't just built... Oop, let me just do this right here. NetApp hasn't just built a conference here. You know what they built, people? They built an entire data festival. Yes, that's right, a data festival. Come on through. That's fine. You can do all of that all you want. It's no big deal. This is a super cool space where you're going to be able to connect with everyone in your NetApp community.

If you head over to the Insight All Access Expo area, you'll be able to test-drive the latest tech and pick up new skills and knowledge that will hone your IT factor like never before. We've also got great workshops, hands-on labs, certifications, special areas for our analyst, investor, and public relations friends, great evening events, and so much more. I hope you all get the chance to experience all of it over the course of this week.

But let's remember, the NetApp Insight community extends far beyond the MGM Grand. So please join me in saying hello to your colleagues watching the Insight live stream from home. Welcome to the party, friends. Everybody say, "Hey!" What's up, live stream? Yes. So glad you could join us, even if you're in your pajamas. Thank you for adding your unique IT factor to the mix from all over the world.

Meanwhile, here at the headliner stage, we are just a few minutes away from our Vision 360 session, the first of our keynote presentations. Who better to set the tone than NetApp's CEO, George Kurian, and NetApp's president, Cesar Cernuda? That's right, they're gonna be coming up soon with some very special guests. One more thing you can look forward to is actually right here in the ballroom, having a friendly beanbag toss with your colleagues. If you head on over to the game zone before the show for some cool festival activities, they got a giant Jenga over there. I'm challenging you to Jenga, by the way, and of course, they got cornhole. Better start thinking about who you want to team up with and win some of those competitions over there.

Speaking of teams, it's time to hear from some more members of the unstoppable NetApp A-Team. Please give a hand to John Woodall from GDT and Trey Davis from AHEAD. Oh, look at that. Oh, y'all getting... Oh, yo, you guys got a fan club.

John Woodall
VP and Field CTO, GDT

We got a fan club.

Moderator

Okay, John, where are you from?

John Woodall
VP and Field CTO, GDT

Personally, San Jose.

Moderator

San Jose, man. Are you enjoying Insight so far?

John Woodall
VP and Field CTO, GDT

I love Insight. This is the... So I've been doing it since 2004.

Moderator

Wow!

John Woodall
VP and Field CTO, GDT

Yeah.

Moderator

Give it up. Do the math on that.

John Woodall
VP and Field CTO, GDT

...No, I don't wanna do that.

Moderator

You're a veteran, man.

John Woodall
VP and Field CTO, GDT

Yeah.

Moderator

You're a veteran. You're a veteran. Trey, what about you? Where are you from?

Trey Davis
Senior Technical Consultant, AHEAD

From Atlanta.

Moderator

From Atlanta?

Trey Davis
Senior Technical Consultant, AHEAD

Yeah.

Moderator

Is this your first Net-

Trey Davis
Senior Technical Consultant, AHEAD

No, this is probably my tenth.

Moderator

Tenth?

Trey Davis
Senior Technical Consultant, AHEAD

So, yeah.

Moderator

Wow, so you're a veteran, too.

Trey Davis
Senior Technical Consultant, AHEAD

Yeah.

Moderator

All right, so what do you think is gonna take your organization's IT factor to the next level this year? We're talking a lot about the IT factor. What, what are you gonna do in your organization to get there?

Trey Davis
Senior Technical Consultant, AHEAD

My organization, we're gonna deliver these NetApp services and products that we're talking about this week, and deliver them with excellence. So-

Moderator

Any particular one you're super excited about?

Trey Davis
Senior Technical Consultant, AHEAD

A lot of cloud-related services. AI is a big one, so yeah.

Moderator

Awesome. John, what about you?

John Woodall
VP and Field CTO, GDT

Okay, so here's what I'm looking forward to hearing from this show. Some people say that unified storage is just a certain thing. What I'm looking forward to is hearing what NetApp's version of unified storage means.

Moderator

Ooh, I like this.

John Woodall
VP and Field CTO, GDT

Cloud, on-prem, all protocols, common control plane, and then APIs.

Moderator

I like this.

John Woodall
VP and Field CTO, GDT

Yeah.

Moderator

You're like putting them on the spot.

John Woodall
VP and Field CTO, GDT

That's right.

Moderator

You're like, "I wanna know what's going on."

John Woodall
VP and Field CTO, GDT

Yeah.

Moderator

Give me the good stuff.

John Woodall
VP and Field CTO, GDT

That's right.

Moderator

I wanna find out.

John Woodall
VP and Field CTO, GDT

Yeah.

Moderator

All right. That's cool. All right, so do you wanna shout out any colleagues over at your, at your groups? Anybody you wanna give a shout out to or to your squad?

John Woodall
VP and Field CTO, GDT

I gotta call out my boss, Dan Moseley, from GDT. He's in here somewhere, I hope.

Moderator

GDT, make some noise if you're in the building. Yes.

John Woodall
VP and Field CTO, GDT

Yeah, somebody.

Moderator

All right. And Trey, what are you looking forward to the most here at Insight?

Trey Davis
Senior Technical Consultant, AHEAD

This... I've already actually experienced it. It's just the energy of being here with the A-Team and friends and colleagues you haven't seen in a while. It's invaluable.

Moderator

Well, you all have a lot in store for this show. They got a lot packed for you. I've been seeing a lot of stuff behind the scenes. I can't wait for you all to see what's gonna go down. So-

John Woodall
VP and Field CTO, GDT

Same here.

Moderator

Give a round of applause to Trey. Give a round of applause to John. Thank you both for being here. The A-team is in the building. Yes, they are. So with that being said, one more thing that I wanna make sure that we go ahead and hit here is the fact that... You know what? First and foremost, can I give—John, I'm gonna give you the beanbag, buddy.

John Woodall
VP and Field CTO, GDT

Thank you, sir.

Moderator

All right, hold on to that for me. Thank you. I appreciate it. You guys go over there and win some stuff. All right. Speaking of which, we just got a few minutes before we kick off this event. So make sure you're coming in, grab your seats. Everybody, grab your seats and get seated, 'cause we are about, just about to be there, a few minutes away from kickoff. And what do you say we get the blood pumping with another jam from my favorite crew, The Factor? Let's rock it, baby! ... Our Vision 360 session with George and Cesar on deck. You're not going to want to miss this, and that's gonna close things out for this afternoon right here at the headliner stage. But that's definitely not the end of what we all have going on in our evening together. Because at 5:00 P.M. tonight.

We're gonna get it started at the official NetApp Insight Welcome Reception. You are not going to want to miss this. We'll have more details on that later in the show, but rest assured, we're going to turn up like the NetApp community only knows how to, so you definitely won't want to miss that. All right, Vegas, we're just seconds away from getting the start of this show. Everybody, you're watching on the live stream, welcome. Good to see you. We're about to crank this thing up, so take your seats, get comfy, put those mobile devices on Do Not Disturb, please. Thank you, thank you, thank you. Appreciate that. Now, without further ado, it's time to kick the festival off in style, in Las Vegas, live! It's NetApp Insight 2023, and it starts right now.

The it factor. It can be hard to define, but you know it when you see it. It's that secret sauce, the je ne sais quoi. It's what makes thinkers, leaders, and builders. It's what defines a community. And guess what? All of you in this room have got it, and we're here to help you rock it. To get smart about data infrastructure and turn disruption into opportunity, because that's why we're all here at INSIGHT, the data festival: to help you harness, unify, and protect all your data. In short, we want you to crush it. And we're with you every step of the way, making sure your it factor goes to eleven. Welcome to INSIGHT 2023. It is on! Please welcome your host, Mario Armstrong.

Whoa! Go. Let's go! Let's go! Let's go. Las Vegas, welcome to NetApp Insight 2023. Are you pumped to be here? Make some noise. If you're ready to show off that you got the it factor, make some noise. Now, everybody, from the left, in the middle, and to the right, on the count of three, I really want to hear you rock this building in Las Vegas right now. You haven't been together in four years. This is your moment. So on the count of three, I really want to hear you all say, "NetApp Insight." One, two, three.

Yes, that's what I'm talking about. That's how you kick off a data festival, people. Let me ask you a question. When's the last time you ever been to a data festival? There hasn't been one until now, so you haven't been. Can we give a round of applause to The Factor, the house band, please? They've been killing it.

Moderator

Incredible. Incredible. And again, let's give a shout-out to our live stream audience who's watching us from all over the world. Welcome. Good to see you all. Thank you for being here. And I'm super excited to welcome you to the MGM Grand, which will be our home this week. Now, my name is Mario Armstrong. I'm an NBC Today Show technology correspondent, host of a podcast, and a bunch of other great things, and I'm honored to be your host for INSIGHT 2023. It is a pleasure to be here with you all. So I'm curious, as I've been preparing for this and getting everything together, I'm like, "I wanna know how many people are first-timers." How many of you, show of hands, are a first-timer like me? Wow, that's awesome! Okay, good. I don't feel alone. That's great. That's great. That's great.

Okay, now, how many of you have been to INSIGHT before? Wow, some... Yeah, you guys, this is great. There's almost a mixed crowd here. Almost 50/50, and maybe a little 60/40 there. Well, I heard it's the first time any of you have been back in Vegas for INSIGHT since 2019, so don't call it a comeback, but you're here, people, again after four years. Give yourselves a round of applause for getting here, flying here, and making it happen. It's a big deal that you're here. Proximity is really important. Networking is hugely important, and whether you're an INSIGHT veteran or a first-timer like me, I'm super grateful that we have an opportunity to do this together in person. Now, look, we have an incredible 3 days in store, and there's no stronger way to start than what you're about to see.

Up next is our Vision 360 session, starting with the man himself, NetApp CEO, George Kurian. And George is gonna set you up with an Insight primer covering the state of the industry, expert AI insights from some very special guests, and how to build your intelligent data infrastructure with NetApp. After that, NetApp President Cesar Cernuda will take the stage and catch up with some leading customers and share their inspiring success stories. You're here to up your IT factor, people, this year, right now, so your journey is about to kick off. So without further ado, please welcome to the headliner stage, NetApp CEO, George Kurian.

George Kurian
CEO, NetApp

Thank you, Mario, and welcome to INSIGHT 2023. It's super nice to have you all back in person after a few years. For those who are here for the first time, a warm welcome, and for those who have been here before, welcome back. INSIGHT offers a once-a-year opportunity to bring together you, our customers, our partners, practitioners and thought leaders, NetApp technology teams, and wonderful guest speakers, to learn from each other how we can turn every disruption into opportunity for all. So let's begin today, right here, right now. As you well know, we operate in a world of growing risks, where the rate of change and the impact of disruption are accelerating. These risks include geopolitical risks, macroeconomic risks, business and technology risks, and changing customer demands. If you are in IT, as many of you are, you have your own set of challenges as well.

These could include application and infrastructure modernization, getting rid of growing technical debt, and dealing with an ever broader range of cyber threats, especially large state actors going after your digital assets and your data. You're dealing with unprecedented talent shortages, where you are stressed to keep pace with the rate of technological advancement, and you need to simplify, standardize, and automate your environments. A few years ago, many of you thought that cloud was the panacea, the answer that could solve everything. What many of our customers have realized is that migration to the cloud takes time and is complex, and you've gathered a lot of learnings over the last few years. You've realized that cloud smart is probably a better answer, and that you will therefore operate in a hybrid multi-cloud architecture for a long time.

Data and data management is key to success in this era, and we will show how data-driven businesses are AI-ready and accelerating their leadership. Research that the Boston Consulting Group and Google conducted demonstrates the performance, leadership, and competitive advantage of data-driven businesses. They can unify, manage, protect, and harness their data for business impact. They can understand a disruption and respond much more quickly than data laggards. Let's take a benchmark. Let's look at an important metric, like the percentage of companies that are able to grow revenues at greater than 10% a year. If you look at it, 30% of data leaders are expected to increase revenue by more than 10% by the end of 2024, compared to 13% of laggards. That's a huge gap.

Now, if you compare what's happened over the last few years, this gap is even wider than in 2022, when 16% of leaders and 11% of laggards expected the same increase. It shows that the benefits of being data-driven are large and expanding. And that is true not just for revenue growth, but for every important business metric: cost management and productivity, customer satisfaction and retention, return on capital deployed, and so on. Data leaders are advantaged substantially over data laggards, and AI is going to widen that gap even further. And because AI is the archetypal hybrid cloud workload, where you want to experiment quickly in the cloud and scale your production environments, potentially in your data centers or on the cloud, AI will also magnify the gap between the organizations that are hybrid cloud-ready and those that are not.

So let's talk about what it takes to be data-driven and AI-ready. You know, it's hard to be AI-ready and data-driven. The challenge is that data investments must deliver near-term value and, at the same time, lay the groundwork for rapidly developing future uses. While data technologies, the scale and volume of data, the diversity of data types, are evolving at rapid pace. As we have seen, and I'm sure you, our customers, have seen, neither a monolithic top-down approach nor a grassroots bottoms-up approach have worked. There have been clients that decided, "Hey, we're going to put all of our data in a gigantic data repository." You know, you remember the big data use cases of Hadoop. Those have gloriously failed as data types have evolved and analytic environments have progressed well beyond Hadoop. You think about the bottoms-up experiments in many of our customers.

They create a plethora of pilots, none of which have gone to production. So what are the requirements for being data-driven and AI-ready? It starts with having a cohesive data strategy and organization. Knowing what data you will need to drive business impact, and getting all the roles within your team, your data team, analysts, engineers, architects, scientists, to work together to design and deploy production use cases. Now, let's talk about operating model and technology strategy next. In addition to having a cohesive data strategy and organization, operationally, you will need to treat data as a product. And what do I mean by that?

It means that data for each domain, such as customer or product or supplier or vendor or employee, is consolidated and treated as a single source that serves multiple applications and use cases. Today, when I talk to our customers, you focus on operating business processes or their underlying systems. Everybody focuses on the CRM system or the BI system or their supply chain system, when in fact, what you need to be ready to be data-driven and AI ready, is to put together a cohesive data set for a customer or an employee that's abstracted from the underlying system and forms the foundation of your data model that serves a broad range of applications that can transform your business in line with rapidly changing customer needs. So then let's talk about what happens to the next part of your architecture.
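As a toy illustration of this data-as-a-product idea (my own sketch; the function and field names are invented, not a NetApp or customer API), the point is that a customer-domain dataset is consolidated once from the underlying systems and then serves many use cases, rather than each application reading the CRM, billing, or support system directly:

```python
# Toy sketch of "data as a product" (illustrative; all names invented).
# Each source system owns a slice of the customer domain; the data
# product consolidates them into one dataset many use cases consume.

def build_customer_product(*sources):
    """Consolidate per-system records into a single customer-domain
    dataset keyed by customer id, abstracted from any one system."""
    product = {}
    for source in sources:
        for customer_id, fields in source.items():
            product.setdefault(customer_id, {}).update(fields)
    return product

# Slices owned by individual systems (hypothetical sample data):
crm     = {"c1": {"name": "Acme Corp", "segment": "enterprise"}}
billing = {"c1": {"annual_spend": 120_000}}
support = {"c1": {"open_tickets": 2}}

customers = build_customer_product(crm, billing, support)

# Downstream use cases read the product, not the source systems:
def churn_signal(product, customer_id):
    c = product[customer_id]
    return c["open_tickets"] > 1 and c["annual_spend"] < 500_000
```

The design choice is exactly the one described above: applications such as `churn_signal` depend only on the consolidated domain dataset, so the underlying CRM or billing system can be replaced without touching them.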

A modern data architecture is the third part of what you need to be data-driven and AI ready. A modern data architecture enables you to rapidly respond to business disruption by balancing the need for flexibility, rapid change, and transformation in some parts of your data architecture, with consolidation, standardization, evolution, and integration in other parts. We're not one to tell you, transform every layer of your stack. That's silly, because it maximizes the risk for incremental improvements and flexibility. What we say is transform the right parts of your data architecture and evolve and integrate, and consolidate, and simplify other parts of your architecture so that you have the right balance of risk, agility, and efficiency. Let's discuss this more.

You know, where your business process meets your technology architecture, you'll have a broad range of applications and use cases for data that need to rapidly evolve to meet the changing customer demands and the needs for your business. This involves not only using data from within your transactional systems in a unified manner, but potentially external data sources to create an immersive experience. For example, a customer portal that integrates multiple channels to create a truly immersive customer experience for your customer. You need flexibility at this layer of the architecture, but this layer of the architecture needs to run on a stable data model, where data for each domain is consolidated and treated as a single source that serves multiple use cases above it. Operationally, as we talked about, data for each domain must be treated as a product, independent of the underlying transactional systems.

Let's now talk about the transactional systems. You know, these data products sit on top of a flexible transactional data layer, where there is ongoing replatforming and modernization of applications and databases, data lakes, data fabrics, and data meshes, to deal with the rapidly evolving nature of data and data types, the enormous, rapidly growing unstructured data landscapes, and the scalability and feature requirements of modern AI, analytics, and open source technologies. You need to preserve flexibility in this part, where the rate of change in technology is very high, and these need to be built on top of an intelligent, integrated, silo-free data infrastructure that can evolve to meet the needs of all your applications and workloads and all of your data types, in all the places you may need your data in the hybrid multi-cloud era.

A modern data architecture is the technological foundation stone of a data-driven enterprise that's AI-ready. To summarize, being data-driven requires a cohesive data strategy and organization, an operational model that treats data as a product, and a modern data architecture built on an intelligent data infrastructure. So let's talk about an intelligent data infrastructure, because we know a thing or two about that, and that's what we've been working on. We are confident that NetApp can help every business make their data infrastructure intelligent, so that you can turn data into a force to transform disruption into opportunity.

Several years ago, we told you that the world would be hybrid and multi-cloud, and we have delivered the Data Fabric to enable your business to manage your data across any environment with performance and simplicity, whether your data is in your data center, in a managed service environment, on the public cloud, or a combination of all three. Over the last few years, as we have worked with you, our customers, and our technology partners, we've realized that customer needs for data management have expanded: more requirements for privacy, protection, and governance; the need to support new, more dynamic workloads like AI and cloud-native and open source applications; the vast and rapid growth of unstructured data types, which will make up 80% of the data in most enterprises by 2028; and the need for observability, optimization, and automation of an increasingly distributed, dynamic infrastructure.

An intelligent data infrastructure builds on and expands upon the Data Fabric. It starts with silo-free infrastructure, then harnesses observability and AI-powered data services to enable the best data management actions. All of our capabilities are hybrid multi-cloud by design. What a step forward for NetApp from the time we laid out our vision of a hybrid cloud data management architecture. An intelligent data infrastructure combines unified data storage across any environment, with the world's best storage OS for all data, native cloud integration, and unified control, augmented by integrated data services for cyber resilience and policy-based data governance, and supported by intelligent and efficient Cloud Ops solutions for observability, optimization, and automation. It uniquely enables you to operate with seamless flexibility, get the most value out of your stored data, and tap into the power of AI to help you maximize productivity across your infrastructure and your teams.

We're excited to show you just how updates to our portfolio will make your data infrastructure intelligent. AI is the major disruptor of the modern IT era. Eventually, it will raise annual global gross domestic product by 7%. That is an astronomically large number, far more than what the internet generated. The rewards, however, can be unevenly distributed, because the leaders who are data-driven and AI-ready can dramatically outperform the laggards. We're here, plain and simple, to make you one of the winners.

We have years of experience in the AI market, working with industry leaders like NVIDIA, to enable predictive AI and ML solutions for many hundreds of customers, some of whom you will hear from today. Trust us, AI runs on data, data runs on NetApp. Our integrated data services help you derive the most value from your data and make scaling AI across your enterprise much easier. Tomorrow, you'll hear more about how we have the industry's best data pipeline for AI. But before we do that today,

Jeff Baxter
VP of Product Marketing, NetApp

Hey, everyone, I'm Jeff Baxter, and I'd like to welcome you to a special segment just for our digital audience. You'll be treated to the world premiere of a new What's the Future video starring Matt Watts, all about AI. But I want to dig into AI a little bit more with one of our own resident experts, our SVP of Engineering, Octavian Tanase. Octavian, thanks for joining me.

Octavian Tanase
SVP of Hybrid Cloud Engineering, NetApp

Happy to be here again.

Jeff Baxter
VP of Product Marketing, NetApp

So, Octavian, when we talk, you know, we often talk about AI in three different ways in NetApp. You want to summarize those for me?

Octavian Tanase
SVP of Hybrid Cloud Engineering, NetApp

Absolutely. So first of all, we're looking to help our customers deploy AI with efficiency and automation. Number two, we're looking to embed AI and ML in our products. And number three, probably what I'm most excited about, is the use of generative AI in engineering to improve productivity.

Jeff Baxter
VP of Product Marketing, NetApp

Yes, so since you're most excited about it, let's start with that one, right? How do we use generative AI within our engineering organization?

Octavian Tanase
SVP of Hybrid Cloud Engineering, NetApp

So generative AI, as the name suggests, is about helping generate an outcome: sometimes code, maybe documentation, maybe translation of, let's say, legacy automation into something cool, something modern, that our engineers could use to become more productive. So it's all about using, perhaps, OpenAI or open source LLMs that we deploy within our infrastructure, making sure that we maintain copyright and do it in a very safe way, to generate code and to do assisted development through copilots.

Jeff Baxter
VP of Product Marketing, NetApp

Awesome. So NetApp's already taking advantage of AI to make our employees more productive. Now, how are we embedding AI in our products to help our customers?

Octavian Tanase
SVP of Hybrid Cloud Engineering, NetApp

So this is an area where we have a little bit of pedigree. We've been doing that for quite a few years. We started with OpenAI and predictive analytics. The intent here is to replace a lot of the heuristics that we built into our products over the years to make good decisions on behalf of customers with an AI/ML module. So think about tiering: rather than, you know, being done based on some static heuristics, you have an AI module that learns from the dataset and the IO into a controller, and does the right thing of understanding when data is cold and could be moved to a more cost-effective storage tier.
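To make the heuristics-versus-learning contrast concrete, here is a minimal sketch (my own illustration, not NetApp's actual tiering algorithm; the class and parameter names are invented) of a policy that learns each block's typical access rhythm and flags it as cold only when it goes quiet relative to its own history, instead of applying one static age threshold to every workload:

```python
from dataclasses import dataclass

# Illustrative sketch only -- not NetApp's actual tiering algorithm.
# All names and parameters here are invented for the example.

@dataclass
class Block:
    last_access: float        # timestamp of most recent access (seconds)
    avg_gap: float = 0.0      # smoothed average gap between accesses
    accesses: int = 0

class AdaptiveTierer:
    """Flags a block as cold when it has been idle much longer than its
    own learned access rhythm, rather than past a fixed age cutoff."""

    def __init__(self, alpha=0.3, quiet_factor=4.0):
        self.alpha = alpha                # smoothing for the moving average
        self.quiet_factor = quiet_factor  # "cold" = idle >> usual gap
        self.blocks = {}

    def record_access(self, block_id, now):
        b = self.blocks.setdefault(block_id, Block(last_access=now))
        if b.accesses > 0:
            gap = now - b.last_access
            # First observed gap seeds the average; later gaps update it.
            b.avg_gap = gap if b.accesses == 1 else (
                self.alpha * gap + (1 - self.alpha) * b.avg_gap)
        b.last_access = now
        b.accesses += 1

    def is_cold(self, block_id, now):
        b = self.blocks.get(block_id)
        if b is None or b.accesses < 2:
            return False  # not enough history to judge
        return (now - b.last_access) > self.quiet_factor * b.avg_gap
```

With this policy, a block touched every 10 seconds is flagged cold after roughly 40 idle seconds, while a batch dataset touched once an hour stays hot through hours of silence; a single static "idle for 10 minutes means cold" rule would misclassify one of the two.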

Jeff Baxter
VP of Product Marketing, NetApp

Awesome. So just a few examples there of how we use AI to make ourselves more productive, how we use AI to make our products even better, and you'll be hearing a lot more throughout Insight and all the keynotes about how we help you implement AI at your companies. But now, I teased it before, I want to go ahead and introduce to you the world premiere of the new Matt Watts 'What's the Future?' video, all about AI. Go ahead, Matt, take it away.

Matt Watts
Chief Technology Evangelist, NetApp

Hi there, I'm Matt Watts. Welcome to another edition of What's the Future?, which we're filming here at our U.K. office in Windsor. This episode is about the AI monster. And we're going to do three things: we're going to talk about how we've tamed the IT monsters we've built in the past, we're going to talk to Joe Baguley of VMware about how we got here, and then we're going to talk about how to think about your entire IT estate in the context of what that means when we look at AI. There's a similar pattern that happens every time something new comes along, and that is that we tend to create monsters. So let's talk a little bit more about what that means.

... So there are typically four things that happen. The first is that some new innovation comes along, and all of the businesses rush towards seeing how they can exploit it and gain an advantage from it. Step two is that it's typically quite unstructured: different groups go out and do things because they all want to benefit from this new innovation. Which then leads to step three, which is that every other group starts to catch up, so the benefits of this innovation begin to diminish. The problem is that it has now become complex, which takes us to step four. And step four is that we simplify things, and then we go all the way back to the beginning and start the process over again. Sound familiar? Well, we've done this three times before.

If you go back to the late '80s and '90s, we used to have these monolithic systems, and those were gradually replaced over time with more modular mid-range technology. It was more flexible. It gave people the ability to innovate at much, much greater speed. It gave much more freedom, but with that freedom, it meant different groups went out and made their own choices. The Exchange team built the Exchange infrastructure, the SQL team, the SQL infrastructure, and on and on and on. We built the first monster. Then the second wave happened.

VMware came along, and it reduced the complexity of these environments. It increased the utilization, and it normalized things for quite a number of years until the third wave, cloud. And we approached cloud in exactly the same way as we approached the first wave. Lots of different groups starting to go out and choose the cloud they want to work with, the frameworks they wanted to use within those clouds, and that complexity has come back. We're now trying to solve that complexity while understanding we're at the beginning of the fourth wave: AI.

Joe Baguley
VP and CTO of EMEA, VMware

Well, when we did the second wave, there was a very simple mantra that came up in a lot of our marketing, I suppose, which was abstract, pool, and automate. What we did in the second wave to really change the world was, through virtualization of network, storage, and compute, to abstract, pool, and automate. Now look at the third wave, multi-cloud. That's exactly what we're doing again. We're abstracting all those clouds, pooling them, automating them, simplifying them, so people now treat those clouds like they used to treat servers within their data center, getting that openness and choice.

Matt Watts
Chief Technology Evangelist, NetApp

So how do we stop AI from becoming the next monster?

Joe Baguley
VP and CTO of EMEA, VMware

Guardrails. That's the number one thing I'm talking to customers about right now when they're looking at the future. What we need to do is make sure that when people are going and using all these new technologies, just like before we had with cloud, where there was shadow IT and everyone going off and using random bits of cloud willy-nilly and building things on premises, and it was all a mess, and then we had the same with Kubernetes.

It's exactly the same we're going to have now, where people are off using random AIs all over the place. What they want to do is they want to make sure they don't end up in some kind of legal mess or logistical or operational mess around what they've built and then have to unpick that. So guardrails is really important. It's actually more about governance than it is about technology for customers right now.

Matt Watts
Chief Technology Evangelist, NetApp

How important is AI to the future of many different industries?

Joe Baguley
VP and CTO of EMEA, VMware

It's fundamental in a way that people don't realize.

Matt Watts
Chief Technology Evangelist, NetApp

Yeah.

Joe Baguley
VP and CTO of EMEA, VMware

What we've got to now is a tipping point, and that tipping point is we're putting AI in the hands of normal people. And what I mean by that is, when you finally get it out there to the mass market, that's where the magic happens, where, you know, the non-geeks go and do things with it we never expected or modeled, and that's exactly what you're seeing now.

Matt Watts
Chief Technology Evangelist, NetApp

So this is the democratization of AI playing out in real time?

Joe Baguley
VP and CTO of EMEA, VMware

Completely, but it's also the democratization of technology. Back in the day, when computers first started, a few, five or six people knew how to program them, and then we got easier and easier programming languages. You know, more and more people could understand how to program a computer. Now you can talk to computers in natural language without having to understand how the computer works or what it does. That's absolutely transformational, and that's really what we're seeing right now.

Matt Watts
Chief Technology Evangelist, NetApp

Goldman Sachs says that AI cost savings are going to increase GDP by up to 7%. So let's talk to an AI specialist at NetApp about what that means. Hey, Jose.

Jose
AI Specialist, NetApp

Yeah, Matt. Organizations are starting to see how AI can multiply value for them. Companies that are able to scale their AI projects are seeing a 3x return on their investment. In fact, we're seeing customers who are getting almost 70% time savings from implementing AI in their business processes.

Matt Watts
Chief Technology Evangelist, NetApp

What role will NetApp play in the future of AI?

Jose
AI Specialist, NetApp

If you look at the advancements in AI technology over the past five years, Matt, they were mainly due to two things: the availability of compute and the availability of data for a lot of companies. If you think about it, 10 years ago, when organizations were putting together their cloud strategies, they didn't have an AI strategy in mind, so they didn't think of bringing all this data together in one place. Having access to the data wherever it sits, whether it's on the hyperscalers or on premises, is becoming hugely important for advancing AI technologies across different industries.

Matt Watts
Chief Technology Evangelist, NetApp

What advice would you give to companies that are looking to take advantage of AI?

Jose
AI Specialist, NetApp

Define your goals and metrics. Try to figure out what you want to achieve with this technology. Invest in talent and infrastructure. Choose the right tools and platforms, and when you're choosing them, look at things like cost, performance, security, compatibility, and usability. So basically, if you start with a set of goals and metrics, then you don't create that monster that we're seeing today.

Matt Watts
Chief Technology Evangelist, NetApp

So what's the future? Well, we need to learn from the past to make sure that we don't keep making the same mistakes again. We need to make sure we don't build another monster, and we're going to be able to do that if we start thinking about intelligent data infrastructures, finding consistent ways to do things in this world of AI, consistent ways of storing, protecting, managing data, of creating data pipelines, of dealing with security.... If we get that right, intelligent data infrastructure becomes the sword we'll use to kill the next monster.

Jeff Baxter
VP of Product Marketing, NetApp

Wow! That was absolutely amazing. What'd you think, Octavian?

Octavian Tanase
SVP of Hybrid Cloud Engineering, NetApp

AI, it's exciting. I really appreciate Matt's insights.

Jeff Baxter
VP of Product Marketing, NetApp

Yeah, and if you wanna check out more What's the Future?, go to NetApp TV. It's just about time to rejoin our Insight keynote, where we'll hear from our digital twins, George Kurian and Thomas Kurian. Take it away.

George Kurian
CEO, NetApp

A lot of profound perspectives in the work and in the insight from Dr. Fei-Fei Li. Let's now invite another guest up, someone who is working on optimizing data and infrastructure for large scale analytics, data management, and AI. All key considerations for technology leaders and business leaders. A person who needs no introduction, the CEO of Google Cloud, and my brother, Thomas Kurian. Welcome, Thomas. Welcome to NetApp Insight. Thank you for coming.

Thomas Kurian
CEO, Google Cloud

Thanks.

George Kurian
CEO, NetApp

Thomas, we were at Google Cloud Next, where we introduced Google Cloud NetApp Volumes, the sort of incredible work that our two teams have collaborated on for many years, and where we are seeing significant interest from customers. Today, one of the classes of applications I know you are investing in significantly is fast, scalable, and easy-to-use AI offerings, including an AI platform, video and image analysis, speech recognition, multi-language processing, and so on. How are enterprises currently leveraging AI, what are the use cases that are delivering value for their businesses, and why should business leaders bet on Google Cloud to help them embrace AI?

Thomas Kurian
CEO, Google Cloud

You know, when we introduced generative AI, we said, "The easiest way for people to think about it is to think of a digital persona for every role that can assist humans in performing their functions." We see customers adopting the technology in two different domains. One domain is where they're transforming the core processes of the company, assisting people and engaging customers. For example, Mercedes is using our AI models to help people identify if there are issues with the vehicle. Mayo Clinic is using our AI models to synthesize all the information out on the internet, as well as from the EHR and clinical trial systems. So when a doctor meets a patient, they have all that information in front of them, searchable, so they can find answers as they talk to the patient.

Priceline is reimagining the way that you book travel. Rather than saying, "I want to find a hotel room," they say, "Why don't you have a digital travel advisor where you can say, 'I'd like to travel to Las Vegas, watch two shows, and I'm bringing my kids along, so I need a hotel room near the last show'?" So they're changing, and there are hundreds of companies doing this, they're changing the way they interact with their customers. In the same way, we also see them transforming the internal processes in a company. Procurement people are looking at using AI to find contracts that don't have indemnification warranties. Marketing people are using generative AI to create advertising, images as well as audio and video, using models to do that. HR people are using it to automate the benefits and HR help desk.

CIOs are using it to automate the way that their IT help desk works. Rather than have people have to reset passwords and things like that, models are now doing many of these tasks. And finally, you know, we have many companies who are using our AI tools to assist software engineers to write code, to generate unit tests, to refactor code, to document things. And so in every role in your working life, we're now seeing organizations adopt generative AI and models to assist people in doing work.

George Kurian
CEO, NetApp

One of the questions that we get from clients as they consider using their private data with AI tools is how do they deal with security and privacy concerns, and things like copyright, as AI becomes more mainstream? How do you all at Google Cloud help organizations deal with those concerns?

Thomas Kurian
CEO, Google Cloud

We've always felt that what generative AI does is take AI out of the domain of a small number of people and make it available to every developer and every user in an organization. We do that by putting together a foundational platform we call Vertex. Vertex has all the tools that organizations need in order to use AI efficiently. First of all, it starts by protecting your data. We guarantee that your data is your data and nobody else's. It runs in a private VPC. No one, not even Google employees, has access to your data. So as a customer, you can build models securely and safely. We have a responsibility and safety framework that protects, for example, against models generating violent images, making sure models don't have, you know, abusive speech in their responses. So we have all the controls.

We have 18 types of controls. We have grounding to make sure that model answers are grounded in fact and are fresh and current, not frozen at the point the model finished training. You have tuning and distillation to shrink the model so that it can be the most efficient cost-wise and also latency-wise, so that it's very quick. Then we added search and conversations so that you can use Google Search, but essentially for your data, and you can also build conversational applications, both to retrieve information and do tasks on your behalf. To open up enterprise data systems, we've introduced a technique called embeddings and vectors, so you can take data and essentially open it up.

For all of those of you who have invested for so many years in storing your information in NetApp volumes, think about what we've done as putting Google search in front of all the data you have and opening up conversations so you can essentially chat with your enterprise data safely and securely.
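Conceptually, the embeddings-and-vectors approach described here maps each piece of text to a numeric vector and answers a query by finding the nearest vectors. A minimal, self-contained sketch of that idea, using a toy hash-based bag-of-words embedding in place of a real embedding model, and hypothetical sample documents:

```python
import hashlib
import math
from collections import Counter

def embed(text, dims=64):
    # Toy embedding: hash each token into a bucket of a fixed-size vector.
    # A real system would call an embedding model instead.
    vec = [0.0] * dims
    for token, count in Counter(text.lower().split()).items():
        bucket = int(hashlib.md5(token.encode()).hexdigest(), 16) % dims
        vec[bucket] += count
    return vec

def cosine(a, b):
    # Cosine similarity: angle-based closeness of two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def search(query, docs, top_k=2):
    # Rank documents by similarity of their embeddings to the query's.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:top_k]

# Hypothetical documents standing in for files on an enterprise volume.
docs = [
    "NetApp Volumes provides shared NFS and SMB file storage in Google Cloud",
    "Quarterly marketing budget review and headcount planning notes",
    "How to mount an NFS share exported from a cloud volume",
]
print(search("NFS file storage volumes", docs, top_k=1))
```

Production vector databases replace the linear scan with approximate nearest-neighbor indices so the search stays fast at millions of documents.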

George Kurian
CEO, NetApp

You mentioned Vertex AI, and you mentioned some of the awesome advancements you've made. How about we actually show our customers and partners the power of Vertex AI with Google Cloud NetApp Volumes? Shall we do that?

Thomas Kurian
CEO, Google Cloud

Let's do it!

George Kurian
CEO, NetApp

To show you that, let's bring up Eiki Hrafnsson, VP of engineering in NetApp's cloud engineering team. This is a live demo. There's no, you know, fake stuff here. So Eiki, have at it. Welcome, Eiki.

Eiki Hrafnsson
VP of Cloud Engineering, NetApp

Thanks, George.

I'm excited to show you all the incredible things you can do today with Vertex AI and NetApp Volumes. So let's pick a volume and get started. All right, up in the corner here, I see my volume. I see the internal IP address that it has and that it's an NFS share. It has some folders on it, a bunch of files there, basically unstructured data. Now, this is a demo about the generative AI capabilities of Vertex AI when mixed with NetApp Volumes. So let's look at a use case that's simple but real-world. Let's imagine that we work at the marketing department of an enterprise company. We were just tasked with creating a product landing page that's supposed to be ready yesterday.

Now, we can have a look at all of the data that we have here and try to read through it all, but luckily, we have an AI that can sift through all of our data and find the relevant things. So let's ask Vertex AI about this topic: Tell me about Google Cloud NetApp Volumes. The product page is about NetApp Volumes, and almost instantly, Vertex AI finds all the relevant files that you can see down here. It generated a summary based on the topic from our private data, not publicly available information, and it even told us which pages it used to generate its responses. In addition to that, it's giving me key insights for each relevant file, and with a click or a prompt, I can even generate more.

Like, I want to see a summary of all the relevant files here, or tell me which people are mentioned within these files. This is all made possible by vectorizing our unstructured data sitting on NetApp Volumes, and in this case, we have both the data and the vector database indices on the volume itself, ensuring privacy for our data in the cloud. Now, back to our goal here. We could go ahead and generate some more information with prompts, or we could read all of these files and try to generate the product page, but we could also just chat with our data. So let's do that. Cool. Let's open up a chat here. I've created some suggestions just to get the conversation going.

So I want to ask it first to give me a two-line summary of Google Cloud NetApp Volumes that I can use on a web page. It replies instantly. Great, that looks good. Let's do some more. What are the key features of Google Cloud NetApp Volumes? Yep, took a second. Key benefits? These are all typical things you would find on a product landing page. And the last one: what are the top five reasons to use Google Cloud NetApp Volumes? Awesome. So we are getting all of these responses from the AI, but I want to pull it all together with some code. So let's ask Vertex AI if it can create that page for me. Can you create an HTML product page with your responses? Cool. The only thing it's missing is an image.

So let's use the Vertex AI vision API to generate an image, and I'll give it a prompt here. Let's say, "cloud data storage artwork." Okay, cool! The HTML is back. I can have a look at that. Yep, it's using all of the things that I asked it for: key benefits, top five reasons. Great. And it's generated an image for me. That looks cool. I want to add that image somewhere in here, so I'll add it after the first headline. So I'll ask Vertex AI if it can do that for me. Can you add the generated image below the first headline on the page? Okay. Now, when working with code, you can use either the text model or the code chat model. It really just depends on the complexity of what you're working with.

So once it generates that, I'll see if it worked. Yep, look at that. It's added the image file right after the first headline. But this is live, so bear with me. Let's save this response to a file and then see what it looks like. Boom! Isn't that just like magic? That's pretty amazing. What we've seen here is, you know, the power of Vertex AI with NetApp Volumes. We've seen vector embedding and search, conversational large language models, image generation, code generation, all on our unstructured data, right on top of NetApp Volumes.
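The chat portion of the demo follows a retrieval-augmented pattern: retrieve the relevant snippets first, then ground the model's prompt in them so answers come from the private data rather than the model's training set. A rough sketch of just the prompt-assembly step, where the file names and snippet text are hypothetical placeholders, not the demo's actual data:

```python
def build_grounded_prompt(question, snippets):
    # Ground the model in retrieved snippets and ask it to cite its sources.
    context = "\n\n".join(f"[{name}]\n{text}" for name, text in snippets)
    return (
        "Answer using ONLY the context below, and cite the file names you used.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Hypothetical snippets, standing in for files retrieved from the NFS volume.
snippets = [
    ("overview.txt", "Google Cloud NetApp Volumes is a fully managed file service."),
    ("features.txt", "It supports NFS and SMB, snapshots, cloning, and replication."),
]
prompt = build_grounded_prompt("What are the key features?", snippets)
print(prompt)
```

Because the retrieved context travels with each request, the underlying model never needs to be retrained on the private files.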

George Kurian
CEO, NetApp

What about that QR code on the top right?

Eiki Hrafnsson
VP of Cloud Engineering, NetApp

Right, right, right. Hey, proof's in the pudding. If you scan that QR code, you're going to see the page that we just generated.

George Kurian
CEO, NetApp

Awesome.

Eiki Hrafnsson
VP of Cloud Engineering, NetApp

Everything was live.

George Kurian
CEO, NetApp

Awesome. Thank you so much.

Eiki Hrafnsson
VP of Cloud Engineering, NetApp

Thanks, George.

George Kurian
CEO, NetApp

It's really powerful to bring together the most advanced AI capabilities in the world with the best unstructured data management capabilities in the world to help you make breakthrough advances... So easy and simple. I would ask all of you, talk to your Google team, talk to your NetApp sales team, and let's find a way to use the power of Vertex together with your unstructured data. One final question, Thomas. AI, and particularly generative AI, consumes massive amounts of energy and requires higher compute power. How are you and the team at Google looking to strike the right balance between AI and maintaining your sustainability objectives?

Thomas Kurian
CEO, Google Cloud

Google has been carbon neutral since 2007. That's 16 years now. We've publicly said that we are committed to being carbon-free by 2030. For us, it's not a question of AI or sustainability; we have to do both. There are many, many techniques that we have invented, starting from data center design, where data centers to run AI systems can be designed in a different way than data centers for traditional computation. We introduced, roughly six years ago, water-cooled AI systems, because you get roughly a 30%-40% lift in overall throughput and machine efficiency. We've invested in many techniques on compilers, for example, to really speed up both serving, or inferencing, and training. More recently, we've also introduced techniques like distillation.

So for example, if you're part of our preview, you can use a model that has many of the writing skills that our full AI system has, but you can actually run it in Gmail when you type "Help me write," and it runs directly on the Android or iOS phone. That's because we've shrunk the skills down. All of that is designed to reduce the cost and the amount of power consumption, and also to improve latency and efficiency. You saw how quick the model was in responding. Smaller models are better in terms of latency and responsiveness as well. And so I would encourage all of you to try some of these capabilities. If you saw the demo, there was no coding or anything required.

The models allow you to interact with them just using text, and we're opening it up for everybody to do these things with our deep partnership between NetApp and Google Cloud.

George Kurian
CEO, NetApp

Thank you for coming to NetApp Insight, Thomas. I'm happy we got to share more about our partnership-

Thomas Kurian
CEO, Google Cloud

Thank you.

George Kurian
CEO, NetApp

And looking forward to our continued joint success. With Google Cloud NetApp Volumes, you have an amazing solution for any general file use case, like VMware, SAP, enterprise applications, and databases, and also a platform to help you accelerate the use of advanced AI capabilities like Vertex AI with your existing private data. Now, we've talked about AI. We now want to talk about security. We know that for AI-driven businesses, keeping your data secure is more important than ever. You saw from Dr. Li's conversation that data is as fundamental to success in using AI as the algorithms themselves. And in a world where there is a wide range of models, how you manage your private data and harness it for competitive advantage often becomes your only durable source of competitive advantage.

We also know that for the last 15 years or so, cybersecurity has tried to protect everything: computers and desktops and servers and networks and identity and access controls and so on. And it's proving to be a really difficult challenge to keep the bad actors away from accessing your most important asset, your digital intellectual property, your software code, your movies, your unstructured data, and so on, as well as your important data, like customer and employee data. One way to think about the problem is to stand it on its head. If you assume the inevitability of being compromised, then you would prioritize protecting the most important asset from being attacked and accessed. And these assets are the crown jewels, your data, and need to be overprotected compared to other assets.

Therefore, they require a commitment around investment and controls and reporting and governance to make sure that these, your most important assets, your data and your digital intellectual property, don't get compromised. Tomorrow, you'll hear about why NetApp is stepping up to offer the industry's most secure data storage. We take seriously our responsibility in ensuring the security of your environments and data estates, and are offering to protect your assets with the strongest guarantee in the industry, backing up our position that we are making available to you the industry's most secure data storage. Stay tuned for that set of announcements at our keynote tomorrow morning. But before that, let's hear from one of our nation's most respected experts about how security approaches and policies are changing to adapt to the new AI normal. Director Jen Easterly is the director of the Cybersecurity and Infrastructure Security Agency, CISA.

She was nominated by President Biden in April 2021 and unanimously confirmed by the Senate on July 12, 2021. As director, Easterly leads CISA's efforts to understand, manage, and reduce risk to the cyber and physical infrastructure that all Americans rely on every day. A two-time recipient of the Bronze Star and graduate of West Point, Director Easterly retired from the U.S. Army after more than 20 years of service in intelligence and cyber operations. In her role now, she is relentlessly focused on innovating responsible, secure AI for government, literally setting the standard for AI cybersecurity. Please welcome Director Jen Easterly. Welcome.

Jen Easterly
Director, CISA

I thought they were gonna do Thunderstruck.

George Kurian
CEO, NetApp

Thank you so much for being here.

Jen Easterly
Director, CISA

Yeah.

George Kurian
CEO, NetApp

It's a privilege to have you and your experience and thought leadership on this topic. You know, Director Easterly, you've worked in the area of cybersecurity for the majority of your career, from your time in the U.S. Army to the National Security Agency, then as special assistant to President Obama and Senior Director for Counterterrorism before joining CISA in 2021. And prior to that, you were the head of Firm Resilience at Morgan Stanley. What, if anything, feels different about this moment in security, particularly through the lens of the recent acceleration of AI capabilities, global conflicts, and misinformation?

Jen Easterly
Director, CISA

Yeah. Well, first of all, great to be here.

George Kurian
CEO, NetApp

Thank you.

Jen Easterly
Director, CISA

Thank you for having me. So I guess it's a three-part answer. So I do think it feels different materially, and I think that is because of the incredible developments that we've seen over the past year, specifically with generative AI. I think it's captured the imagination, frankly, of people around the world, and I think it's the job of great leaders to be able to leverage the power of imagination but avoid the failure of imagination. So you think about some of the excitement with these capabilities, but there's also a lot of threats that are implicated.

When you think about the global threat landscape, with adversaries, rogue nations, cybercriminals, terrorists, what's happening in the Middle East, the acceleration of misinformation and disinformation, particularly with the election coming up, it really makes you be thoughtful about how we develop these capabilities and how we govern them. That's why it's so important to be able to put in place measures for responsible innovation. Incredibly important, 'cause I think these are not only gonna be the most powerful capabilities of our time, I think they're also gonna be the most powerful weapons of our time. If you think about the most powerful weapons of the last century, they were actually built and safeguarded by governments who were disincentivized to use them. These capabilities are being built largely by the private sector, who are essentially driven and fiduciarily responsible to maximize profits for shareholders.

So we have to come together between the private sector and the public sector to ensure that we can reap the benefits of these incredible technologies while mitigating the risks. So that's, that's really the context. The second is how to think about AI, and very simply, I think about it as software. And I look at it through the lens of the short history of information technology. Let's just go back 40 years to 1983, when the TCP/IP protocol was implemented to allow computers to talk to each other. Well, since that period of time, from the internet to software to social media, none of that was ever, ever created with security in mind. It was all about speed to market and cost and features, and security ultimately was bolted on.

At the end of the day, that's why we have a multibillion-dollar cybersecurity industry. I really think we don't need more security products; we need more secure products. That is really incredibly important, and that's what's behind our efforts around secure-by-design technology, technology safety. Technology that is created, designed, tested, and deployed so that security is the top priority. And I think about AI as just another form of software that has to be secure from the beginning. Third part, what are we doing? Well, I don't know if you've noticed this, but everybody in government is doing something with AI. We're all trying to catch up, and actually, the administration is putting out a very comprehensive executive order, I think in the next couple of weeks.

At the same time, we are gonna put out a roadmap that essentially lays out our operational lines of effort, and they're all about the nexus between AI, cyber defense, and critical infrastructure. So first, how can we optimize these capabilities for cyber defense? There are a lot of amazing things we can do that are really an extension of what we've been doing for years with machine learning. Second, how can we protect critical infrastructure from adversarial AI? Not just cyber, but think about bioweapons, think about chemical weapons. And third, how do we assure AI systems? Again, an extension of some of the red teaming work that we're doing to ensure that we can identify, detect, and remediate vulnerabilities in software.

And to that point, I'm actually heading to London next week for the AI Safety Summit, and we've been working with our partners in the U.K., the National Cyber Security Centre, to develop a secure code of practice for AI, getting terrific feedback from industry. We're excited to get your feedback as well, and from our international partners. And that will be the first sort of thing that comes out in terms of what the guidelines are that developers need to adhere to as they're developing AI capabilities.

George Kurian
CEO, NetApp

You mentioned, you know, the need to be secure by design, and one of the, you know, kind of hallmarks of that is the responsibility of the technology providers and the solution providers, a large number of whom are here in the room, to guarantee, or at least step up in their commitment, to make the products and solutions that we offer secure by design. Let's pull on that thread a little bit more.

Jen Easterly
Director, CISA

Yeah.

George Kurian
CEO, NetApp

Where do you think we need to be? You mentioned quite a bit about where we are today.

Jen Easterly
Director, CISA

Yeah.

George Kurian
CEO, NetApp

What would you say, in a 10-year period, the aspiration should be for the industry to work together-

Jen Easterly
Director, CISA

Yeah.

George Kurian
CEO, NetApp

-to solve?

Jen Easterly
Director, CISA

So it's a great question. Look, we are catalyzing a Secure by Design revolution. And okay, we've been talking about this, technologists have been talking about secure tech for a long time, but the incentives have all been misaligned. As I said, the incentives are about cost, capability, speed to market, not about security. So that needs to change. You know, I was reading one of your blogs on ransomware, some of the capabilities that you've developed to deal with ransomware, and there was a stat in there that said, "By 2031, there's gonna be a ransomware attack every two seconds."

George Kurian
CEO, NetApp

Wow!

Jen Easterly
Director, CISA

Then, if you look at the Consortium for Information and Software Quality, they put the cost of poor software quality at $2.41 trillion just for 2022, just in the US. And if you look at the cost from global cybercrime, it's upwards of almost $10 trillion in the coming two years. So think about that over ten years; that's not sustainable. We can't live in that world, particularly because everything that powers our lives is digitized. Everything that we rely upon in critical infrastructure, our water, our transportation, our communication, our healthcare, our education, is underpinned by a technology base. So we cannot accept that this technology comes off the line with dozens and hundreds of vulnerabilities, because we depend on it. And that's what we're trying to catalyze: this Secure by Design revolution.

We published the first document in April. It laid out high-level principles for technology that's Secure by Design. And we didn't put them in nerd speak. I mean, I like nerd speak, but we put them in business principles, because at the end of the day, this is about business. One, technology and software manufacturers need to own the security outcomes for their customers. You know this very well.

George Kurian
CEO, NetApp

Absolutely.

Jen Easterly
Director, CISA

Two, software manufacturers need to embrace radical transparency and accountability for what is in your software. And three, businesses need to organize for security. It needs to be at the very top, or it's not gonna be a priority. And it's really about boards and C-suite leaders, and leaders at all levels, CEOs, embracing cyber risk as a matter of corporate cyber responsibility, as a business matter, as a matter of good governance. So we laid out those principles. We asked for feedback from industry, our international partners, security researchers, and academia.

We got fantastic feedback, and then we launched the next version last week when I was in Singapore, with 13 countries, and it goes much deeper. I know your company is like 40% or maybe more of engineers, and this whole room is probably technologists and engineers. I would love it if people go to our website, CISA.gov, look at the Secure by Design and give us feedback, 'cause really what we're trying to identify is what does right look like, both for software manufacturers, but really importantly, for consumers.

George Kurian
CEO, NetApp

Correct.

Jen Easterly
Director, CISA

Consumers need to understand what to ask for so that they are as safe as possible. And so we're really, really excited about this, and we feel like we're making some progress. But this is gonna take a long time. You know, I kind of joke about technology now is a version of unsafe at any speed. You remember back in the mid-60s, Ralph Nader wrote the famous book, Unsafe at Any Speed, because car crashes were blamed on bad drivers. Now, it took 20 years to get seatbelt legislation. We can't wait that long 'cause our whole life is dependent upon technology. And, you know, just as we wouldn't get in a car without seatbelts, we don't wanna be running around with tech that's inherently unsafe. I would ask everyone to join us and definitely give us feedback on the document.

George Kurian
CEO, NetApp

Thank you. We certainly will. You know, one final question: as you know, CISA and NetApp are working together through the IT Sector Coordinating Council to develop best practices for AI security. We're also both partners in the Joint Cyber Defense Collaborative, the JCDC, with other cyber defenders. Can you say a bit more about these projects, as well as others your office is leading that are charting the course for security and data management, for those in the audience who may not be aware? There are probably many organizations here that could benefit from and strengthen the collaborative.

Jen Easterly
Director, CISA

Yeah. So CISA is... Hopefully, most people in this audience know CISA, but we're the newest agency in the federal government. We were built five years ago. Our birthday is coming up in November-

George Kurian
CEO, NetApp

Happy birthday.

Jen Easterly
Director, CISA

-to be America's Cyber Defense Agency. And so our mission is to reduce risk to critical infrastructure, but we're not a regulator. We don't collect intel, we're not law enforcement, we're not military. So everything that we do is by, with, and through partners, based on our technical expertise and the services that we provide. And so partnership is really in our DNA, and that's what's behind the Information Technology Sector Coordinating Council, which Kristen Verderame sits on, terrific partner. And then the Joint Cyber Defense Collaborative, which we stood up based on new authorities from Congress that we got at the beginning of 2021, to essentially be one platform where you bring together the federal cyber team, CISA, NSA, FBI, CYBERCOM, to work with-

industry, critical infrastructure, to understand what the threat environment is, to put those pieces of the puzzle together, and then to drive down risks to the nation. We started out with about 10 of the biggest tech companies, and now we're over 250, and we very successfully leveraged it to deal with some really serious threats during the Russian invasion. And since then, we've been working very closely with the Ukrainians, Log4j, all of the serious vulnerabilities and threats. And so it's been a journey, and it is a journey, not a destination, but it really is a different way of thinking about partnership. It's not just plain old, hackneyed, tired public-private partnership.

It's true operational collaboration, where you realize that a threat to one is a threat to many, where you have reciprocal obligations of transparency and responsiveness, with the government adding value, where industry doesn't have to worry about sanction if they share information, and where you have a frictionless experience, so you have scalable platforms for sharing information. It's been really encouraging to see companies like yours and others joining, not because they're trying to sell to the government, but because they realize it's the right thing to do for the nation, because wherever those capabilities and that technology are deployed around the world, it can really have an impact on driving down those vulnerabilities. We're excited about that. You know, the last thing that I'll mention that we're doing with partners is our first-ever cybersecurity public service awareness campaign.

I think it's playing on some of the TVs around here. It was inspired by one of my favorite directors, Wes Anderson. And really, even as we do secure by design, corporate cyber responsibility, and operational collaboration through the JCDC, it's the imperative for all of us to be good digital citizens, to make cyber hygiene as common as brushing our teeth or washing our hands. These are the basic things that we all need to do to keep ourselves safe, our families safe, our communities safe, our businesses safe, because, as you well know, we can't do it alone. You can't do it alone. It has to be a partnership.

George Kurian
CEO, NetApp

Absolutely. Thank you, Director Easterly. Thank you for joining us at Insight. Awesome collaborative work and innovation and thought leadership. Ladies and gentlemen, Director Jen Easterly. Thank you so much for being here. As we discussed earlier this year, the foundation of a modern data architecture is an intelligent data infrastructure. So let's get to the third topic of today's discussion. An intelligent data infrastructure is built on the foundation of silo-free, unified data storage. You can't do all of what we mentioned, re-platform your transactional layer, build data as a product, operate in an integrated, hybrid, multi-cloud model, integrate security as a foundational tenet of your data architecture, on top of silos, fragmentation, technical debt, risk from diverse operating models, and on and on. Our competition tells you that you need a silo for every workload.

Let's take a step back and take a different view. As I told you earlier, you know, Hadoop was trying to be a silo that promised to radically improve analytics and make the world of big data so much better. There were a lot of customers that deployed Hadoop as a top-down, monolithic architecture. As they've realized that the world of analytics has gone way beyond Hadoop, the nature of data has changed radically from the original assumptions they made when they created their Hadoop architecture, with the advent of modern, scalable computing infrastructures, event-driven architectures, and Kubernetes as a scheduling platform for modern workloads. You got trapped. You got trapped in an analytics platform that is out of date, in a computing model that was out of date, and in a data storage model that was out of date.

That's what happens when you build a silo: you're trapped. We believe the opposite. One architecture for shared storage, not only in your data center, but in all the leading public clouds, with integrated data services, AI-powered observability, so that you can operate your infrastructure with flexibility, with unification and integration, so that you can evolve your transactional data layer very, very quickly, giving you lower risk, greater operational efficiency, and most importantly, taking you places that your business needs to go and where your data needs to reside in a secure, private, controlled manner. Our enterprise, our enterprise-grade data storage portfolio is the industry's best. The only truly unified data storage for any app, any data, anywhere, with unmatched integrated data security, savings, and a priority on sustainability and energy efficiency, powered by the best data storage OS for all data, native cloud integration, and unified control.

No other vendor comes remotely close to delivering on these capabilities. The power of having a single architecture translates into faster innovation delivery from us, and into simplification, unification, and the ability to mitigate risk while maximizing flexibility for you. You can take on any new application and any workload, anywhere, at any time you want. You can take the data from your on-premises environments to the public cloud. You can move from virtual machines to containers.

You can build data security in once and manage it across your estate, so that all of your assets, everywhere in the world, are protected by design. And you can consume it the way you want, when you want, where you want. Tomorrow, we are proud to announce some awesome new additions to our portfolio that expand on these promises. Come to our keynote. But before we do that, I want to show you our portfolio in action by sharing a conversation between our president, Cesar Cernuda, and Mike Baylor, Chief Digital and AI Officer of Lockheed Martin. Let's roll the video.

César Cernuda
President, NetApp

Mike, it's great to chat with you. We're thrilled to share your story here at Insight. I understand you can't be with us in person because of the incredible work that Lockheed Martin is doing. Why don't you tell us about Lockheed Martin's mission?

Mike Baylor
Chief Digital and AI Officer, Lockheed Martin

Cesar, thanks so much for having me, and thanks for sharing our story. At Lockheed Martin, we solve complex challenges, advance scientific discovery, and deliver innovative solutions that help our customers keep people safe. As a global security and aerospace company, the majority of Lockheed Martin's business is with the U.S. Department of Defense and U.S. federal government agencies. The remaining portion of Lockheed Martin's business comprises international government and commercial sales of products, services, and platforms.

César Cernuda
President, NetApp

Mike, many of us think that Lockheed exclusively supports U.S. military and national security, but there's also impactful work you're doing in scientific research. We've heard a lot about the advancements using artificial intelligence in your Center for Innovation. I understand that you have developed cutting-edge technology using predictive AI with atmospheric and weather-related data to predict, reduce, and even prevent damage from natural disasters. Can you tell us more about all this?

Mike Baylor
Chief Digital and AI Officer, Lockheed Martin

Sure. The Center for Innovation, or, as we call it, the Lighthouse, is a unique facility for integration, modeling, simulation, and decision analysis in both real and synthetic environments. Within the Lighthouse, we built what we call the AI Integrations Lab. It's an internal ecosystem for developing and productizing AI solutions at scale for civilian and military applications. Wildfire prevention and prediction is one really good example. Imagine if you could predict a fire, where it might likely occur, or detect active fires faster. Lockheed Martin's AI systems can use information on the current state of the fire and the local environment to predict its future behavior in minutes.

This information can help deliver critical intelligence to assist firefighters in making faster, more accurate decisions. We're also working with digital twins in this space. Digital twins can capture high-resolution, accurate, and timely depictions of global conditions using current satellite and ground-based observations. We are developing a real-time digital-twin recreation of a fire and assets to enable real-time, AI-enabled, fully interactive mission management.

César Cernuda
President, NetApp

That's amazing. So the Lighthouse harnesses atmospheric data at massive scale and uses AI to read, evaluate, and predict in real time. What technology solutions make this possible, Mike?

Mike Baylor
Chief Digital and AI Officer, Lockheed Martin

Yes. We partner with NetApp and NVIDIA to make our work in the Lighthouse possible. To take a step back, at Lockheed Martin, we use NetApp storage and cloud data services throughout our environment, not just for our research within the Lighthouse, but also to support our initiatives in other areas of the business, like space and aviation. NetApp solutions let us choose the right mix of resources to stay agile and innovate to accomplish our mission. In fact, the AI Integrations Lab runs largely on NetApp all-flash storage.

We utilize Keystone, so we have flexibility to grow and innovate quickly without large CapEx. And as I mentioned, NVIDIA is a key part of that as well. The NVIDIA Omniverse Nucleus allows us to use AI and machine learning to ingest and make sense of all of this ongoing data collection together. This enables collaboration and data sharing across multiple tools and between researchers, and much of this is made possible by the data infrastructure that we've been able to build using NetApp and NVIDIA solutions.

César Cernuda
President, NetApp

Mike, thanks for your trust. We really appreciate it. What a powerful example of AI being used to prevent disaster and save lives. We're proud to partner with you and NVIDIA to make this solution possible, and we look forward to our continued partnership.

Mike Baylor
Chief Digital and AI Officer, Lockheed Martin

Thanks again, Cesar. We are excited to continue making great advancements in the realm of AI, disaster prevention, and beyond. Have a great time at Insight 2023.

George Kurian
CEO, NetApp

To continue this discussion around the Lockheed Martin use case, I'm thrilled to welcome NetApp President, Cesar Cernuda, to join me on stage. Cesar, welcome to NetApp Insight. Welcome.

César Cernuda
President, NetApp

Welcome.

George Kurian
CEO, NetApp

Thanks so much for joining me. What an awesome conversation you had with Mike Baylor.

César Cernuda
President, NetApp

I did, and thank you so much for having me here, and thanks to everybody. It's my great pleasure to be here with all of you at Insight 2023, and to bring real customer stories here.

George Kurian
CEO, NetApp

It's awesome. We love them! I know you've spent a lot more time with Mike than just the video shows. What have been your biggest takeaways or learnings from those discussions?

César Cernuda
President, NetApp

Yeah, actually, George, you see, I was quite impressed with the life-saving impact that they've been sharing with me. Globally, we've seen a rapid acceleration of severe weather events, including wildfires, and these tragedies have been catastrophic, as everybody knows. That is why Lockheed Martin's efforts require the most advanced technology, and that's what they were sharing with me. And as you have seen, AI is at the core of what they're doing.

George Kurian
CEO, NetApp

Coming from California, it's so impactful to witness the life-saving use of technology. The particular use case Mike was talking about is so relevant.

César Cernuda
President, NetApp

You're right. And as you know, this work is done not just in the U.S., but worldwide. For organizations to fully take advantage of advanced AI and machine learning technologies, they require a future-focused approach to data. As Mike shared, the data infrastructure at the foundation of the AI Integrations Lab is what really allows them to deliver the incredible solution they shared with us.

George Kurian
CEO, NetApp

Awesome, so let's talk about that. At the core of the AI-driven solution you just saw is a data infrastructure built primarily on NetApp all-flash storage, enabled by NetApp Keystone. Keystone has been the foundation as the project has grown, and what started as a smaller internal AI use case has evolved to the current massive scale of the digital twin of the Earth.

César Cernuda
President, NetApp

That's right.

George Kurian
CEO, NetApp

You know, we're honored to partner with Lockheed Martin and NVIDIA to make this solution possible.

César Cernuda
President, NetApp

You're right. And if you, or anybody, is interested in learning more about the Center for Innovation, be sure to check out the session tomorrow. I believe it's at 11:15 A.M. I don't know the breakout room, but you can find that out. And, look, it has been great to come here and share this story with all of you.

George Kurian
CEO, NetApp

Cesar, it was a pleasure, and I know that you're welcoming another awesome customer here today.

César Cernuda
President, NetApp

That's right.

George Kurian
CEO, NetApp

So I'll let you take it away.

César Cernuda
President, NetApp

You're going to be back on stage, right?

George Kurian
CEO, NetApp

Yes.

César Cernuda
President, NetApp

Thank you so much, George. So, as we transition here, let's switch gears to the next speaker and get into the world of fantasy, because this next customer has been using the power of AI to bring entire fantasy worlds to life for the last 30 years. As a leader in visual effects and animation, you've seen their artistry and visual impact in movies like Avatar, Guardians of the Galaxy, Planet of the Apes, Lord of the Rings, and many more. Here's a look at some of their incredible work. Let's play the video.

Please join me in welcoming Kathy Gruzas, CIO of Wētā FX. There you are. Thank you so much, Kathy.

Kathy Gruzas
CIO, Wētā FX

Thank you.

César Cernuda
President, NetApp

Please grab a seat. Kathy, welcome to Insight 2023, and thank you for joining me today to share the world of Wētā FX with us. I think everybody enjoyed the video. I recognize so many scenes from films that have taken the world by storm, and we're excited to have you here to share some of those stories with us.

Kathy Gruzas
CIO, Wētā FX

Cesar, I'm so happy to be here. Thank you for having me.

César Cernuda
President, NetApp

Wētā FX has won countless awards for creativity, innovation, and visual effects. The stories and experiences you've created for audiences go beyond what some of us can even imagine. Can you share some examples with us?

Kathy Gruzas
CIO, Wētā FX

We work on many major films and streamers. You mentioned Avatar earlier, which is a real standout for us. It has been such a privilege to work on this franchise, as it's unlike anything that's ever been done in the entertainment world. The original film, released in 2009, was the landmark 3D, or stereo, as it is known, film of the modern era.

The immersive nature of stereo meant the photoreal CG imagery needed to be at a higher fidelity than ever before to really draw the viewer into this magical new world of Pandora, completely reshaping the way that we think about visual effects. And more recently, we took this to the next level in the sequel, Avatar: The Way of Water. This was the most ambitious visual effects film of all time. To give you an idea, Wētā FX worked on 3,240 visual effects shots, two-thirds of which involved water.

César Cernuda
President, NetApp

Yeah.

Kathy Gruzas
CIO, Wētā FX

So much artistry, research, and development went into it, and we had to address many a technical challenge along the way and really scale out our render and storage infrastructure. But I think the end result speaks for itself.

César Cernuda
President, NetApp

Well, it certainly does. What you have accomplished in creating the Avatar films has been unbelievable. I think the resolution is outstanding, and I imagine that as image resolutions go up, the volume of data multiplies many times over. So what technology does it take to pull all this off?

Kathy Gruzas
CIO, Wētā FX

We developed a new simulation framework that allowed us to create detailed water at nearly every scale, to fully capture the look, the feel, the movement, and the reaction of water against the CG environments and characters. We also utilized new machine learning technologies to enhance the tools we use for facial animation, as well as the blending of CG and live-action sets and characters. This new tooling enabled the artists to create the amazing work that you see on screen. But in the background, bringing those pixels to life required significant quantities of data generated by a complex pipeline, and managing that takes some wrangling, as we say in the film industry. Hats off to my amazing team.

César Cernuda
President, NetApp

Well, listen, hats off to your amazing team, that's for sure. Kathy, you're steering Wētā FX's entire IT infrastructure, as well as balancing the requirements for cutting-edge equipment and technologies. Why has NetApp been the right partner for you? And I don't want to challenge you there, I'm super happy with that decision, but can you share a little bit more about that?

Kathy Gruzas
CIO, Wētā FX

So NetApp is our storage of choice for our critical artist-facing tier, as ONTAP offers the foundation and feature set for this data to be stored, accessed, and protected. We utilize your high-performance network-attached storage systems on-prem and Amazon FSx for NetApp ONTAP for both cloud rendering and virtual artist workstations. Our custom storage ecosystem, built to handle the punishing render compute workloads, allows us to really push the performance envelope for large-scale film visual effects production and the rendering required for the higher resolutions, higher frame rates, and 3D in ways that have never been done before.

César Cernuda
President, NetApp

Impressive.

Kathy Gruzas
CIO, Wētā FX

Movies like Avatar break the mold. They elevate movie magic to new heights because of these artist-enabling technologies.

César Cernuda
President, NetApp

Look, I know the Avatar movies are just one example of the visual effects magic you bring to life, and I can't wait to see what's next, Kathy. But I have to admit it has been fascinating to see behind the curtain. We're thrilled to partner with Wētā FX and be part of that magic, and we're all excited to see your upcoming films and projects, I think The Marvels, The Batman, Kingdom of the Planet of the Apes.

Kathy Gruzas
CIO, Wētā FX

Thank you, Cesar. NetApp is one of Wētā FX's longest-standing partners, and you play a really important role in helping us continue to innovate. So thank you, and thank you for having me.

César Cernuda
President, NetApp

Well, thank you so much for coming. Please, big applause for Kathy. Thank you so much.

Kathy Gruzas
CIO, Wētā FX

Thank you.

César Cernuda
President, NetApp

Really impressive work. I think everybody could see the imagery and the hard work there. The infrastructure that Wētā FX has uniquely designed has allowed them to exceed what was previously thought possible in high-end digital content creation. It's such a privilege to see and hear what our customers are doing to innovate every day. There are many cases out there, and while you are here at Insight, I would love for you to share with each other some of the things you're doing, so we can all learn from each other. And with that, let me welcome back George Kurian to the stage. George?

George Kurian
CEO, NetApp

Hey, Cesar. Awesome story. You know, Wētā FX has truly redefined visual effects, and that requires performance capabilities second to none. It's amazing to hear how much data goes into a modern film. Maybe you can share with our audience what the original Avatar was like and what's the new one like.

César Cernuda
President, NetApp

I might even ask a couple of questions about that, but as you heard from Kathy, they're able to really push performance with NetApp technology as the foundation, right? For example, in creating the first Avatar movie, do you know how many petabytes they needed back then? One petabyte. One, which was super big at that time, an unprecedented amount of data for a single film. What do you think about the latest one, The Way of Water? Do you know how many petabytes? I know you do. Probably people don't.

George Kurian
CEO, NetApp

Amazing number.

César Cernuda
President, NetApp

23 petabytes at the peak. 23 petabytes, and they rely on NetApp storage as the artist-facing storage tier for both films to provide performance and resilience, and that's what I really appreciate about the partnership with them.

George Kurian
CEO, NetApp

That's amazing. Cesar, thank you so much for joining me and sharing this groundbreaking work.

César Cernuda
President, NetApp

Thank you, George. It's my pleasure to be here with everybody. I'm excited for Insight 2023, and I'm excited to see you all. I'm sure I'll see you in the breakouts and at the opening as well. I'll be back on stage on Wednesday with some very special guests. Let's enjoy Insight. Thank you, George.

George Kurian
CEO, NetApp

Thank you.

César Cernuda
President, NetApp

Thanks, everybody.

George Kurian
CEO, NetApp

Thank you, Cesar. It's going to be a fantastic Insight. Whatever the disruption you face, you can be better prepared by being data-driven and AI-ready, using an intelligent data infrastructure. We can help you with that. Today, however, we are also cognizant that to build a more equitable and just future, with access to the capabilities that the modern world offers, we need to bring along the next generation of data leaders. And so today we are announcing an expansion of our commitment to doing just that. They will play an important part in shaping our future, and so we are expanding our investment in our Data Explorers program, offering our curriculum free to individuals, partners, and customers, and up to $100,000 in microgrants to communities in need to ensure that we're all equipped for tomorrow.

I get to witness the impact of the work this program does with little kids who get to dream big dreams, kids like me, who had somebody give them a chance, and I'm super proud of this opportunity. I ask that you all join us in helping Data Explorers have a profound impact by bringing together the next generation of data leaders. Let me close today by sharing that we're in the business of helping you, our customers, become and remain data leaders who can convert the seemingly constant stream of disruptions into opportunities. To do that, to become data leaders that are data-driven and AI-ready, you need a cohesive data strategy, an operational model that treats data as a product, and a modern data architecture built on the foundation of a silo-free, intelligent data infrastructure.

We delivered the data fabric to enable seamless hybrid cloud data storage, mobility, and protection. Today, we shared with you our vision of the roadmap ahead as we chart the next phase of our journey together, to enable you to solve the needs of modern workloads, demanding AI applications, the rapid growth of unstructured data, and the increased threat of malicious actors going after your most important asset, your data. An intelligent data infrastructure expands and builds on the data fabric, combining unified data storage, integrated data services, and infrastructure observability, automation, and optimization, so that you have an infrastructure for data that is disruption-proof and ready to translate disruption into opportunity. Building an intelligent data infrastructure is a big step toward being data-driven and AI-ready, which is what will set apart the winners from the also-rans.

Tomorrow, you will hear about real proof points of technology leadership: the world's best unified data storage portfolio, with exciting advancements; the world's most secure data storage, giving you the ability to use your data securely and privately anywhere you need it; and the world's best data pipeline for AI, allowing you to combine the world's most advanced AI tools with the world's best data management. We have got an awesome program for you at Insight. We are honored to have you here after many years, and we are excited at the prospect of our community, you, our customers, and our partners, learning from each other. Have a fantastic week. Have a great Insight. God bless. Be well. See you tomorrow.