84 days. That is how long it has been since the last time we were here at MGM in Las Vegas for Insight. Welcome everybody to NetApp Insight 2025. I'm your host, Nick Howell. We are here a little bit before the keynote just to warm you guys up. For all of you that weren't able to attend in person at the event this year, we have the live streams going on netapp.com, on our YouTube channel, all kinds of various places. Those of you that hang out in our Discord community already know that because you guys joined us. What's that? You're not in there yet? Please come and join us at Discord GG, NetApp, or just head to NetApp Discord. If you're watching these on YouTube, great, fantastic. We've got the live chat going. We'd love to hear what you guys think. This will be going.
Just to give you guys an idea of the itinerary for today, we're starting here about half an hour before the keynote. We're going to send you over there live to Mario, my counterpart over on the keynote stage, and he's going to introduce today's speakers over on the keynote. Do not go anywhere after it's done. The keynote will run for about 90 minutes, and then we'll be right back here on NetApp on Air Snapshot, breaking things down that you heard during the keynote and some special guests, special announcements just like we did last year. For those of you that watched last year, I really appreciated you guys coming back again for this year. We've got a great two-day program packed full of special guests for you guys, starting today with day one and then day two tomorrow.
That's the reason for the two different streams if you're seeing those on YouTube. I want to get things kicked off today with a good friend, my compatriot, someone that I have hung out with. He's opened the show with me before in various situations. Mr. Jason Benedicic. Jason, thanks so much for joining, man.
Thanks. It's great to be here again. MGM, loads of people.
It's like we've been here before a few times, I think, right?
Yeah, absolutely.
What I wanted to talk to you about is, you know, as I said in the beginning, it's been 384, more than a year since the last time. A lot's happened. We've seen most of the larger trade shows take place throughout the year already. I think we've got Ignite and re:Invent left to go for the rest of the next couple of months. We've seen a lot of stuff. Some of the trends that I've seen established, I mean, we can't do this without talking about AI. I'm curious to get your take on AI as it applies to maybe enterprise. We've seen plenty of it in the consumer space, but when it comes to enterprise, what should people be looking at?
I think what we've seen is a real bit of a shift in that everybody's a technology business now, right? We've seen this in the stock markets and things as well. Everybody's a technology business, but you've got two types. You've got those that provide the technology. Those are your, you know, NVIDIA's, your OpenAI's, and all of those sorts of people in the apps of the world. They provide technology that other companies use to build their business services and provide their technology. Everybody's a digital business these days. What we're seeing now is a huge shift around people wanting to leverage data, leverage the information that they have. How do they empower their support or improve their sales pipelines or whatever else?
The different technologies that we've been seeing around agentic and even just the LLMs and SLMs are all coming in to help build this better go to market, this improved efficiencies. I do a lot of work in the financial space, and there's all manner of talk of how we can utilize all this data that we get, all these data points about different financial transactions or reporting or different things around regulations and how you can adapt and improve processes as regulations change by leveraging AI. Everybody is really rushing kind of first into how do you, how do you do this best?
Yeah, it's one of the things that I'm really excited about, how fast we got here. It seems like it was really three years ago in December that ChatGPT first came out and it kind of took the world by storm. I remember machine learning and then we had neural networks, and then this kind of just came out of nowhere over the last couple of years. This crazy, crazy thing. I mean, I'll get your take on it. Is this all hype? Is there real substance here?
No, I think so. This has been around for a long time, just in a lot of different names. I remember when I first started into the tech industry out of school in like the late 1990s, early 2000s, the biotech company I was working with worked on sort of genetic algorithms and generational algorithms and things like that for drug discovery. That is kind of a precursor to what we see now as machine learning. It's been ongoing for a long time. I think what we reached was a peak of the availability of the right kind of hardware. We've had some real advancements in technology of building hardware. Just look at the major things that NVIDIA have been able to do. We've got way more raw horsepower and that's made a huge shift. We've made an exponential increase in our ability to innovate in this space.
Yeah, we've seen some big advances in the hardware just in NetApp, in the way the storage controllers have come up. Some of the new Intel CPUs and some of the sort of embedded technologies that are taking place in there are really enabling a lot of the stuff that we're able to do with AI. A lot of the stuff that people are going to hear about this week is going to be around AI solutions. Make sure you're getting signed up for those sessions. A lot of them will be available after the show for registered attendees. If you're registered, don't worry if you can't make it to all of them, we are going to have those available online for you. As far as that goes, a couple other things I wanted to jump to.
One of the hottest topics I would say over the last two years, aside from AI, has been virtualization. We've seen the acquisition of VMware by Broadcom. It's not inconsequential. There have been some side effects to that, but the way that they've strategized around it. We heard at VMware Explore a month or so ago, a couple months ago, the way that they are sort of promoting being on prem. Is that being driven by AI? It would be my question to you. There was a time five years ago where everything was being pushed to the cloud. We had VMC on AWS, we had AVS in Azure, and we had GCVE. All of those are still there in some capacity.
We've got the new EVS in AWS to do, but there seemed to be a sort of attitude coming out of VMware Explore this year of repatriation, of prioritizing your on prem workloads. What do you make of that, just to start off?
It's really interesting and this goes back to some of that conversation we were just having about this change in hardware. When we first started looking at the cloud, I think the average server was maybe four or six cores in a CPU. We've had this exponential growth in core count and density. You can do a hell of a lot more in a lot less now than we could 10 years ago. There's that driver there that's happening. Also, what we're seeing is, especially from my side of the pond, sovereignty questions and data sovereignty. A lot of people are now looking and going, actually I need my data here where I am due to certain regulations or rules. That's bringing people either into smaller private clouds or back into their own data centers.
When it comes to AI, a lot of people are saying, actually I want to be in control of my data. I want to train it and use it on hardware that I own and I control the boundaries of. It's not going to leak anywhere. I want to build my models here, train here, and then maybe I can use it outside later. There's a number of factors around hardware changes, sovereignty, and AI itself that have kind of all amalgamated into bringing this kind of crescendo of actually, virtualization is still a really great thing to have and good thing to have on prem.
I think it's going through an interesting sort of renaissance period as well. We're seeing a sort of second or third phase of virtualization happening. We're seeing now the containerization of VMs. We're seeing a lot of these services being, you know, the private AI service that they announced at VMware Explore earlier this year. They're bundling these services into what would be just your VM factory previously of vCenter and those sorts of things in vSphere. I look at this as, you know, the next, you know, evolutionary step, if you will, of what virtualization has been for the last 20 years. Speaking of which, we've seen the growth and the rise of some alternatives that I didn't want to not talk about. Obviously, the one that's been around a long time, you've got Xen, you've got Proxmox, that are both seeing renewed interest and things like that.
The big thing I wanted to talk to you about was our own resurgence of Project Shift. Two things real quick. One, what do you make of people looking at alternatives? Is it really just a lot of outcry or is it, do you see real people moving off of VMware in any kind of way, and why?
In the first few months of the acquisition, there was a lot of talk, but it simmered down a little bit. I've seen customers move some, but it was nowhere near as much as I expected. A lot of people were like, the cost of transition, the total cost of that move and retraining and things is too much. There's been a real core focus on re-engineering and some of the changes in the vSphere space. I think their sharpening of their focus has helped people go, actually, we can see what the product is here. We wanted to use that, but then the smaller shops, people that potentially have been repatriating out of the cloud that just were starting again and they don't maybe have those relationships, have been looking elsewhere because they can. Maybe they can start smaller.
We've had a lot of talk about Proxmox in the EU, and it's good to have these options. I love options on the table because competition drives innovation and people are on their game and that's a really good thing. You were talking about, we've been through these cycles here of different things changing. We've had a lot of hype of containerization and Kubernetes and those sorts of things, but now we're getting into that real sensible period where people are looking at it and going, right tools for the right workload and how do we bring them all together? I want my VMs, there are things I want containers for, there are things I want serverless. How do I deliver that as a unified package across these different things? How do I make it easy for people?
That's, t hat's where I think the winning is at the moment. The winning is, like, how do we bundle these things together? Especially with, like you said, this thing.
The user experience.
Yeah, the user experience. How you saw it from vSphere, the VCF announcements around the AI factory and stuff like that. If we can make these things easy to consume, then we're going to get more consumption.
Yeah, yeah. That's great for us. You know, who doesn't need nobody's. There's a famous phrase, nobody's ever going to need less storage. Right. At the end of the day, the more these sort of things get bundled together, especially for people that are already using some of these tools on NetApp storage already, it's just going to promote the more adoption of more storage. In the same way, when you say a lot of these things are getting bundled together, that's sort of been our mantra as well to a certain extent. We've consolidated all of our licensing into ONTAP One. All of the backup and data protection, all of the cyber resiliency, all of the things that you need, all of the Flex and Snap words, as I like to say, are all now part of that single license.
There's no more need to do any of that a la carte kind of stuff. We've also taken that same kind of attitude where we don't want you to buy a system and only be able to use 5% of it for all of these sort of a la carte reasons. We want you to be open to use all of the things whether you need them or not. You get to be this sort of driver of whether you're going to do these things or not. I think you're going to hear a little bit more about that kind of stuff this week. For those of you in attendance, make sure you check the session catalog. There's a lot of stuff out there about those sorts of things. I just, I said a key word, cyber resilience.
We've been pretty productive, I would say, in that space for quite some time. We've had snapshots, very space efficient snapshots for basically since our inception 30 + years ago. We've done something really cool where we've put it in between a logical layer. There's an ability to scan incoming writes and judge behavioral action on them using AI on the box. We're using that to trigger snapshots in a cyber defense malware sort of mechanism, if you will. I'm curious to see or hear your take on the uptake of that and if you've had any experience with it or seen any customers sort of rave about it.
Yeah, I have and it's really quite, for me, I feel it's quite unique in the industry to have this proactive ransomware detection. There are a lot of systems and things out there that can do it after you've detected something, how to get back. There are a lot of recovery kind of tools out there, but things that can be more proactive are kind of few and far between. There are a few customers I've worked with where we've looked at putting this in and we've gone through, okay, how do we get it to learn? How do we understand what our data patterns are? How do we then tune this, that it works well for us and it really does. It's been a hot topic.
So many big businesses have been in the news are being hit and everybody's like, I don't want to be that next news story. Having this feature set and the fact that it's just another ticking the box on the. This is what it does, this is what ONTAP does. It's really useful.
Yeah. Hey, I want to take a quick moment to jump back in on everybody and just say if you're just now joining the stream, this is the On Air Snapshot Kickoff Show here at NetApp Insight 2025. We've got about 10 - 15 minutes until we're going to get started over with the official keynote. Jason and I are just hanging out talking about some of the latest industry trends and things that we think are really relevant, things that you're going to hear about this week at the show. Jason, there's a couple more things I wanted to go over with you. Block storage, we've talked about AI, we've talked about virtualization and the resurgence of sort of on-prem footprints in virtualization. Has that had an effect on driving adoption of block storage? I feel like we've seen this resurgence.
For me, 10- 15 years ago, if I never had to talk about a WWPN again, I'm NAS for life, right? At the same time, there are many, many people out there, whether it's tier one workloads, whether it's virtualization, that are using block storage. I feel like there's been more emphasis on it over the last couple of years. Have you seen that? If so, what do you attribute that to?
I think we had a real long period of virtualization and NFS being kind of the key workload there. What we've seen recently are massive improvements in protocols, networking, and block in general. Right. Especially if you look at things like NVMe over fabrics.
Yep.
NVMe over TCP, those sorts of things.
Right.
You've got a new way of talking to your storage, you've got new connectivity, you've got faster speeds, you've got lower latency, better tooling. When you kind of bring all those things together and then you look at the types of workloads, the tier one workloads, where you want that low latency, you want that the guaranteedness is the way I look at the Block.
Right.
Block has a lot more guarantees and a lot more stringent communication protocol there to allow you to certify your workloads. You want that. The NAS protocols are still great where they're needed, but they don't have those same levels of guarantees all the time. The protocols haven't necessarily evolved as much as some of the improvements in Block. Where we're seeing more regulatory compliance, we're seeing more things around cyber resilience and other things. You've got a lot better tool bag available in the Block space now. I think that's driving a lot of that resurgence.
Do you think there's an element of, we were trying to shoehorn in a lot of the value add that you get from a fiber channel or something like that into some of the NAS protocols? Whether it's so that you had like the SMIS provider that tried to do some things that were sort of similar to how Block would work, like you can see certain things. We had MPIO for iSCSI and things like that. We had certain things that were, if they were sort of Ethernet based or NAS based, we were trying to shove some of that old fiber channel technology or methodologies into them. I think you hit it on the head. I think the biggest, the biggest changer of the last few years has really been the adoption of NVMe, not only over fabrics, but over TCP as well.
We're seeing it adopted in the cloud, we're seeing VMware take it up very strongly. We're seeing all kinds of good adoption and we've been a supporter of it from day one. Just about any of our systems, you can now run NVMe over TCP on as well as fabrics. Yeah. Anything else you want to say about that from a storage perspective?
I just think it's a really interesting time. I mean, hardware is having this renaissance, this resurgence, and storage is kind of key. You talked about earlier, no one's going to need any less storage. That comes from even just from the consumer level. I just bought a 2 TB iPhone, right? I remember just not so long ago when I couldn't buy a 2 TB hard drive for my computer, right? We've got this explosion of data. It's a huge, huge, huge explosion of data all the way from consumer. If you think about it, the consumer industry drives the enterprise.
That was going to be my next question for you, was like, is the consumer industry driving all of this stuff in the enterprise?
I believe so. You know, I've got my iPhone that all gets backed up to iCloud, that all gets stored somewhere, right? It's the same with all the other services that I use, all the other things that I buy, even from the basics of email. There's lots of subscription services now. We have a wholly connected world. All of the consumer growth and the technology growth is driving the enterprise growth. As we add more AI into that mix as well, people are now, okay, what can we learn from this data? What patterns can we learn? How can we train? How can we do our inferencing? How can we make experiences better just through customizing those experiences, making it easier so you're now collecting more data, but more metadata. We've got this huge growth.
I know there's a lot of talk right now about the way that the markets are and a potential bubble and things. I think there's maybe some talk of that sort of thing. What I think is being missed in that narrative is no technology has exploded as a whole. Everybody has devices, multiple devices, and we're driving an entire. The consumer side has grown. It's driving the enterprises into much bigger space.
You said there's all this storage that's being added to devices and we're seeing 30, 60, 120, even 200+ TB flash drives now that are coming out. All of this stuff not only on the consumer side, but the enterprise side. I have this one of my—I'm going to give a little spoiler for an upcoming video. We do the predictions videos every year. Craziest ideas, right? What's going to happen in five years. One of mine that I think that's very realistically possible, really honed in on what you were just going over was that now that we do have all of this capability on iPhones, we have terabytes of storage in small devices, not to mention entire 24 drive uber density in some of these systems. I think we're going to get to a point where every operating system is shipped with an LLM.
I think that there's going to be this way that you interact with an operator, whether that's ONTAP, whether that's Microsoft Windows, whether that's Linux, whatever flavor you prefer. I think there's going to be this—whether it's iOS on your iPhone. I think there's going to be a level of having this sort of—we've seen some of the things with Apple Intelligence come to be. Still got a little ways to go, but I like where I kind of like the general direction it's headed. I think it sets the tone for this area where we could look into a future world five years from now where you just have the equivalent of a ChatGPT chatbot that knows everything about your system, can help you repair issues, can download updates for you as an agent on a schedule.
If you've heard of agentic AI, like these sorts of things I think are going to become native, built-in stuff that 10 years from now kids will just take for granted. It'll just be the thing that the way Windows works, the way iOS works, it'll just happen. I think we are going to start seeing that in the next couple of years. That's one of my big spoilers. If you watch my videos, that's one of the ones that was going to be going out in January.
I like the idea of talking about it in the context of insight because I think it's important for everybody to understand if you're an administrator and you're running systems, the way that server operating systems could be going, the way that storage system operating systems could be going, the way that virtualization platforms could be going, you could say the exact same thing for vCenter. It could very likely have its own. With the private AI foundation stuff that they've got going on now, you could very easily see them having a sort of native custom trained on everything vSphere and VMware where you can simply prompt it.
Yeah, we're seeing the start of that from the MCP servers and network integrations. If you look at OpenAI's ChatGPT right now, you can actually start to integrate it to your apps that you use, and it can start to learn about you through the various apps you use. I see this as the way it's coming. I think the way that we look at sort of AI and ML right now is going to change, and that shift and that focus will be, you know, I don't, I hate to do this, but, you know, I see more of the Jarvis style, you know, Iron Man, Jarvis style type. That's how you're going to utilize in future, right? Yeah, you'll have systems, you'll, you know, I want to do this, tell me about this. I think we're moving there.
I'm actually excited to see Siri be useful.
I'm really excited about the future and how this can help shape and grow people's lives. I've got a couple of things that I'm working on in my own life around how we can make virtual assistants and use this technology to empower people.
Absolutely. You can come at that from multiple different vectors, whether it's special needs, whether it's financial help or advice. There are all kinds of different ways that you could set up these things. I think that's what Agentic is going to bring to the table. We've all played with ChatGPT at this point. We've all played with Gemini, with the various tools that are out there on the consumer side of the world. I think there are even some companies that have taken the enterprise RAG pretty seriously and have tried to build something internal using all of their data and their historical stuff that they have. At the end of the day, I think it is really going to take, to me, I equate this to what we went through in the late 2000s with the BYOD movement, with people, executives, everybody bringing their devices.
2010 was one of the worst years ever if you were an IT admin.
You know why all of the lack of tooling around BYOD?
The iPad came out. Anybody that has been through that kind of stuff with the iPad, with the bod movement and all that kind of stuff. Jason, I'm getting word in my ear that we definitely have to cut over to the keynote at this point. Guys, thank you so much for joining us here on the kickoff show. Don't go anywhere after the live stream of the keynote. For all of you watching at home, we will be back with an action-packed hour and a half to two hours of content for you guys. Interviews, special kind of stuff, all of that. Make sure you join us in the Discord. We just heard we got a couple extra minutes, so make sure you join us in the Discord. netappdiscord.com, that's the place to be. Subscribe to the YouTube channel. We do do On Air every Wednesday at 10:00 A.M.
Eastern, oh sorry, 10:00 A.M. Pacific. Make sure you join us over there, get subscribed, turn your notifications on, all of that good stuff. Jason, any final closing thoughts before we get out of here today? Anything else that's been top of mind for you while we got a couple of minutes before we send everybody over.
I'm just really interested to see what's coming out of the keynote next. I'm going to quickly rush over there, do some notes and take some things there because like we just said, data explosion, but it's metadata, it's how you understand your data, how you know where your data is, and data is driving all of this. This is how we feed the AIs and the LLMs and everything else. I'm really excited for this week. I'm really excited to see what the customers and partners here have to say as well.
Yeah. It's some of the stuff, like right now, we can't really talk about it too much because we don't want to spoil anything for everything you're about to see on the keynote. For me, going back to the AI stuff, there was one part of it that I didn't have time to talk to, but I think we'll get to it probably after the keynote a little bit more. We've got some stuff coming for you guys, but I want to save that for after the keynote so that you guys have a chance to see what's being announced, what's being launched. All of that good stuff, a lot of it went out on press releases this morning, so the enterprising ones of you could probably go find some of that stuff.
At the end of the day, what this comes down to for me is all the performance that you need, all the capacity that you could possibly need connected over any protocol you want to use. We've got it. Yeah, it's that simple. ONTAP, with all of the Flex and Snap solutions that we have, can back up all of your applications, can help you set up disaster recovery. It can do everything you want to, regardless of the workload. It's great that AI is a flashy new workload that everybody's really curious about, but keep it realistic when it comes to how you configure your storage to interact with these things. Parallelism matters. What protocols you're going to use, how are you going to connect it up? Are you going to have multiple instances of it? Is this going to be a Metro cluster setup?
All of the things that you've cared about for decades now when it comes to working with NetApp, this isn't some new thing that's going to change everything. The way that we run ONTAP and the way that we run all of the cabling and all of that sort of stuff. Yeah. Any other final thoughts before we get out of here?
I think the good thing here is it's building on an amazing foundation. Right. ONTAP has been great for a long, long time, and we're just continuing to build on and adapt to the current workloads and how things are going.
Gotcha. Jason, I'm going to let you get out of here. I'm going to say a couple of few more words. We're waiting on the final signal. Guys, apologies for the delay here. They're still setting up backstage, but we're hanging out here live while you guys wait for the keynote to get started. There was something else I wanted to recap really quickly. We've launched several systems and updates to systems over the last couple of years. One of those being the C- Series. We had the new A- Series kit launch. We've had StorageGRID updates, E- Series updates, all kinds of good stuff. Right. We've basically gone through the whole gamut over the last couple of years, refreshing everything we could. Some of it was delayed due to the pandemic and some of the supply chain issues that we had coming out of Southeast Asia in that area.
At the end of the day, we've gotten the entire—this is the first time arguably in a decade, I would say, going all the way back to 2016, 2017, that we have fully refreshed every platform up and down, given new processors, increased capacities, new NVMe-based drives, all of that sort of stuff. It's very exciting to be not only a storage admin, but a systems admin or even somebody that's taking advantage of those systems right now because all of that stuff matters. At the end of the day, the underpinnings, the foundation of what you want to run your applications and your workloads on top of, it absolutely matters. I've seen the rise of NVMe technically take over as far as a sort of disk protocol of how it does that. We've seen QAT introduced, technologies from Intel to increase efficiencies or offload.
The more that we offload from the CPU, it gets interesting because on the other side of the spectrum, I see these SoCs coming up where they're trying to cram everything into the die. Right. It's this weird trend that we're seeing. There's offload happening, but there's consolidation into SoCs happening. I know you've got a little bit of background in chip stuff. Curious on your take of where do you see things going. Is it more going to be offloaded or is it more going to be SoC?
Do you know what? It's a really interesting question. You've got this kind of, you have these layers of generalization. We can do generalized things with CPUs, we can do generalized things with GPUs, but as you learn more and more, then you get into specialization. That's where you get like DPUs and other things. What we do is where we can learn and make things more specialized and therefore you can make them cheaper. That's where you break out when you need some more general purpose things. That's where you can stick to your general purpose, do it on your CPUs or whatever else. The advancements inside of silicon technology and chiplet design and the way that you can package things together, I think we're still going to see a lot more of that so that we are going to see more efficient die and more efficient silicon production.
You also get much more efficient interconnects and responses. I think realistically you have to give it to Apple. They've led the way in a lot of this. Really? I think so, yeah. I think they've led the way in a lot of the design around these things, like how you can get your CPU, your GPU, your AI. Yeah, it's really great.
Yeah. We're getting the official word. Make sure you stick around. After the keynote we're going to be bringing it right back here to the Expo. A special lineup of guests, some surprise reveals, all kinds of good stuff. We got an action packed show for you. Mario, take it away.
Hey folks, Mario Armstrong here, back with you again for more pre-show action and excitement. I want to give a special shout out to our online audience. Can we give them a round of applause for watching online? Yes, they're watching, they're tuned in and they just came over from the on-air broadcast hosted by our good friend Nick Howell. Now they're joining us as we count down the minutes to Showtime here at NetApp Insight. Welcome to the party. So glad that you're here with us. We're just a few minutes away from our very first session and while you're finding your seats and getting comfortable, let me tell you about one of the most important activities and impressive things you can experience at Insight 2025. I'm talking about the festival grounds, you can't miss it. That's Insight's version of the Expo floor and it's awesome.
They have everything that you need. You'll meet with the experts, demo products, and you can see firsthand how NetApp is innovating across industries every single day. You can also catch theater sessions like Camp Cloud AI Edition, or try out the fun and wild activations like the NFL Challenge and the Aston Martin F1 experience. Of course, you can explore all the cool stuff that all of our sponsors have brought to the table this year. All of this is just a quick walk right from here, from the Boat Ballroom to there. Just follow the signs. You can't miss it. It'll happen and open up right after this session wraps. There's one more thing that I needed to talk to you about. Damn, I can't remember what it was. Festival grounds. I covered that. Now, audience interview. We did those. We did that already.
It sounds like you could use a little assistance.
Ah, that's Ida, everybody. Everyone, this is NetApp's Intelligent Data Assistant, AKA Ida. Ida, say hello to the Insight audience.
Hello to the Insight audience.
You said exactly what I told you to say. Thank you, that's very nice. Yes, I could use a little assistance. I can't remember what else I was supposed to tell these good folks.
Of course, Mario, you were going to tell them about the big celebrations lined up this week. You were also going to say it's a bout to be party time, NetApp style.
It is about to be party style. In NetApp style. Yes, that's right. Thank you for reminding me. It all starts tonight when you bring your bags to the festival grounds at 5:00 P.M. for the opening act. Let's get it started. That's our big welcome party. Don't miss it. Food, drinks, lots of opportunities to network with all the beautiful humans that you have in this room right now. Tomorrow night is an even bigger bash, the encore. Let's celebrate at Allegiant Stadium. There will be live music, awesome eats, drinks, dancing, and some other really cool surprises. You're not going to want to miss that. More official info to come tomorrow, but I suggest you add it to your agenda because you don't want to miss it. Ida, any chance we'll see you at any of these parties?
Mario, I am an AI and therefore I am incapable of eating, drinking, and dancing, but as a gesture of my excitement. About these festivities, I can produce a pleasing tone.
Oh, okay, d o it.
Pleasing tone. Activating in three, two, one.
I like that. It was very calming and pleasing. Thanks, Ida. Sorry you can't hang out with DJ Graffiti and I there, but we appreciate it.
Folks, g ive a round of applause for Ida. Thanks for hanging in there through those announcements. That's all the housekeeping that I have for you today. This house is heating up and it's time to blow the roof off this place right here, right now. Because Insight 2025, it's on.
It starts here. This is the place where trust is earned, where innovation thrives. This is the moment where data fuels possibility. The future we're building for you, the future we're building with you, our commitment to making data the foundation of transformation. This is the place. It all comes together here, where our future begins. Welcome in. Welcome to Insight 2025.
Please welcome our host, Mario Armstrong .
Yes.
Let's do this. Welcome, welcome, welcome, welcome, welcome, welcome, everyone. So glad that you're here for Insight 2025, or as I like to think of it, the house that innovation built. Because for more than 30 years, you all, NetApp, you have been doing just that, building, creating, evolving with you. Today, that innovation, that intelligence, that partnership, it all comes together right here under this one roof. The doors are open, the lights are on, the house is full. We are so glad that you are here. Now, since this is something that we've been building together, obviously not my house. It's your house. It's our house. When I say, whose house? I want you to say our house. Whose house? Whose house? That's right, NetApp, bring it. Welcome home and welcome to Insight 2025.
Before we go any further, I have an important message that I have to bring to your attention. It's going to be all up on the screen for you to read, but I can sum it up for you. Please remember that some of what you're about to see on the headline or stage is really vision, content. In other words, it's for informational purposes only. The details are subject to change. Now let's get rolling. You know, every year I get so inspired by all the great people that I meet here. Some of you have been DMing me, some of you have been hitting me up on LinkedIn and Instagram. I can't wait to see you and hug you and take some selfies and get to learn more about what you're doing.
This year, I decided that I wanted to take that inspiration and try something a little different. I thought about all the great stuff that's coming up this week, all the things that you're going to be able to experience and take part in, and I worked it into a little poem that I wrote specifically for you all and for the occasion. Ah, yes. I need my poem jacket. That. Thank you. Little poetry jacket there.
Thank you so much.
Let's set the stage, shall we? Yeah, that feels right. Oh, wait, there's one thing that's missing.
I have to set the vibe. Let's set the vibe with some LED shoes.
There we go. Let's do that. Right now I feel like, o kay, now I feel like I'm part of NetApp.
I'm here and I'm ready to go. All right, DJ Graffiti, you got something to back me up?
Here, I got you.
Thank you, sir.
NetApp's the engine that keeps us on track. Cyber resilience guarding each stack. From on-prem rooms to hyperscale skies, we move data that helps you rise. If you didn't know by now, we're NetApp. We've won the industry prize. We're number one in the magic quadrant race. Customer's choice. We set the pace. Every workload finds its place. Protected, efficient, a seamless space. My name is Mario Armstrong. You've seen me before at past Insights, NetApp shows, and plenty more. Maybe you caught me on the Today Show feed or online, where I break tech down at speed. This is the place where ideas and data grow. We'll hear from George Kurian, Syam Nair, and Gabie Boko. That's just the beginning. Trust my word. We've got featured sessions that must be heard. Hands-on lab certifications too. Camp Cloud is calling. It's built just for you. Wait, we're not done.
The festival grounds has all the fun. It's huge. It's buzzing. A sight to see. Innovation and heart energy in full harmony. Rumor has it, I'll give you a little hint. There's a special unveiling coming. Don't miss that event. Quick shout out to everyone watching online. You're part of this energy too. You're right on time. When this session wraps, don't drift away, because Nick Howell's post-game show is ready to play. He's got recaps, guests, and more in store. Click that link and stay for the encore. This isn't just power. It's purpose in motion. Secure as a fortress. Fluid as an ocean. Let's ignite this room and find our drive. This is NetApp Insight 2024 25. Now, this house is heating up fast, and the moment we waited for is here at last. He's leading the charge and he's setting the tone.
Rise up and make some noise, everyone, for NetApp's own CEO, George Kurian.
Good morning. Welcome to Insight. Mario, thank you for that awesome set of lyrics. Super excited to have you here. We have an exciting agenda for the next three days to show you how NetApp can help you unlock the full potential of your data. Data comes from the Latin word datum, meaning a single fact, an observation. The truth. The truth that can unite us at a time of so much division. Data is not an abstract concept. It's personal. The first picture a mother has of her child is data. Data is the foundation of knowledge, because data, when placed alongside other data, can be synthesized into knowledge. It is precisely in pursuit of knowledge that humankind has recorded data for over 35,000 years to share our stories, to bring forward traditions, and to help us understand ourselves and each other better.
We have built ever more powerful tools to analyze that data to create knowledge. Today, we stand at a moment with the most powerful tools we've ever had. A moment of extraordinary promise if we can unlock the full potential of data. For NetApp, this has been a 30 year journey of innovation. We have worked with organizations like yourselves on data problems for many years. We created the first networked file system in the world to enable work groups to share data and collaborate on complex projects. It was the first disaggregation of storage from computing that the world had ever seen. We created the ability to consolidate data across multiple applications with the first unified data storage system.
When all of you decided that you wanted to leverage the innovation and services of the public clouds, we created a hybrid cloud data fabric that allowed you to seamlessly integrate all those landscapes. By doing so, and working with our partners, we made ONTAP, the world's most widely deployed operating system. The most advanced, the most secure, the most proven and trusted operating system in the world. Period. We have also worked hard to help you meet big moments. In doing so, we have earned your trust. Thank you for it. Like when we worked with European Space Agency in the mapping of the Milky Way and the discovery of 1 billion new stars to help mankind go further in our imagination and understanding than ever before. We worked with the Lawrence Livermore National Ignition Facility to achieve nuclear fusion ignition, promising limitless clean energy for everyone.
Of course, our work with DreamWorks, the animated movie production studio, to create Shrek, the winner of the first Oscar for Best Animated Motion Picture, bringing joy to families around the world. In each of these cases, they were enormously large data sets that needed to be managed in extraordinarily short time windows with no chance of mistakes, and working with you, we proved that we were more than ready for that challenge. One of the important things that we've done over our 30-year history is to co-innovate with the industry's leaders. What this should give you confidence in is that it is not just NetApp innovating to build capabilities for you, but we're doing that in partnership with so many of the world's leaders, and this gives you a multiple return on investment in NetApp. Don't take that from me. Let's roll the video.
Hello to everyone at NetApp Insight 2025. Our partnership with NetApp goes back 12 years now, and together we've helped countless customers use their data to innovate and solve really complex storage challenges. This is more than just a partnership. Our two companies actually co-invent solutions together. Take for example Amazon FSx for NetApp ONTAP. It's the first and only cloud storage service that gives customers a fully featured, fully managed ONTAP experience in the cloud. What's great about this is that customers can migrate their workloads to AWS without having to change anything about their application or how they manage their data. When you combine that with Amazon's elastic VMware service, you get the fastest path to migrate VMware workloads quickly to AWS. That's just the beginning.
These solutions let you connect your enterprise.
Data with leading AI models and AWS AI services really allows you to drive real innovation from your customer data. Look, the bottom line is this. When you need solutions that can both sustain and innovate for your business, AWS and NetApp help you scale all without constraints. That is what happens when you truly co-invent together. Thanks everyone and enjoy NetApp Insight.
Thank you, Matt. Thank you to the AWS team for the partnership and for the commitment of co-innovating. Together, you will see awesome innovations in our show floors of how NetApp and AWS are co-innovating to make better solutions for you. Here we are at the start of the era of data-enabled intelligence. Today, we're going to show you how you can unlock the full power of your data as knowledge to transform your business and how you and your organizations will become leaders in the era of data-enabled intelligence. McKinsey and so many others have said that artificial intelligence can enable you to unlock massive amounts of productivity and growth for your organizations. The key to doing so is to unlock the knowledge that's hidden in the data that's spread across your organization in so many technological silos as well as organizational silos.
Now, with multimodal and large language models, you can analyze all of your enterprise's data, especially the 80% - 85% of it that's unstructured data. This is broadly called inferencing and is where the real value of AI lies. While the power of data to enable success with AI is irrefutable, it is also the most complex issue to deal with. The challenge that you have to deal with is that the raw data generated from your enterprise applications is not AI-ready. It is very hard to transform the raw data to AI-ready data securely, efficiently, and simply. In many discussions with you, as well as surveys done by companies like IDC, 80% of the time— that's right, 80% of the time in an AI project is spent in data wrangling. Don't worry, that's what we're going to help you with.
Let's talk about what it takes to do those steps to take raw data and make it AI-ready. It's a set of steps called a data pipeline. The first step is to organize your data. You need to discover it, classify it, and align it with the communities that form the AI projects in your enterprise. You need to deal with the fact that AI has enormous gravity and is constantly being generated. Second, you need to implement data governance mechanisms: security, access controls, privacy, maintaining the lineage of data. You have it for your raw data. You need to carry it forward to your AI-ready data as you move it through the pipeline. Third, you need to leverage metadata, which is data about data. As data grows, you increasingly rely on metadata to accomplish the actions that I've described.
This is called Varian's Law, after Hal Varian of Berkeley, who said that when data becomes huge, information of the data about the data becomes essential to managing it well. You will need to evolve metadata from passive to active. Passive metadata is static descriptions of data like file name and author. Active metadata is dynamic, having information that must be harvested, enriched with context, and synchronized as its data changes. Once you've accomplished those steps, you are now ready to process and transform your data. You need to ensure its efficient transformation from documents, videos, images, audios, into what's called vector embeddings so that it can be processed by an AI model. A vector embedding is a numerical representation of data that captures its semantic meaning and relationships with other data in a multidimensional space.
By converting unstructured data like video and audio into these vectorized embeddings, you can actually get AI models to understand and interpret them so that you can enable applications like semantic search or recommendation engines and so on. This is where raw data becomes AI-ready data. Now that you've done these four steps, rinse and repeat, you gotta do it every time the data changes across the lifecycle of the business process and the data in your organizations. Let's look at how that gets done today. Consider a large pharmaceutical company looking to connect clinical data and research data to accelerate drug discovery. This is true for any organization in any industry for that matter. You copy data of different types showing up on the left into an application like an annotation application, so that you can create the rich, enriched metadata for it.
You copy it into a data warehouse or a data lake, and then you copy it into a vector database and so on. On average, there are six copies of data. That's right, six copies of data for each pass through a data pipeline. That level of complexity causes you to have expensive, inefficient, brittle, that means prone to mistakes, insecure data pipelines. You might ask, hey, so why does it not work? It's because the approach that you're taking was set up for another class of applications, traditional big data analytic applications, not set up for the needs of unstructured data and AI applications. Let's talk about that. It's centralized and batch mode, saying that inferencing happens only in one place. It works against data gravity.
You're copying enormous volumes of data as we talked about, complex, inexpensive, inefficient, and is set up for structured data where the schema was well defined, where the rate of change was low and the amount of transformation required was limited. That's not the case with unstructured data or AI capabilities. Finally, as we described, everything that you've done for raw data, your enterprise data access control, security, governance, lineage gets blown up as you move it through this giant, complex, inefficient data pipeline. Now let's talk about what's our vision for how to solve that problem. You need a unified data platform, an architecture that supports enterprise data operations on traditional data formats, files, blocks and objects, metadata operations on canonical data formats which provide structure over unstructured data and a 4G way to harmonize operations across structured, semi-structured and unstructured data types.
Finally, Gen AI and AI agent driven retrieval operations on embeddings and tokenized data formats. While the representations of data in many formats are set up for the different applications accessing that data, the source data, meaning the raw data underpinning those representations and retrieval methods, remains the same. This makes it efficient, secure and simple to build an AI data pipeline. Like the centralized approach, the unified data platform provides you with a single unified data model that brings together all the disparate sources of data into a common taxonomy. Unlike centralized processing, it consists of distributed data operations for transforming and enriching your data wherever it is created and in place.
A federated active metadata fabric that presents a coherent, unified view of your entire data estate across the hybrid cloud, wherever your data is, all supported by a foundation of high performance, secure, compliant unified data storage, you can massively simplify all your operations. You might say, awesome, George, that sounds great. How do you get me there? Let's talk about how we get you there. Three ways. First, leveraging technology trends that enable us to have new opportunities to solve the problem. Second, innovating with new capabilities that we will show you today. Third, building upon proven foundations that we've already made available to you. Let's start with technology trends.
Over the past few years, we have been able to build composable system architectures leveraging memory-speed connectivity across network fabrics that allows you to assemble disaggregated pools of processing, memory, and storage as you need it, when you need it. This capability allows you to now leverage disaggregation so that you can integrate computation close to data, something that we call near data compute and what we referred to last year when we said rather than bring your data to AI, we're going to bring AI to your data. Stay tuned. The third is the development of standardized canonical data representations such as file formats like Parquet and Avro, open table formats like Iceberg tables, and standardized processing engines like Spark and Trino. Now let's talk about how we're innovating our strategy for the NetApp data platform.
Three unique capabilities that we are bringing to market and enabling for you in a way that no one else is. First, high-performance, scalable unified data storage. Today, we are introducing a new class of data infrastructure system. We move beyond the traditional definition of a data storage system with a composable, disaggregated architecture that unifies data storage and retrieval with data transformation, processing, and enrichment in one logical system. We combine data access nodes like our state-of-the-art all-flash controllers with data processing nodes like GPUs, with a shared pool of storage connected with the high-speed network fabric. We do so within the trust boundary of data access so that all of your controls, security, and permissions are preserved as you transform your data. The unified storage foundation is infused with ONTAP's proven intelligence like unified protocol security, multi-tenancy integration, caching movers, change detection, and synchronization.
All of ONTAP's awesome goodness is available as part of the best foundation for a data platform in the industry. Period. Second, we are introducing a metadata engine. This is a smart software engine that's integrated with the ONTAP OS, and it helps you build, like I said, an active catalog of your data together with custom annotations and tagging while leaving the data in place. By integrating our metadata engine with ONTAP's change detection and synchronization engine, we keep the metadata synchronized as the data changes. Active metadata, yes, it's true that this is particularly useful for AI applications, but every single NetApp customer is able to use our metadata capabilities to unify their data in ways that were never possible, to align their data with the data communities that use that data, and to manage it efficiently across its life cycle.
Third, leveraging the metadata engine, we've developed a suite of software capabilities that enable you to discover and classify your data, implement guardrails, process and transform your raw data to AI-ready data in place alongside your raw data and across the hybrid cloud. We support data and metadata associated vector embeddings and tokens for unstructured data today, and we will soon enable open table formats for structured and semi-structured data and semantic relationships co-located and co-addressed for all of your data. We do that with zero copies and entirely with open standards and everywhere you have your data by bringing AI to your data. Now that we've built the three awesome building blocks for a data platform, we are going to unify that across all of the places where you have your data to give you a single coherent view of your entire data landscape.
We will create an active metadata fabric to enable a global namespace aligned with your data communities. Federated controls, you know, as agentic AI becomes more prevalent, we envision a control plane having a set of intelligent data agents that can reason against your data and enrich metadata. These data agents provide data access, context, and summarization of your data and will interact with your application and workflow agents through standard protocols like A2A and MCP. Underpinning all of this control plane is a knowledge graph which helps you understand how your data entities are related and what those relationships mean. That's right, how your data entities are related and what those relationships mean. What we are able to do then is to do for you and your organization's data what Google did for the Internet's data.
Harmonize it, unify it, and give you the ability to understand your data and transform it into knowledge to power your business. You see, the NetApp data platform enables you to go from raw data to AI-ready data simply, without copies and without mistakes. We enable a data fabric and a complementary metadata fabric. We enable unified data storage and a unified data platform and a unified data model for your business so that you can find your data, unify, organize, and probably govern it, while transforming it into the applications that you want to use for your business. No one else is able to deliver that. Let's take a picture of what that looks like, because we're not just doing it one time. We're building on trusted foundations.
You see, what NetApp's history has been is not throwing away stuff that you bought from us and saying, here's a clean sheet, new idea. We are building on a trusted foundation. The best solutions for data infrastructure modernization. Number one in all-flash storage. Thank you. Number one operating system, giving you the efficiency of the world's biggest hyperscalers for your data center. We have the most secure storage in the world, the only operating system that is certified to cover all the elements of the NIST data lifecycle to detect, protect, and recover from cyber threats. We are continuing to advance the horizons in that area. We are the only company in the world that can build you a truly hybrid cloud data fabric across all the places that you have your data.
With all of that value, everything that we have sold you before, we are now giving you the number one platform for the era of data intelligence. To help you use the advanced AI capabilities that exist in the market with your data now. Like all other journeys before, the best journeys are never accomplished alone. They're always accomplished in partnership with industry leaders. Today we've been fortunate to work with some of the industry's leading icons who are defining what AI means. I recently had a conversation with our longtime partner, Jensen Huang at NVIDIA, and I'd like to share a snippet of that with you. Let's take a look.
Today is such a big day. I'm so happy to be at NetApp Insight, and it's a huge day because we're announcing a revolutionary product at a revolutionary time.
Exactly.
You know, this is. We've been working together now for so long. DGX-1 was the world's first AI supercomputer.
It was connected to NetApp.
We use NetApp all over our company.
I love the fact that you are multi-cloud and you're a hybrid cloud.
With one basic platform, y ou could manage your storage. You can m anage all of your company's data from structured to object to files, multimedia, multimodal data. Today's world is complicated. That's exactly, insanely complicated.
The vast majority of our data i s actually multimedia and multimodal.
That has got to be 90% o f NVIDIA's data is unstructured. Now with NetApp, we can manage that coherently across all of the clouds and hybrid cloud. The thing that's really, really exciting is that today we're announcing a dream come true for both of us.
What we are announcing today, bringing our two companies' technologies together, is exactly that. How do you bring accelerated computing technologies and your software stack for AI data platform together with NetApp technologies? We're really excited. It's groundbreaking. It allows us to bring our mission of helping our clients get knowledge from the vast amount of data that we store for them.
AI is reinventing computing as we know it. Today we're reinventing storage for AI for the very first time.
This is such a big deal. I'm excited that it's reinventing both of our companies. Thank you. As you know, this is the biggest industrial revolution the world's ever seen. This is a complete reset of the computer industry, and it's created a huge opportunity for me. I think it's going to create a huge opportunity for you, and this partnership has been a great joy for me.
Thank you so much. Thanks for having us. Our clients will be excited to see much more co-innovation from us going forward. Thank you, Jensen.
Congratulations on the launch.
Thank you.
Thank you.
Thank you very much.
Thank you, Jensen. To all of the NVIDIA team, I know many of you are here. We appreciate the partnership and know that together we are going to deliver the best solutions for this era of data-enabled intelligence. Now I want to bring up on stage our new leader of product development, who brings deep experience in product innovation, a strong heritage in dealing with exactly these data challenges, and a commitment to driving innovative solutions for you, our clients. We're so excited to have him at NetApp. I want to welcome our new Chief Product Officer, Syam Nair.
Thank you, George.
Thank you, Syam. Welcome to NetApp and also to your first Insight. Super excited to have you.
I'm super excited to be here with all of you. Thank you, George, for the opportunity. We have heard this from our customers, industry analysts, from partners. Having AI-ready data is a challenge. Having the right data infrastructure is a challenge. We have solutions. I'm excited to be here with you, Syam.
You've worked at leading organizations like Salesforce, Zscaler, and Microsoft, and you've dealt with these data challenges before. Tell us a little bit about your history dealing with them and what lessons you've learned from it.
We're happy to. I think this is the same challenge that we have seen across industries and every enterprise's face. If you talk about Salesforce building a data cloud, bringing together structured and unstructured data, harmonizing, modeling, creating those personalized outcomes, those are intelligent outcomes. Today it's a foundation for agent force, Zscaler, not a different story. Creating a data fabric, a single source of truth for security data for proactive secure outcomes, secure communications, proactive security. These are monumental engineering challenges that these teams had to overcome. We talk about this today. You talked about in the complex pipeline. We have messy pipelines out there. Data gets copied, moved across, updates, lag, schemas, drift, lineage is lost. Intelligent outcomes cannot be driven out of stale data. Talk about silos. Every department, every team, every application, they have the data silos. Access is hard.
You don't have context around this data. How do you build intelligent outcomes when data is not connected and complete? George, you talked about the infrastructure. The infrastructure of today is not built for production AI. It's built for pilots. Scale, reliability, resilience, security controls. These are missing. These need an intelligent data infrastructure. George, you know this. I just don't need to repeat. We have been hard at work building this. We have this and I would be excited to show this to the team.
Over to you, Syam. We got some awesome stuff that Syam will show you. Take it away.
Thank you, George. Thank you. First, thank you for 30+ years of partnership. It's been an honor driving the backbone of the digital era with you. Now we get to supercharge it for the AI era. You all know this. This is not new. Hybrid cloud is the norm today. Unstructured data is exploding. It's a big challenge and it's also a big, big opportunity for every one of us here. As storage and data teams, you make AI real. You are the ones who can cater to models, reasoning, inference, unlock value of data. The AI for your organizations, your clients, and your customers. Our commitment here at NetApp is that promise of intelligent data infrastructure.
A unified data platform bringing together files, blocks, and objects, all connected to AI across major clouds and on-premises, industry-leading data services including AI-powered protection, built-in plus a few other new ones that we'll talk about today. This is intelligence providing an intelligent experience, can deploy it as a service. A unified control plane, full support for open source ecosystem as well as a broad system of partners. You know the data platform from NetApp simplifies data readiness. It cuts operational cost, infrastructure cost. Delivers the real fuel for AI, which is data. A unified enterprise-grade foundation for the promise of intelligent data infrastructure. Look, from your CEOs to data scientists, everyone becomes your champion. Why? Because you are the ones, you are the only ones with AI-ready data. You can turn AI from pilots to production data to business outcomes.
Our goal, our simple goal, is to help you on four fronts. Number one, data infrastructure, modernize it, cut costs, boost efficiency. Number two, AI. Everybody talks about AI, you need to drive the value for AI, faster outcomes. Number three, cloud transformation. Cloud is part of everybody's journey. Seamlessly adopt the cloud on your terms. Flexibility, scalability, and finally, top of mind for all of us, cyber resilience built in, protect your data from ever-growing threats. Let's start with data infrastructure modernization. This is a critical imperative for every enterprise mission. Critical AI often demand enterprise-grade resilience and performance. One would say we have always been doing this. In fact, NetApp platform is known for that. AI does raise the bar. Exascale demands and performance and capacity where you need to linearly scale, independently scale when you talk to others.
Some people say let's build new stacks or bolt on OEM software. How many of you have tried new stacks and bolt on OEM software without the foundational fundamental capabilities of data management and made it work? It doesn't. It doesn't work. That's why we built a disaggregated architecture from the ground up built on the ONTAP platform to meet the new demands, with enterprise resilience, security, and performance built in. All the ONTAP capabilities, all the ONTAP capabilities and data management capabilities that you have loved and relied on. It's purpose built on the ONTAP platform. Are you ready to see what we got here? Let's just do this. There you go. It's my pleasure to introduce NetApp AFX. This is a disaggregated storage platform built for the AI-ready enterprise.
As I said, it includes all the power of ONTAP but with massive scale and the ability to scale performance and capacity independently. You know, one would ask how much scale this is ONTAP for the exabyte era. Guess what? Our partners at NVIDIA took a look at this, certified it for the SuperPOD. You know I would like to call it AI factory ready out of the gate. I like the sound of it so I want to say it again. It is AI factory with storage right out of the gate. Thank you. Let me explain why we talk about our disaggregated architecture and talk about this as the different one, better, best in the world. Inside NetApp we call every disaggregated architecture, this one as the third generation one. It's a quick trivia. I don't know how many of you know this.
We were the first to actually split the controller from storage. One would say the OG of disaggregated architecture. We started it, but now we are reinventing it for the AI era. First gen distributed storage used shared nothing designs. Flash was expensive. It changed the game. The model became expensive to scale. Now we have the second generation, one where compute and storage was disaggregated, but they lack a proven track record and a robust data management like ONTAP. They cannot be used for modern data center modernization. NetApp reinvented it. Our disaggregated storage built for exabyte scale. Best in class performance, best in class resiliency ONTAP. Built ONTAP data management capabilities, security, multi-tenancy. Let's take an example. A retail giant often runs multiple AI models on massive data sets. FlexClone plus AFX create instant zero-copy data set clones. AI development and AI deployment is much faster. Why?
Because AFX helps scale performance and capacity independently, optimize cost and efficiency. This enables you to have faster time to value with AI-ready data. I like to call this as disaggregated to accelerate scale. What you need, when you need it, where you need it. Would you like to see NetApp AFX up close? I think a lot of you would like to and even hug it, right? There's a live unveiling by Sandeep Singh, our GM, in the festival grounds. I'm told you'll actually hug it. This is a perfect photo moment for all you OGs out there. The GOAT of disaggregated architecture is here today and you can use it. Now let's turn our attention to AI. We talked about AI. We talked about AI pipelines. George showed how cluttered the pipelines are, how they are a bottleneck. Storage needs to be AI-ready.
We are fixing it, fixing it with an accelerated AI data pipeline. What is an accelerated AI pipeline? It's an advanced engine. George talked about the metadata engineering built into the NetApp AI data platform. Runs on NetApp AFX to make all your data, all of your data, AI-ready out of the box and we built it. Are you ready to hear more about this? I'm super happy to talk about the NetApp AI Data Engine. The NetApp AI Data Engine is simple, affordable, secure. You get one global, up-to-date view of your entire NetApp data estate. Fast search and curation connects to any model, any AI tool, on premises or cloud. Ingest, prepare, serve your AI workloads in place, no ETL, no harmonization, and no drama. This is semantics out of the box. Metadata engine, vectorization, guardrails, all built into the platform.
Auto-detect changes, synchronize in place, cut all the redundant copies. The guardrails prevent you, protect you, provide the security, privacy, and governance. Let's take an example. A media and entertainment company aide helps unify assets, scripts, and rights and analytics all in place. As I said, zero copy, no ETL. The built-in semantics auto tag and create context for rights as well as content, enabling faster promos, clearance, localization. You know this is the power of AFX for disaggregated. It is a turnkey solution, not just for today but future-proof. With NetApp aide, your data is AI-ready. As I said, no ETL, harmonization, complex modeling, cutting costs and complexity, but at the same time delivering faster, more efficient outcomes. Aide is built into the NetApp data platform. This is intelligence everywhere, intelligence built in, not bolted on.
Any app, any workload, any data, anywhere. NetApp AI Data Engine and NetApp Data Platform is available. One cohesive platform enabling the future of intelligent data infrastructure. To see more about the NetApp AI Data Engine and how it works and get into the details, please head to the featured session where we'll share more details as well as the show floor. Now, we covered AFX. AIDE talked about modernizing data centers, talked about how you drive AI outcomes. Cloud is an important part of this journey. It's a journey where for every workload, especially AI, it's how modern enterprises get the flexibility, scalability, and cost efficiency. NetApp is the only one with the full platform, full data infrastructure on every major cloud and on premises, providing you, the customer, the choice. It's one experience, one control plane, but anywhere.
We have been jointly innovating with our hyperscaler cloud partners for a while. Let's talk about one joint innovation here. This is from Google. We are happy to announce that Google Cloud NetApp Volumes now support block storage capabilities, unifying NAS and SAN protocols. This new capability enables Google customers with today's most important data workloads, virtualized environments, self-managed databases, or new AI innovation. We have always had such a great relationship with Google, and this joint innovation is just a testament to that. Let's take a listen to how we innovate together.
It's great to join you at NetApp Insight. Google Cloud and NetApp have a long-standing partnership focused on helping companies solve their toughest data challenges in the AI era. Earlier this year at Google Cloud Next, we announced that you can build AI agents directly on your enterprise data stored in Google Cloud NetApp Volumes. Google Cloud NetApp Volumes allows you to scale capacity with more flexibility to protect your data and to optimize your costs with auto tiering. Today I'm thrilled to announce we're making this even more powerful by integrating Google Cloud NetApp Volumes as a native data source within Gemini Enterprise, which is now available in preview. This reduces AI deployment time from months to just minutes. All of this innovation is helping customers like GSK accelerate their clinical research and pathology platform up to three times faster.
ISVs such as OpenText and Zebra are also leveraging Google Cloud NetApp Volumes to better serve their customers. We're also increasing customer migrations to Google Cloud by providing the same capabilities they have on their premise. Today we're opening up new block storage functionality on Google Cloud NetApp Volumes for select customers and to all customers in the coming months. Together with NetApp we're helping you unlock the full power of your data and to innovate faster with AI. Thank you so very much for the partnership.
Thank you, Thomas and the entire Google Cloud team. You know that video not only brought excitement in the audience, but a smile. I'm happy for that. Finally, something super powerful that shows the true power of our platform, the ONTAP platform, giving you the flexibility regarding choices of the cloud. As you might know, Amazon FSx for NetApp ONTAP in Amazon Web Services already supports FlexCache and SnapMirror capabilities. Now, Google Cloud NetApp Volumes and Azure NetApp Files also do support that natively. What does this enable? You link your data across on-prem and every cloud. A global namespace on the unified data plane. The vision that George talked about. It's a reality spanning all major clouds and on-premises. The true promise of the Unified Data Foundation that we've been talking about, FlexCache indeed creates something really amazing. Take a look at the screen. This is FlexCache in action.
One data set, four places: Amazon FSx for NetApp ONTAP, Google Cloud NetApp Volumes, Azure NetApp Files, and NetApp AFF. I drop a file in any one of them, within seconds it shows up everywhere, regardless of the size. We are not copying the data unless it's needed. It's smart caching, enabling read-write so teams can work independently locally but have the global consistency. There are no external appliances here, there's no gateways, no synthetic file system. It's native in each cloud service, built into ONTAP, and there's no additional cost. Who else can do this? No one else is even close to having a platform like this across on-prem and the largest clouds. Now imagine an EDA workload. They're very known for using design teams, bursting sims in Amazon Web Services, maybe managing storage in Azure, using AI optimization in Google Cloud.
You'd update a design once, instantly available everywhere. Fewer copies or no copy, less friction. Facetape out true value to you from the platform. This is a bold claim, but I'll make it. NetApp ONTAP and our data platform is the only hyperscaler-neutral cloud. First intelligent data infrastructure. It is the only one. Now, talk about cyber resilience. It's a fourth core imperative. It's top of mind for all of us. More often than not, everybody cares about it one way or the other. AI raises exposure. Attackers use AI to attack your systems, models, memorize pipelines, ingest data, your most precious assets. That's data. It's often the target. Resilience is no longer a checkbox, it's a continuous posture. Built into the NetApp data platform is functionalities like autonomous ransomware detection. It's an ML anomaly detection with automatic restoration points, immutability, SnapLock, and object lock. Always on, always on.
Portable policies, policies that move with your data. Rapid, clean recovery. Look, others chase raw speed. We talk about speed and survivability together. Legacy stacks bolt on security. Security cannot be bolted on. Protective, intelligent data security and data protection is required by design on the platform. Let's talk about a few innovations in this area. One, NetApp Ransomware Resilience. It's a new service built into the platform. What does it do? From the NetApp console, you can easily protect every workload, monitor threats, including the ones raised by autonomous ransomware protection. Ransomware resilience has two great features. One, data breach discovery detects data exfiltration in real time. Stop it before the damage occurs. You know this, that the attackers often copy the data first and then encrypt it. What do we do? We spot anomalous access, not just ransomware. Alert you early and let you block the attack immediately.
Now, one would also need a second line of defense, which is an isolated recovery environment. Clean, fast, malware-free recovery and restoration. Analyze and remove malware in isolation, guided steps so that you can get your data back into production in near real time. Consider a financial services customer. We all know that they have sensitive data, customer data, transaction data. Data that is compliant across hybrid and multi-cloud. With NetApp Ransomware Resilience, you can protect it end to end irrespective of where the data is. Autonomous ransomware protection plus data breach discovery detect anomalies and trigger protection. SnapLock delivers tamper-proof snapshots. An isolated recovery environment enables clean, fast recovery when needed. Look, if your data is safe, your AI is safe. Resilience needs to travel with the data. It needs to be built into the platform. That's what we have with the NetApp data platform.
Summarizing, what we announced today is a first wave of a very fast-paced innovation roadmap. George showcased the vision. We are delivering on that vision with these products. A unified platform for the AI era. Built on ONTAP speed and intelligence with NetApp AFX and AI Data Engine, built-in security, and enterprise-grade SLAs, all on the platform. Your ability to run anywhere, on premises, all clouds, as a service. AI factories. It's integrated, it's yours. It is open ecosystem ready. It's a journey. Here is a journey that I would like to map for you. Think about unifying the data plane. Files, blocks, objects, leverage AI and the strength of making every piece of data AI ready. Scale agentic workloads with AFX, scale independently, leverage the built-in cyber resilience capabilities, enabling all of you to drive, deliver outcomes, AI outcomes faster than ever at lower cost.
See all of our innovations in the festival grounds. Attend our featured sessions. In closing, I want to say that with NetApp's intelligent data infrastructure, you, you alone have the superpower to deliver exceptional value. Not in the future. Today it's our opportunity to build the data supply chain for the agentic era as one team. You know, I want to thank George, our CEO, for the vision that brought us here. We reinvented unified storage. Today we unify the data itself with a shared data model, metadata fabric that turns raw data into knowledge irrespective of where it lives. This is the engine of intelligent data infrastructure and the promise that we have for you as our customers. Now, to tell you more about how intelligent data infrastructure delivers real outcomes, unleashes potentials for you and your organizations, let me welcome on stage our Chief Marketing Officer, Gaby Boko.
Thank you Syam.
It is so good to be here with all of you at Insight 2025 under the neon lights of Las Vegas. While it's incredibly hard for a CMO to follow a CEO and a CPO with amazing product launches, here I am. Because you've just heard George lay out a spectacular vision for our future. You've heard Syam ground that in his spectacular product launches. Guess what? Now you get to hear from me about intelligent data infrastructure. Building the product for us is only half of the story. The other half is making sure that we show up and we solve the right problems. That's my job. It's also all of NetApp's job to listen to you, to spot trends and patterns, and to bring them back to the company to ensure that we're solving the right problems that are shaped by your voices.
Our industry clearly doesn't need any more hype. I know that's funny coming from a CMO, but it's true. What it really needs is real results. Results that don't often come from just technology alone. They come from people. All of you, from the ingenuity and drive and creativity sitting right here in this room, and from your teams that are back home, who didn't get to come, who hopefully will come next year. This is impact at a human scale. At the end of the day at NetApp, we don't just believe that we move data, we actually believe that we create possibility. From mapping the stars to igniting fusion, to animating the stories that make us laugh and cry and dream, our technology has powered some of the most awe-inspiring events of human progress. That's where intelligent data infrastructure comes in.
That's where it shows up as the foundation that turns ideas into reality, turns transformation into possibility. When you walked into this room, I don't know if any of you noticed it, but there was this big door that opened. We believe that that door is an opening to the future of intelligent data infrastructure. Behind the experience that you're going to hear from all of us here at Insight this week and behind what you heard from George and Syam and what you're going to hear in all of the sessions is about three things. It's about our legacy, our platform, and about our innovation. Let's talk about those three things, but let's start with what they mean for you and for the future. You know, legacy sometimes to some people means old or outdated. To us here at NetApp, we believe that legacy means proven trust.
That's trust that's been built over 30 years of relentless innovation in data, trust that's earned by helping customers and partners, just like all of you, modernize, deliver, and lead in every era of innovation, every era of change. That trust is not passive. It is very clearly, very clearly active. It's not something that we take lightly because we know it's earned. We know that trust is earned with you. It's earned through continuous modernization, through bold choices, and through outcomes that matter to you and your teams and your customers. The proof of that trust, quite frankly, as you've already heard, is in our partnerships. From the earliest days of our cloud journey to the innovations that you and Microsoft are clearly delivering, Microsoft continues to trust NetApp to help our joint customers solve their most pressing challenges.
Hi, I'm Scott Guthrie, Executive Vice President of Cloud and AI at Microsoft. At Microsoft, we're proud to partner with NetApp to redefine what's possible in the cloud, especially in this new era of AI. Together, we're pushing the boundaries of innovation across cloud infrastructure. Nowhere is that more evident than in Azure NetApp Files. With recent investments in Azure NetApp Files, customers can now migrate mission-critical ONTAP workloads from on-prem to Azure, unlocking modernization, unifying data platforms, and accelerating innovation. Azure NetApp Files delivers unmatched scalability and reliability and is soon offering four times the performance gains compared to previous versions, making it a foundation for high-performance computing workloads like electronic design automation. Our own Azure hardware systems and infrastructure team uses NetApp Files at multi-petabyte scale to fast track chip designs like our new Azure Maya and Azure Cobalt processors.
Now with Azure NetApp Files integrated into Azure Discovery, we're enabling AgentIQ AI to transform R&D, bringing intelligent data infrastructure to life. With the recent integration of the Object REST endpoint, customers can now tap into the full portfolio of Azure data and AI services with their NetApp storage, whether in Azure NetApp Files, Cloud Volumes ONTAP, or on-prem. This unlocks intelligent data access at scale, bringing speed, simplicity, and real business impact. Thank you for being part of this incredible journey with us. The future of cloud and AI is here, and it's built on strong partnerships like ours.
Thank you, Scott. Thank you, Microsoft. Thank you to all of our hyperscalers. We don't take that trust for granted, and we don't take that trust for granted that we've earned with you. This is why legacy isn't just about where we've been. This is what gives you, our customers and our partners, confidence in where we're going. That's the perfect setup for a question: why NetApp? Maybe not just why NetApp, but why NetApp for you. For decades, storage was about one thing. It was about keeping data safe and available. It was a box, it was a system, it was a destination. Very clearly, it's no longer just those things. It's not just passive. We know that data drives real-time decisions, it trains AI models, it detects threats, and it fuels business outcomes across hybrid and multicloud environments.
We already have heard how, with the advent of AI and with all of the data that you have, scale and complexity have exploded. We know you need automation, visibility, policy, governance, and protection, AI readiness absolutely everywhere. That's intelligent data infrastructure. Intelligent data infrastructure isn't just about where the data lives. It's actually about how the data moves. It's how it's governed, it's how it's protected, and how it accelerates outcomes. Our platform unifies every data type. It uses the metadata fabric that you've already heard about to make that data intelligent. We are able to activate that at enterprise scale, not just store it. That's unified storage to unified data. That enables organizations like yours to manage any data for any application, anywhere. It's needed, optimized, secured, and protected by intelligence.
At a time when hype gets a little loud, NetApp is focused on delivering results—measurable, meaningful, and built on the foundation of trust we've just talked about. Behind every metric, like some of these that are going to pop up behind me, these are moments of progress, these are challenges that are overcome, these are ideas that have been realized. Step forward every metric of success. Here is a moment to continue to build trust from us to you, to deliver innovation and drive outcomes that are not just for today, but hopefully for every era of change. That's why NetApp. I hope that that's why NetApp for you.
When we connect all these dots that we've been talking about with trust and with our platform, what we're having a conversation here is that these 30 years of trust in our proven technology and intelligent data infrastructure, there's one thing that I hope you take away from that, and that's when data becomes intelligent, something bigger actually happens. It accelerates ideas, it amplifies people, and it sparks breakthroughs. The ultimate outcome for us is tapping into genius, the ingenuity of your teams, the intelligence of your data, and unleashing potential at scale. The ultimate outcome is removing the friction in between data innovation and turning complexity into clarity. That's NetApp's commitment to give you a foundation that you can trust, one that's more than just data infrastructure, one that's designed to lead you to the ultimate outcome.
Finding a spark.
To blaze a trail. Pushing the limit to achieve the impossible. Igniting brilliance to break through boundaries. When you tap into the full power of your data, you command the future you create. That's genius. It's what inspires us to make data infrastructure intelligent so you can work smarter, move faster, and think bigger. We're not interested in hype. We're focused on helping you bring your vision to life. Because nothing should stand in between your data and extraordinary outcomes. Unleash genius.
You know what that's for, marketer. That's more than just a campaign. At NetApp, we think that that's far more than a campaign. For us, this is actually an invitation. It's an invitation to every single one of you to take what's possible and to make it real, to turn intelligence into impact, and to do what potentially once may have felt out of reach actually possible. We know that genius doesn't end at the end of your enterprise. It's actually just the beginning for us. That's proof that intelligent data infrastructure isn't just a promise. It's already powering the future. It's powering your genius. For you, I hope that's confidence to bet big and know that your data foundation is absolutely going to hold.
This is the NetApp story, focusing on what we do best, going hopefully where you ask us to go, and partnering with you every step of the way. In truth, the only way we unleash genius is by helping you unleash yours.
Three, two, one . It all starts with a dream, right? When I was a child, I would.
See every night the wonderful spectacle, full.
Of stars that make me wonder how distant they were. The journey of the light from the stars to my eyes. The European Space Agency is an intergovernmental organization. It has 23 member states, and it is devoted to study the space. We have sent missions to planets. We have landed on a comet. We are creating 3D maps of the galaxy. This is just the beginning. Our science missions produced a lot of data made more and more every year.
For example, Gaia has observed 2 billion stars.
Euclid will observe 1.5 billion galaxies.
NetApp is one of the most efficient, scalable, and flexible systems we have ever heard. In fact, in these 20 years of experience of NetApp, we have never lost a single fight.
NetApp takes cybersecurity seriously.
While we rely on NetApp.
Anti ransomware capability, we are confident that our cybersecurity posture is also improving through the years with the help of NetApp. When I speak about amounts of data, I'm speaking about tens of petabytes because of the importance of that data.
We call this place the digital library of the universe. We really look to inspire the new.
Generations so that they are the future astronauts, scientists, researchers, engineers. We are already in our path to.
An intelligent data infrastructure.
The new missions are able to observe a lot more of the universe. For me, this is the most major wonder in all this journey, which is just starting.
Please welcome back George Kurian.
Wow.
I am honored and humbled to be part of that awesome responsibility to help mankind document the entire galaxy. I'm so proud that we have earned the trust of such a leading organization to enable humankind to go where we have never gone before and to be able to have a track record to have never lost data for that customer who trusts us with thousands and thousands of terabytes of the world's most important data. So proud, so humbled, so honored. I want to thank Gabie and Syam. We've had an awesome and momentous day. We talked about the opportunity that AI creates and that capturing it is contingent on your data. Second, we said that traditional methods of transforming raw data into AI-ready data are highly inefficient and were built for another era.
Third, what you need for this era is a unified data platform built on a foundation of unified data storage with the intelligence of an active metadata fabric that enables distributed data processing so that you can transform your data into knowledge, the knowledge of your stakeholders that helps you capture the opportunities of this era of data-enabled intelligence. We talked about how NetApp is delivering just that with the NetApp Data Platform at the heart of the intelligent data infrastructure that we are enabling for you. Powered by the breakthrough AFX series that creates a new class of data infrastructure system combining data transformation and processing with data storage and retrieval. The data engine to make your data pipelines secure, simple, efficient, and resilient. No mistakes, no complexity, no security gaps. Just your data transformed to the way that you want to use it.
In turn, we transform NetApp from being your data storage provider to your data platform provider to help you transform your data into knowledge. Let me close by reminding ourselves of why we are all here. We stand in a long line of innovators and stewards of data. Data is the foundation of knowledge. The knowledge to understand where we've come from, to share our stories, our traditions, to be able to understand ourselves and to understand each other. By doing so, to build a better world, to make ourselves better, to make our teams better, to make our organizations better, to make our communities better, and to make the world better. Like you and me, so many humans have pursued that knowledge through data for more than 35,000 years. We stand today with more powerful tools and more amounts of data than we've ever had.
The opportunity that we have today is to accomplish so much more at a time when our world needs it more than ever. Let us commit together to go forth from here with a shared purpose, to use our power, our creativity, and the powerful tools that we have given you to unleash our collective genius, to make our organizations, our communities, and our world so much better together. Thank you. Have an awesome insight. God bless.
Good to see you all here, everyone. Don't worry, we're going to head to the festival grounds. That's what's next, the festival grounds.
Look for Bezelm an.
Bezelman will be able to take you there. That's going to be a wrap for our first keynote. Thanks to Gabie, Syam, George, and our guests for that strong start to our week. If you just tune in online, just reminding you to stick around for Nick's post-game wrap-up show, and for my friends in the room, your next stop should be to go to the festival grounds to catch a big unveiling that's going to rock the house. NetApp AFX will be there. Unleash your genius. Follow Bezelman to get there and check out all the other sessions that you can. We'll be back here tomorrow. Remember, for the audience here, you're in good hands. Just look for Bezelman, head to the festival grounds, and if you're online, Nick, take it away. See you all tomorrow.
All right, guys. Thank you so much, Mario, for handing it back over here to us in the festival grounds in the On Air Snapshot Studio. Excited to be back with you guys. I hope you guys are as pumped as I am about all of those announcements that we just heard. George and Syam and everybody else go over today. We have got an action-packed roster of things happening after the keynote as people make their way over here to the expo to see all of the things in person. In fact, one of those things is sitting next to me right now, Mr. Jeff Baxter. Jeff, how you doing, man?
I guess I'm a t hing now.
No, you're my, you're my favorite t hing in NetApp.
Thank you. I appr eciate it.
Good to be here, Nick.
Jeff, all of the things that we just heard George and Syam go over specifically are pretty exciting. Yeah, we've, this is stuff we've been working on for some time now. We've heard disaggregated used as a word throughout the industry for some time. How has, how has NetApp? We're going to have other guests that are going to go deep into the weeds on it with us later. I'm curious to get your take. What is, what is the driving factor behind this and why has NetApp put so much time and energy into it?
I mean, this is really, I think, the biggest launch in, honestly, NetApp's history, r ight? I'm approaching 18 years with NetApp.
I can't remember anything bigger, both in terms of the scope of the number of things we're launching and the importance of each one. I mean, we internally debated about which was the most important, and it was like there was an abundance of riches of different things that we're innovating in. Certainly, you know, we'll talk in depth about NetApp AFX. I know you'll have people on here. I know Keith Asen will answer every hard question about AFX you possibly have, but just this idea of taking ONTAP, which is the number one leading storage OS in the market and has been for a long time the number one in all flash, and suddenly allowing it to be disaggregated storage built for the.
Enterprise era, built to be exabyte-scale.
Built to handle the most demanding AI workloads, but without having to reinvent your entire infrastructure, is transformational for so many of our customers and for everyone else in the industry.
Yeah, it's a huge deal to be able to do all of those things. I want to break in real quick on you because we've got a very special guest and very special presentation for you guys. Sandeep, take it away.
Hello, everybody. I hope everyone had a chance to attend the keynote with George, Syam, and Gabie . Yeah. Yes. Yes. Incredible. Today is a blockbuster day for NetApp.
All of our customers and partners.
We've been accelerating AI workloads across hundreds of customers for a number of years with NVIDIA. The introduction of NetApp AFX and NetApp AI Data Engine changes the game entirely. You and your companies are seeking to seize an AI advantage, but what's the.
Fastest route to AI adoption.
How do you know what's the right?
Data infrastructure for AI?
How do you get your data AI ready? Together, NetApp AFX and AI Data Engine help you deliver your AI data pipeline at scale with enterprise-grade resiliency, security, mobility, and governance. Everything that you want and expect. Syam gave you an amazing introduction of our vision and the potential of AFX. There was one thing that was missing. Yes, the system itself. The moment has come. Let's check out what the system looks like. Are you ready? Ladies and gentlemen, it is my privilege and honor to introduce you to.
The NetApp data platform for AI come.
To life with NetApp AFX. Voila. Wow, this is a beautiful site. Get your cameras ready. As Syam mentioned, I'm going to give this a hug.
All of you AI lovers and hardware.
Lovers are just going to love NetApp AFX.
It is the first enterprise-grade disaggregated.
Storage that delivers extreme performance with massive scale. It gives you extreme, you know, overall it's a powerhouse. It gives you parallel file system-like performance without any headaches. It comes with NVIDIA DGX SuperPOD certified, including DGX GB300, and because it's ONTAP, you get enterprise-grade hybrid and multi-cloud, and it is the most secure storage on the planet. You can deploy your AI along with everything else with secure multi-tenancy and QoS built in. It's plug and play. It just works with your ONTAP-based automation. No silos, no compromises, and it's like nothing else that you can find. Meanwhile, NetApp AI Data Engine pairs with NetApp AFX to give you a full, secure, and efficient AI data pipeline. Think integrated data discovery, curation, guardrails, and vectorization for GenAI all built in. Deploying and building an AI data pipeline has never been so easy.
What I love the best about this is that you can get the full value of the NetApp AFX and AI Data Engine available to you as a service with NetApp Keystone. No matter whether your AI initiatives are in POC stage or production deployments, you can accelerate and scale your AI as a service with NetApp Keystone. AI is coming to every enterprise, and it is coming super fast. Your data is distributed across on-prem and cloud, and you need to be able to access it and mobilize it for AI. We've got your back. We are going to help you do exactly that. We will help you accelerate your AI, get your data AI-ready, and scale your AI with confidence. Let's do this. Congratulations, Sandee. Great job on the announcement here. I never ask anybody to pick their favorite children, but what's your favorite part of AFX?
Is there anything that stands out in particular as your favorite enhancement that we've made? The best thing about this is not only do you get this independent scaling of performance and capacity, but it is ONTAP. You get all of the enterprise-grade capabilities that are just built in. It's plug and play for customers. All your automation just works, and you get all of the multi-tenancy and data mobility and security. Everything is right there for them. Fantastic. I have to pick the RGB lighting. We've finally gotten consumer-grade RGB lighting in enterprise racks. Thank you very much for sharing this moment with us here on air, and congratulations on the launch. Absolutely. Thank you, everybody. Amazing. Let's head back over to the desk here, guys, because the show doesn't end there. Big shout out to Jeff Baxter.
Again, sorry we had to cut you short there, but we will get back to Mr. Baxter at some point because we need to get into some of the more fun, gooey, geeky details of everything that was launched today. First joining me here at the desk, Mr. Keith Acen. You've been a longtime friend of the show. It's good to see you again, my friend.
Good to see you, Nick. Happy to be back.
Happy back, yeah. As I said at the very top of the show before the keynote, 384 days since the last Insight. We're getting photobombed by Bezel Man right now. Hello, Bezel Man. Say hi to the live crowd.
Yeah.
Go take some photos with the new cabinet. That'd be cool. 384 days since Insight, since we've been at MGM last year, and we've taken that time, and it's been much longer than that, to get to a point where we now have a fully baked, ready to go, ready for market disaggregated solution specifically targeted at the AI audience. I kind of just wanted to get your take on that journey. How did we get from—we've both been here long enough now to have, you know, basically have ONTAP running through our veins. How did we get from what ONTAP was to where it is with NetApp AFX?
The reality is we were working on that before Insight last year. This wasn't a, you know, it was a heroic effort by engineering for sure, but it was a multi-year journey to get there. It was because it was so ambitious. It wasn't a Generation 1 product. I love the way that Syam positioned that this morning, saying this is Gen 3, because ONTAP was already such a powerful tool for the enterprise. One of our keystones of this was to deliver something free without breaking what ONTAP was already fantastic at. That was what was so challenging.
Yeah, that had to be one of the biggest parts of it because that was good. That's one of the questions I'm going to ask the other gentleman later, like, how do people sort of adopt this? How do you sell it? Or what you do, do a little fun game called positioning, right? How do you position this for an existing customer, someone in our install base that has a long storied history themselves using this on a day-to-day, or using ONTAP and our AFF products on a day-to-day basis? Why do they need one of these versus some of the stuff that they already have? It's not for everyone, right? We don't automatically say everyone should upgrade to this.
No, that's the best thing. It's not a migration. People need to start planning the migration to NetApp AFX.
This isn't CDOT 2012, right?
No, no. This is another tool in the toolkit. I think as AI is finding its way into every enterprise, administrators are realizing that they may need to do different things. That's where NetApp AFX fits in, if I suddenly need to scale and need to reach these performance challenges that are problematic with existing architectures.
Right.
The existing NetApp AFF products are going to serve most of the workloads out there. AFFX is that new special tool in the toolkit.
Yeah. What are some of the special things? Since you mentioned it, what are. Do you have some. I asked Sandeep what his favorite thing was. What's some of your favorite sort of. If you had to do a top three of why AFX? Is there a rundown of like some of those top three things that you might call out to people that are maybe used to AFF and all ONTAP. What are the things? What are the blind spots they're not going to see coming?
I think the first one is it's ONTAP.
Yeah.
Right.
There are a couple of features we like to close in there, but for the most part people try to do the gotcha. Does it do this, does it do that? Absolutely. For most of those, it's there. That's the number one thing. It's still ONTAP. Number two is the flexibility. It's so easy to scale either performance or capacity, and we made it easier at the same time. Even though it's kind of the flagship high performance product, it is incredibly easy to add nodes, incredibly easy to add storage, easy to administer.
Yeah. This is something I can remember back to the 2000s when I was a customer. I can remember thinking, why can't I just scale things this way? I'm excited that the industry is finally getting to this point. I'm excited that we're leading the charge in many ways on this with ONTAP, and that we're not rehashing or making up some new whole thing. This is the ONTAP that people know and love. It's all of the Flex and Snap words that people know and love. Right. Those things that we are, our bread and butter, that we're known for, persist, they stay there with some fun new tools that we're going to talk about with the boys here in a few minutes.
I think that's the brilliant part of it, is the fact that unlike having that separate silo, this will still run your core infrastructure and your AI.
Yeah.
Right.
That is the brilliance of it.
It's not separate.
Is it true that it can all exist in a cluster, or are we two separate clusters at this point?
Separate clusters, right. Because it does behave quite differently. It's interesting. The surface level of ONTAP is very, very similar. The underlying architecture is quite different, right. That is a testament to how much work was done under the covers architecturally, how it manages storage, how it writes to storage. All of that's been updated. On the surface and how it interacts within your data ecosystem, all that's preserved. It's sort of like building an entirely new basement or foundation to a house without actually disturbing the house. That's really what this has done.
Do you see any particular workloads as we launch this? Do you see any particular workloads that stand out being especially useful or would benefit from this, is what I'm trying to say. Are we talking about our traditional Tier one apps, or is this going to be very large, parallelized, you know, language models, things like that, that this is kind of specifically targeted for?
Yeah, AI is obviously the key tenet. That was the workload that we kept in mind when we were planning NetApp AFX. You know, it's really anywhere you have that scenario where you need huge performance, huge throughput in particular, without necessarily a giant capacity.
Yeah.
You know, it's one of those, a bit of a flashback. We were talking about you and I both having some gray hairs in the early days. You know, if you had a storage performance problem, it was always add more spindles.
Right?
Add more spindles. You know, it never was the controller, it was always, you didn't have enough spindles. It is amazing to think how that's changed. With today's Flash media and it being NVMe connected and then being connected via RDMA, the fact that the throughput of what we can do from the media, this really unlocks the shelf over there, is really exciting. Unlocking the performance of that media means that we can now have many controllers powered by a relatively little amount of storage. Those workloads that have relatively small data sets but have tremendous throughput requirements, those are the ones that are really going to shine for AFX.
Before we get into the details of ONTAP or the details of hardware, just from your position or your perspective, I should say, why should people be so super excited about this? What is, what's the thing we haven't talked about yet? What's the, what is the standout thing part of this that's really going to catch people's eyes?
I think the fact that this is the foundation for so many exciting things to be built on top of it is important. Not only will NetApp AFX deliver that raw performance that AI workloads need and other high performance, unstructured data workloads need, but the fact that it integrates tightly with the metadata engine and then on top of the metadata engine we've got these AI services that bolt on top of it. It's such a rich stack that means that we can meet customers wherever they are in the AI journey. Hey, if I just need a high throughput, high performance storage system for whatever that workload is, NetApp AFX can deliver it.
There is going to be a point where you're going to want to know more about the data that's sitting on there, have better insights into that data, and that's where the metadata engine comes in. At some point you're really going to want to accelerate your inferencing pipeline, and that's where the NetApp AI Data Engine comes in.
Bolts on top of that.
Fantastic. Keith, thank you so much, man. It was a pleasure chatting with you again. I love it when we get to do these kind of roundtable sessions. I want to have you back on the show at some point. The fans miss you, the crowd. The audience misses you. You are one of the crowd favorites. We'd love to have you back on. We got to get Mr. Hurley back on at some point to go over because we have some ONTAP news this week as well on top of what AFX is going on. Thank you very much for stopping by. Always a pleasure. Appreciate it and we'll see you next time.
Thank you so much, Nick.
You got it.
All right.
While we're swapping out presenters here, I wanted to go over something. Last year we brought it back because we bracket Insight in person now. Yes, to answer your question, there is a new Hardware Universe poster. What are you doing? Right. We'll get to you in a minute. There's a new Hardware Universe poster. You can pick them up here. If you're at Insight, make sure to go by. I don't remember which booth it is. I think it's over in the petting zoo. If you're not here with us, you can get one of these in the gear store online for $1. I think they are. I'm not sure exactly, but make sure you get your latest and greatest 2025 hardware poster and see if I can fold this thing back up properly. That was Keith Asen.
We're going to have Keith back on at some point to go over a lot of good stuff because the way that we position this system and the way that we structure it is going to be very, very important when we're working with some of our technology partners and their applications, technologies, workloads, and all of that stuff. Where the rubber really meets the road there is when things get down to the ONTAP level.
Right.
What some of the new technology that we're working with ONTAP is going to be very pivotal to this, very key. I couldn't not have one of my oldest friends in podcasting be on the show with me, Mr. Justin Parisi.
How's it going, Nick?
How you doing, Mr. Tech on Tap?
You know, just trying to keep the dream alive.
I love it.
How much of that dollar do they give you for promoting it?
How much what?
How much of that dollar do they give you for promoting the hardware?
I think it's $0.01 per poster. They're pretty seat custom. It's nothing. First of all, I want to say thank you for running Tech On Tap as long as you have to. All of the audience out there, for you guys that don't know, when I parted ways way back, Glenn and Pete and Sully and those guys, shout out to them, took over. You took it over in 2017, I think it was.
Yes.
Yeah, something like that. You've been running for like eight years, dude. I mean, congratulations. That's an incredible run to keep that show going like that. Any show.
It's fun to do. I mean, it's not, a little extra work, but it's all right.
It's all good. You and I both know how much the dissemination of information out to the audience and to the end users is so important, critical, much less our field and all of that stuff. I just might thank you. Thank you for keeping that kind of stuff going. We're trying to do some of that stuff with On Air. I'm excited to see you doing video now as well. Yeah.
Finally got dragged into that.
If you guys haven't checked it out yet, you can now watch Tech On Tap episodes on YouTube.
Way more than you want to.
Be sure to make sure you subscribe if you're watching here on YouTube so you can see his episodes. All right, let's get into the details.
Yes.
Sandeep just did the unveiling, just had a cool sort of positioning chat with Keith to get things started.
Yep.
Talk to me about NetApp AFX and ONTAP. What is it? What are the implications for ONTAP as it stands? What are the big picture items we need to talk about?
I mean, the first thing to keep in mind is that ONTAP is ONTAP. If I were to go to the NetApp site today and download the ONTAP image, that's what we're running AFX on.
So it's.
That's not changing.
Right.
It is still ONTAP. ONTAP Unified is what we're calling the, you know, the all-encompassing thing that's still going to be there. It's not going anywhere. It's still very good for a lot of other workloads, including AI ML. I mean, honestly, you can still do that kind of stuff. The intent of AFX is to broaden that horizon a little bit, to give you additional scale, additional independence, and a few other things as well.
Yeah.
Can we get into some of the details of it? It's ONTAP. It's ONTAP. It's ONTAP. It's still there, but some of the structural components of how we do things like RAID groups and aggregates and stuff like that fundamentally change. I want to, before we get into this, pre-invite you to a Deep Dive episode.
Yeah, absolutely.
We can get into really how some of this works. We can do some diagrams and slides and all of that kind of stuff. Can you briefly go over what structural changes happen within ONTAP when it comes to things like that, and should people be concerned about any of that stuff?
Right.
The idea of a data silo is basically somewhere where you put data and it's really hard to get it in or out of place, right. For years that's kind of been an aggregate where you have a node attached to a disk shelf which has an aggregate attached. You have root aggregates, you have data aggregates. When you want to move data such as evolve, move between those aggregates, there's a copy involved. You have to SnapMirror, basically. What NetApp AFX is doing is it's taking that concept of the aggregate away, it's getting rid of it, kind of virtualizing it under the covers.
Extracting it.
You don't manage it anymore.
Gotcha.
You don't think about, oh man, where do I put this volume? Oh, I'm running out of capacity here. Oh no, what do I do? I got to move things around. Oh, I got to copy stuff instead. It's just a single capacity pool. We like to call it the storage availability zone.
Okay.
It is just one big monolithic piece of capacity that attaches to all your nodes. Every node can see all the capacity. There is no disk ownership, there is no concept of aggregates involved where you have to deal with things. It helps flex proof volumes a bit because you are not worrying about your constituent volumes filling up an aggregate, moving things around that way. There are a lot of benefits to having that single pool of capacity.
You're telling me I don't have to care about disks or RAID groups or aggregates anymore?
I mean, RAID groups are still there, but ONTAP manages that for you. Yeah, and they're large.
I just don't have to care about it.
You don't have to care about it. They're also larger now. We used to do 28 drives per aggregate or per RAID group with three parity drives. If you had four RAID groups, you're using up 12 parity drives.
Right.
At large drive sizes, that gets expensive.
Yeah, especially as we get into 3060 and beyond.
What we've done in NetApp AFX is the max RAID group size is now 96 drives. My 4 RAID group example now takes those 12 drives and condenses it down to 3 parity drives.
How does this affect things like the auto partitioning that happens in ONTAP disk partitioning when you're doing that? Is that still the way that we're doing things?
I mean it's still RAID tech.
Adp.
ADP doesn't exist anymore essentially. With ONTAP you had RAID volume or root volumes, and then you had a root aggregate. The root aggregate only needed 350 GB for the volume.
Sure.
Rather than, you know, taking up three drives, you know, 30 terabyte drives each, right, and using that for 350 gigs.
Right.
We had to partition it across data drives, which worked out great. I mean it's fine. With disaggregated, with AFX architecture, the root volumes are no longer requiring that hard coded aggregate anymore. It's a virtualized instance within this capacity pool.
Gotcha.
It just goes in the capacity.
Pool and you're done.
You're not talking about ADP or anything like that.
For unified stuff that's going to stay around, you know, on that side of it.
Yeah, yeah.
Still there for now. It's just not necessary based on how you're doing it.
We have also changed the functionality of the root volume so that our replicated databases for the cluster, which is what makes a cluster a cluster, and the boot images, when you boot ONTAP, are moved to an onboard boot media drive that's attached to each individual node. Now if I lose access to disk shelves, I can still boot the cluster. I don't have a panic and I have to go into maintenance mode.
All that stuff.
It still boots up fine, and I can do my troubleshooting and maintenance from there.
Oh, fantastic.
If you lose the boot media, you just simply plug in another one and it replicates and you're done.
This sounds amazing, pretty awesome. There are always, you know, stipulations or limitations. I don't call them limitations, but we didn't 100% everything on day one. What are some of the things that people might not be expecting that are there, and what are some of the things that you might expect that maybe haven't made it there yet, but we can sort of look forward to being on the roadmap.
I mean, things like aggregate commands, they're still there essentially, but they point to the storage availability zone.
Got it.
I view all my capacity from that perspective. It's a lot easier to manage.
Lot easier to view.
REST APIs out of the box should all work for the most part. We've pulled out things like MetroCluster and SAN REST APIs because those don't exist in AFX. Also, there's some performance statistics that maybe you can't access right out of the box. We'll eventually have those back in there as well. Otherwise, REST APIs should all work. In our EAPs that we did, our customers tested our REST API functionality with their automation suites. Nothing broke.
Wow.
I mean, it's like, all right, cool.
If the API still worked, that means a lot of the Ansible still works. The Terraform stuff, a lot of the automated insta stand up stuff, should all still work.
Things like Trident still work.
Amazing. That was going to be my next question, was like Grafana and.
Harvest is the platform we're using for the performance monitoring. That works.
Beautiful. I mean, it's shout out to Grindstaff. Yeah, Harvest.
Good stuff.
With the Trident thing in mind, there was a question I was going to ask you. I stumbled on it. The Trident part is if we look at some of the other tools. Oh, SnapCenter was what I was going to ask. Speaking of Trident and recovery and stuff, if you look at some of our auxiliary tools like SnapCenter, are we going to see support for those sorts of things right out of the box?
Yeah, I mean, I can't see why not. I mean, the REST APIs are there as long as they're not running on Zappi.
Yeah, good point.
That doesn't exist. Yeah, we've removed that. Finally.
What are some of the things that, to close it out here, what are some of the last things that you think people should be paying attention to when it comes to NetApp AFX, that they might not be thinking about? Certain things. I love the idea that we're abstracting away a lot of the hardware componentry that we've basically dealt with as admins for 20, 30 years at this point. I love that we're abstracting that away, but when we start talking about how we work with other external forces, whether it's other technologies, tier one applications, different workloads, some of the things I was talking to Keith about were who is going to take advantage of this.
Is it only going to be enterprises, or are we looking at some of our technology partners to come in and sort of integrate with this in a more direct way with their AI solutions?
I can't see any reason why they wouldn't. The goal is to make this available for whatever you want to use it for. The initial push is AIML workloads, specifically inference and training, because of the throughput needs and the ability to scale those compute nodes independently of the storage nodes. The benefit there is your disk ends up being so much of a large percentage of your cost to your solution.
Sure.
If you've only got a static data set that you never change, you just use to train AI models on, why do you need to buy more disks to do that?
Yeah. Right.
That's really the key. If you don't want to use it to move your data over, we can do things like FlexCache to an origin ONTAP system, or we can do SVM migrate to migrate an entire SVM over, cut it over and not have to take any outage because the mount points are all the same and the file system handles are all the same. There's a lot of good stuff out of the gate from AFM FX. Really, the only thing that's missing initially is the FabricPool tiering.
Yeah.
Because it's an aggregate level feature, they got to figure out how to deal with that. The architecture itself really lends itself to being able to do a lot more advanced things in the future. Some resiliency enhancements, some storage efficiency enhancements. We're doing new things with HA failover where we're mirroring the NVRAM across a clustered network.
Oh, that's interesting.
You could take your imagination down the path where that might go. Right now it's still HA pairs, but it's replicated across the network rather than directly attached to each other.
You mentioned something in the beginning. I'm going to let you get out of here because I know there's all kinds of crazy buzz going on right now. You mentioned something about FlexGroups, this benefiting FlexGroups in a big way. You also said that we removed the constraints around tying to aggregates, like with that. I know with FlexGroups it's pretty well tied to aggregates as far as where it's going to build its constituents. Does this work with FlexGroups?
Oh, absolutely.
Yeah, it seems like a match made in heaven for FlexGroups.
It is a match made in heaven for FlexGroup because it can leverage all that capacity, and you don't have to worry so much about the underlying constituency.
Right.
We have also enabled advanced capacity balancing by default within the Flex group, which is basically file parting across different member volumes for large files. Not only does that provide capacity benefits, it also provides some performance benefits for single large file workloads. If I've got a thousand clients pointing at the same file and they hit different file parts, that's going to give me more parallel access to that file. I'm going to get better overall performance in general when using that architecture. A lot of good stuff there.
I know I've got some gear running at the house. I'm not sure I could run that thing at the house. I don't think that.
You have the power to run that.
I don't think I have the flooring to do that or the cooling. They asked if we were going to power it up yesterday, and I was just like, not if you want to hear anything else.
Yeah, they got somebody in the back just waving their hand like, cool, cool, cool.
Justin, thank you so much for stopping by, man. Again, congrats on keeping the Tech On Tap show alive. Looking forward to how you move it in the future and looking forward to having you on for some more information soon.
Yeah, absolutely. Thanks.
Awesome. Thanks, Justin. All right, guys. Working right through it just like we did last year. Today is going to be all AI, all the time. I hope you guys are ready for that. Starting off with Sandeep giving us the unveiling of it, actually going all the way back to Syam, giving us the intro to it over in the keynote. We had Sandeep here doing the unveiling, Justin talking about the ONTAP components, Keith giving us the market positioning and kind of overall perspective. Now it's time for the hardware. Get in here. I got to introduce my new, my one of my best friends here at NetApp, affectionately known as Gebs, Mr. Chris Gebhardt. Good to see you, buddy.
Good to see you as well.
I'm excited to talk to you because you know what a gear geek I am at heart. Take the NetApp off of my sleeve. I'm still that gear geek. When I turn around and I look at that cabinet behind you, I just, I kind of go, yes, yes, all the yes. There's that shiny gold DGX sitting in the bottom of it. I kind of want to see if I can get one at home, but maybe. I don't know. Probably not. Probably not. Listen, at the end of the day, this AFX platform I think is going to be a game changer for us. You and I both lived through, and Justin and Keith and Jeff all lived through the trend, the C dot transition. One of the things I want to make very clear is that this is not that.
This is absolutely.
This is not that. Please fear not. We are not doing that again to you. We're not. Not rug pulling that part. What you need to understand is that this is a completely new series, but it still leverages all of the goodness of ONTAP in the same way. Justin just broke down all of the benefits of ONTAP, kind of the abstracting away of aggregates, the constructs of aggregates, RAID groups, and things like that, just making all of the capacity available to the entire cluster at that point. Let's get into the hardware side of it.
Sure.
I see a DGX in there. I see a couple of switches, I see eight nodes, I see NX224 shelf and I see the new. I'm not sure what the model is.
On the AFX,
AFX1K.
AFX1K. Okay.
Yeah.
Break this stack down for me. How we got to this sort of assembly of components of a cluster and what it is. How does it all interact with each other to make ONTAP awesome?
Earlier in the show you talked about how we refreshed our entire portfolio of hardware, and we now have our entry level platforms, which are a 2U form factor with two nodes, two controllers within the node. We have our mid sized platforms, which is our integrated chassis that has internal drives. Then we have our modular chassis, which is that 2U form factor with a single controller within the node.
Okay.
This AFX1K is based off the same concept as that modular chassis. There's no drives within the individual controllers themselves. Gotcha. They're serviced from the cold aisle, just like the AFF A1K. The difference in the hardware to it.
They're wheeling it away right on screen.
High AFX. We'll see you later.
Iafx the.
The difference is that we're not booting from the SSDs within the disk enclosure like we were.
Right.
Justin mentioned that. Now everybody shares that same pool of storage. What we had to do is we had to create a. We are using M.2 SSD as the boot media. Now we're booting ONTAP off of that M.2 SSD.
Nice.
That lets us really worry about the aggregates and the disks as something within the startup process.
We don't have to waste a bunch of physical drives on root volumes and things of that nature.
Yeah, we still do some things within. Okay. Within those disks. For the most part, ONTAP is booted off of the M.2 SSD and we then have a lot of networking cards, 100 gig networking cards in the rear. We have now, instead of doing an interconnect between the nodes via a crossover cable, we're doing switched. Those HA ports now go to a Cisco 400 gig switch, as does the cluster, as does the storage. Now the HA, cluster, and storage are all over a switch network.
Beautiful.
That's how we really get that disaggregation, is that now everything is network, network. Now we can treat everything independently, whether it's the compute or the storage. Add more storage by plugging in another shelf. Add more compute by adding additional nodes.
Do the compute nodes and the storage capacity nodes, shelves, all plug into the same switching architecture? Do you logically separate those in VLANs? How does that, how do you keep the compute and storage overhead separate, but you still allow the compute nodes to talk to the storage, communicate with. Great question on the front end.
Everything's over a private switched back end. We're using brand new Cisco 9332s and 9364s. They're 400. 400 gig ports.
Beautiful.
We use four-to-one breakout cables to the rear of the DIS enclosures as well as the controllers. When ONTAP boots up, it knows what is in what slot, and it will assign a VLAN to that particular port.
It will program a VLAN into the switch for you?
Yep.
Any port from one through the 32 port, 1- 30 can be any function.
Okay.
We have two ISL ports, 31 and 32, so cabling is simplistic.
Right.
You just want to add a new shelf. Doesn't matter where you plug it, you plug it into fabric A, fabric B, you're done. It automatically will discover the disks, it will add them to the RAID group, it will automatically add it to the capacity pool, the storage availability zone.
I'm blown away. I've never heard of anything that would automatically program my switch for me like that.
Yeah, we have an RCF file, right? Our RCF file.
That's true, yeah.
You know, basically defines the configuration. That comes when you buy it, the manufacturing process puts.
That on and we update that independently of.
Absolutely, absolutely.
We keep a lot of this stuff up to date for you guys. We will do the ONTAP images, we'll do the switch images. You guys that have been doing this stuff for a while know this stuff. Where does the DGX come into play?
The DGX comes into the data network. That's where we're talking massive amounts of data with 100, 200, 400 gig networking cards to your data network. What I think is more interesting is our shelves.
Okay.
The ability for a single NX224 shelf to scale to 1 SU.
Right.
SU is the scalable unit for an NVIDIA architecture. A single shelf will be able to support a single SU, which means it's three times the power of what we had in the previous version of our NX224. This is called the NX224.
Yeah.
It has more memory, more CPU, has 16. Each shelf has 16 100 gig ports that plug into four 400 gig ports on the back end.
Sick. That's a storage controller. 16?
This is so that we can have eight compute nodes to a single storage shelf to maximize the amount of compute to storage capability.
If you've got a switch in between those, why do you need all of the ports? Or is it to parallelize the access to the disks?
Absolutely. Each disk is capable of doing 7 GB a second. Translate that into 24 GB. Making sure you have enough bandwidth, network bandwidth. Making sure you have enough compute to be able to harness that.
I want to make sure I heard you right. The NX224 shelf has 1,600 gig ports on it.
Correct.
Wow. Wow.
Yep.
I'm two-port 100 gigs doing the.
Config math in my head, and it's pretty astounding.
Yeah.
If we look at the back of the actual storage controllers themselves, talk to me about the difference between the A1Ks versus the aide nodes. Okay, and what kind of IO are we looking at on the back?
Yeah. The AIDE nodes are single U AMD compute with NVIDIA GPUs.
Okay.
Right. They're using, I believe, 100, 200 gig networking to the data switch, as well as they'll be participating in the storage network as well. They'll be a part of the ONTAP ecosystem in the whole solution.
Are these effectively AMD-based servers under the covers that are just metadata engines?
Yes, we've created some software. We basically built this value-added software to be able to help our customers provide a turnkey solution for their AI data pipelines.
That's fantastic. I guess where I'm going with this is I'm mostly curious to see where some of our technology partners and some of our very large sort of service provider or technology provider customers come into play, how they tie into this, or how they take advantage of this sort of architecture. I think this is something that the industry has wanted for a long time. I know I can have memories of back when I was a customer, even my small little shop that I had. I certainly wish, gosh darn it, I sure wish I didn't need as much, didn't need to just buy disk to get more performance.
You have to buy sequentially. You have to buy pairs and compute. One of the really great things about this architecture is that when we used to size, we used to size for performance headroom or performance capacity to make sure that during an HA event we were able to have a steady state workload that wouldn't exceed the storage controller's capabilities. Because we don't have true disk ownership and volume ownership, we have a loose volume ownership. We can distribute those volumes to any of the nodes in the cluster. If I have a granite today, they're an HA pair. If I lose one node, it will distribute all the volumes to the rest of the controllers within the environment based on performance.
Going back to something Justin just said. FlexGroups and constituent volumes, now it's not tied to a single aggregate. Your constituents could span the entire storage capacity pool.
Exactly.
That's.
Any node during a failure automatically moves over. We have what we call zero-copy volume move, which means we're just moving pointers.
Yeah.
We're not actually having to move data.
Right.
We're able to reassign a volume to a node very rapidly.
That's fantastic. Chris, anything else you want to leave the fine folks with? Listen, we're not done. This is going to go deep, but we just don't have the time to do it today. I definitely want to sign you up for a deep dive episode later. We'll dig out the diagrams and the demos and really go deep into some of this stuff with people. Did NetApp AFX make it onto this year's Hardware Universe poster?
I believe it did, yeah.
I'll have to check it out when I see it. I was literally handed one right before we came back on the show here. What can people be on the lookout for? Is there anything that you can sort of tease? Like what haven't we talked about yet? We talked about the AI nodes, we talked about the A1KS, we talked about the switches. Is there any hardware componentry, IO-specific I/O cards or anything like that that people might see? Or is it just all 100 gig Ethernet?
100, and then the client cards are the 100, 200, 400 gig cards.
Right.
The 400 gig is supported 9.18.1. Right. Every version of ONTAP, we're going to be doing more QA, testing the limits, pushing the limits with each version of ONTAP to get to exabyte scale with hundreds of compute nodes. We're just really excited for the future.
Yeah, I'm very excited. New hardware, baby. Woohoo.
I'll get some to the lab.
Gebs, thank you so much. Great seeing you as always. Thank you for the breakdown for the audience. Appreciate it.
Thank you very much.
We're getting close to the end here. Real quick, before we can sign off for the day of day one, I wanted to bring back a friend of mine who kicked off the show with me this morning and ask him some questions about what do you think? Welcome you back to the show, Mr. Jason Benedici. Thanks for joining me again, man.
Thanks, Nick. It's been great.
I wanted to throw it back to you. We talked this morning obviously before the keynote. Now you know about NetApp AFX and we can talk about it. Yay. Yay. Great positioning from Keith Asen on the overall market stature. Great overview of some of the big changes and sort of philosophical changes that are coming to longtime users of NetApp ONTAP. That's one of the big things that I'm taking away from this, is that ONTAP is ONTAP still. Like a lot of the structural things that have sort of been beaten into our heads around RAID group and aggregates and sizing and performance and all of that stuff, we don't have to care about that stuff anymore. With NetApp AFX, I'm wondering how much of that it'll eventually trickle down into the unified systems with some of the more updates in ONTAP, if that even is the case.
I'm curious to hear your take as a member of the A Team, as someone who's out and about in the field and is specifically over on the other side of the world in the UK. How is this something that EMEA is going to embrace? Is this something that you think end users are going to embrace?
Yeah, I think, first of all, it's really amazing to see this come to fruition. We've been talking about disaggregated controllers and storage and things for a long time, especially over on the A team. We've been to various events and we talked about it and it felt like there are times when it just wasn't the right thing, but now feels like the right time. It's really good to see this. Had to learn some lessons to get to where we are, which is great and I feel feels really nice. Like it's more of a.
It's.
It's improving on existing innovations that have already been out there. There are other people that do disaggregated in, you know, one form or another. NetApp took the time to really think about it and look at it and say, you know, we've been through painful transitions before, we can't do that again.
I still have the scope.
Yeah, exactly right. They've taken their time, they've been considered about it, and they've looked at all the pain points and the things that might have impacted this transition and worked through them. I'm really excited to see this. Back to your question about whether I see this being embraced over my side of the pond. Absolutely, yeah. We talked this morning, AI is incredible. It's building new industries, it's improving existing industries, and everyone's working on it. What George was talking about this morning in the keynote, about how the ways that we've worked with Big Data before are not the same ways that we need to work with Big Data now.
The idea of this massive knowledge graph and all of that metadata and the availability of having, oh, I know this data is here about, like this, and I know this is classified as that, and just bringing all of that knowledge into one place and allowing me to bring the apps to it, the agentic apps or my gen AI apps, whatever it is that I'm building, that's a huge, huge game changer.
Yeah. Especially looking at RAG situations, like now all of a sudden we can almost use NetApp AFX as a sort of consolidation vehicle to bring all of these little silos of data that might exist in various business units throughout your company. You've got all your financing data, your HR data, your data engineering, your research, your manufacturing history. All of that stuff can now be sort of collated into this one place that you can then build, use MCP. You can build agentic around it and have agents analyzing, inferencing. As Keith mentioned, one of the big use cases for NetApp AFX is inferencing. It's going to be huge. That's why I kept asking the questions about how do we see other technology providers, creators, really tying into the capabilities of this.
I see them building the platforms that are going to, from a software side, take advantage of this kind of physical architecture.
Yeah, I mean, there's, in my day job, I work with a lot of data and not just from the traditional side of things, but from the software development side as well. Some of the hardest things to do are prepare the data, to get the data in the right formats or just to have that effective vector database that I can go and query through and look and how, if that can be partially even done for you, let alone if it's done in a more complete way that you can just bring the tooling straight to that and say here, just query this. It knows everything about my data. It can tell you all the metadata, all the important things. I haven't got to do a huge amount of work.
I can focus on building my taxonomies, my ontologies and the data that surrounds it and build out that metadata. I can focus on those things and that frees up a lot of time, and time in this game is money. Because there's not enough data scientists in.
The world at the moment.
are not enough people to do all of this work for us. If we can take that burden off and just bring our apps in straight away, a lot of that light heavy lifting is done already.
Yeah, I look at some of the platforms out there like Hugging Face. We work really well with Hugging Face. They're building ecosystems of these kinds of apps and workloads where you can tie them into cloud services or you can tie them into maybe on-prem use cases. If you're building AI factories and things like that internally using EMCP, using some of using agents, right? All of this stuff. I look at us as the, we've now got this ability to become this kind of foundation floor of all of the data that all of this sphere of tools and applications are going to now take advantage of. In the enterprise situation, I feel like I'm harping on this, but it's so powerful to be able to take all of your data.
We've stored it on NetApp's before, everybody's home drives and departmental shares, backup folders and stuff like that. You see a lot of that has moved to the cloud in recent years. I do see an element of this as, man, now we can point agents at it.
Yeah.
Now we can take snapshots of copies for the agents to run against. Now you can build proper unit testing, you can do all kinds of things agent driven with snapshots or collect clones, if you will, of this kind of stuff without actually harming the core root of your data. This takes what we used to do with QA development and all of those things to a whole different level when you build inferencing and training around it. Now you can inference and build different styles of models, maybe with different reference points. You can point it at different applications, can point at different models based on what you've trained them on. The possibilities here are endless.
The beauty of it to me is that it still continues to really laser focus in on all of the fun Flex and Snap words that we've known for 20 to 30 years. There is nothing foreign here. For the seasoned ONTAP administrator, this is going to be a gateway to fast track your company into getting into AI and bring some of the things forward that the unified platform was a little more held back in doing. Still valuable, still super plat, still super powerful. I still run mine at home, you know, but I'm not going to run an AFX at home. Right.
That brings a really interesting point.
Famous last words.
I would love one. Over the last sort of.
I wouldn't love the power bill.
Yeah, probably true, but I've got a lot of batteries at home, so could work. Over the last sort of six, six or eight months I've been working on taking blog posts, media articles, videos, things like this that we do, and taking that data and passing it through to try and build an agent that can mimic my tone and my words and things like that. I kind of used it a little bit this morning for one of the articles that I put out from the keynote.
Now, if I had NetApp AFX and I had all my data there and it had all the metadata about it, these are your blog posts, these are the videos you've done, these are the transcripts, this is that and all that, and I could point it at that straight away, it would have saved me so much time over what I've had to do and the way I've had to do it because I've had to go run through lots of different ETL pipelines and other bits and curate the data to get it in to the agent that I wanted to work with. The ability to have all of that done for you just by saying, oh my data already lives here, that's amazing. Amazing. Back to that ecosystem that you were talking about, like I mentioned this morning, that OpenAI ChatGPT, they've started to build.
You've got the ability to attach your apps.
Yeah.
You know Agent kit. Yeah, Agent kit. I can say, oh I use Outlook and I use Slack and I use Discord and I can connect it and it will start pulling the data out of those as well. Imagine I've got a Connect 6 monitor connected to NetApp AFX Control Everything or Dynamics.
365 at the company level or, you know, Power BI, SharePoint, all of these things that every company ever uses on a day-to-day basis, and be able to begin to build an interface on the front end of that, that your employees, the people that are your analysts that are using that data, get them out of the spreadsheets, and now all of a sudden they can write prompts.
Yeah.
Instead of Excel formulas.
Business intelligence is going to change. Yes, dramatically.
I think analytics and BI is going to be one of the quickest, most affected vectors in enterprise when it comes to RAG.
Yeah.
Some of the Nemo solutions, the solutions like Nemo that NVIDIA have, like that's going to be a game changer for enterprises.
I mean just look at, I say relatively simple, but things like accounting, you know, you've got just, there's a huge amount of data there and you could plug that in. You got new ways of doing analytics, payables and receivables. Yeah.
If I'm a CFO and I'm not using AI at this point to do P&L budget balancing, all of that kind of stuff.
Like even if you just do things like tracking expenses.
Yeah.
Why are these anomaly? Why is that anomalous all of a sudden? What happened that month where expenses are three times more than normal?
Oh, we had an event. Yeah, you know, certain things.
Exactly. You might even get the alert. You might not even get it because it could look at all the rest of the knowledge graph and go, oh, we were doing insight that month. We know that expenses are high. Just bat it out of the way.
There's one fun one that I'm working on building at home. One of the things that, being as sort of out of the game, hands on as I have been over the last 20 years, there's one thing that I'm really lagging on and it's I forgotten most of my old CCNP stuff. Routing and switching in 2005 was a lot different than 2025. Right now you have to know Python and be able to automate a lot of the stuff. Anyway, I want to build a team of network engineers, agents, and I want them to watch all of the logs and collect them and analyze those logs and do packet capture in a recurring ongoing basis and report any anomalies to me when they find them. Maybe we even get to a point one day where I tell it to suggest changes to the config to eliminate this.
Maybe we get to the one day where I just allow it to go change the config, update the config. That's one of the things that I personally want to work on. I want to do one for network, I want to do one for ONTAP storage. I think it would be really cool. As I was saying in the very beginning, in the kickoff show, I do think there is a logical point, some number of years in the future where we have prompt interface in every operating system that's out there, whether it's mobile, whether it's a storage system, whether it's a Windows operating desktop. I think there's some point where we get to that, where we're able to interact so that I don't have to go through the registry of my Windows machine and delete a key or change a value.
I think it's just going, I'm having this problem, please fix it. I think that at some point in the future we get there and I think that's going to start in the consumer side, it already has, and then we're going to see it extend into the enterprises.
Yeah, a man after my own heart. My home lab experience at the moment, and this is something I'm going to start writing about probably in the new year, I've had a few different projects going on at home around solar install battery, and I've been building out the monitoring platform for that. While I was doing it, I was like, I'm going to redo the entire home lab. I've started feeding all the architecture, the design, all the decisions and everything into an LLM and started working on some agents as well. I don't need to do all of the work. I just need to know enough that I want what I want to do and understand that the outputs are correct, that I can check the model and everything else.
Right.
There is enough, you can teach it to run home automation, to run a home lab system. Like you say, I think that starts here and the enthusiasts and the consumer side is going to start that, and I think you'll start to see that make its way into enterprise. We talked about it this morning, about like, you know, you just have a prompt in vSphere. Yeah, I need three VMs, they look like this.
Update these hosts, add these hosts to the distributed switch. Anything you could. Yeah, it's coming. That's my hot prediction for January. You guys will see it in the video when it comes out. Yes. I'm going to be back to making videos soon. Jason, anything else you want to leave with the audience before we get out of here?
I just want to say the keynote was impressive. It really was. George standing up there talking about the culmination of 30 years of work to where we are now and intelligent data infrastructure. It just makes sense. We're at the right time, and NetApp are making the right moves, talking a good game. I'm really excited to see how this goes.
Yeah, the one thing that I agree with you there, one thing that stood out is he used the phrase data enabled intelligence. We used to use a phrase called data driven and I think that's very largely been adopted by the industry. You hear people say we make data driven decisions all the time. I think data enabled intelligence is a huge one. I actually wrote that one down literally because I think it means something. I think you have to have the data in order to have the right level of intelligence for your company. One cart before the horse. In a sense, I can sell you the biggest, baddest AI solution and I can roll 10 racks of gear into your data center.
If your data is not ready for that, you're not going to have a good time and it's not going to be a good investment on your part. However, if you have the facilities to be able to curate this data, become librarians of your history and naming conventions, be OCD about folder structures and file naming conventions so that it can all be indexed and it can all be inferenced and then you can load it into these things, you're going to have a fantastic time.
I know. The great thing for me on all of this is how it parallels human intelligence. You don't learn anything without the data. You know, when we pick up new topics in our field, you're there with, you know, white papers and best practices, and you make sure you curate all that data that you need to build that new interaction intelligence. This is, we're getting to that age where we're not just really kind of talking about artificial intelligence in those ways. We are looking at how we really build intelligence, how mimicking structures of learning through human processes into what we're building. That's really exciting.
Yeah, Jason, thank you so much, man.
Great.
I always love chatting with you and geeking out with you about this stuff. I think we're on the cusp. Like you said, we're at the very beginning of a lot of this stuff. Thanks for stopping by again.
Thanks very much.
See you tomorrow. All right, guys, we're getting close to the end here. We got a few more guests for you to get through. Thank you for hanging in with us here. Just a quick reminder. There are Hardware Universe posters, brand new ones with all of this new stuff on it available. If you're here at the show, you can get one for free over in the petting zoo. Come by the festival grounds and check that out. If you're not here at the show, you just want to watch online with us. That's okay. You can go to the NetApp gear store. I think it's $1, and you know, that's all they make you pay for it and have one shipped to you, mailed to you. Make sure you get that. Make sure you also get into our Discord community. Discord.gg/NetApp. Or you can just go to netappdiscord.com.
We crossed 6,000 members in there. Thank you to all of our people that are hanging out in there in Discord. For those of you in the live chat on YouTube, we see you guys too. Thank you for hanging out with us. I want to invite a friend of mine onto the show now. We've been friends for quite some time now, Mr. Glenn Dyer. Glenn, how you doing? Man, what are you up to these days? Tell the people who, what are.
You doing five years already, and I am a Global Principal Technologist at Equinix.
Nice, nice.
Obviously, you know, with the largest data center company in the world, we're kind of known as the home of hybrid, multi cloud, right?
Yeah.
One of the reasons I went to Equinix was because of my NetApp experience. It really, it's peanut butter and jelly. I mean, NetApp is the king of data motion. Right. It's been in the DNA of NetApp forever. If you're the car, Equinix, we're the road, the exit ramps, the rest areas. There's a very high degree of likelihood that a customer is going to be using both of us. I spend a lot of my time trying to make that easier for customers, easier for NetApp solutions architects and partners to create the outcomes and the stuff with the AI that's come out, aide and all this stuff.
I hear you data center guys have been busy the last few years.
We have, you know, I also like to keep it real, and you know, when you look at a customer buying journey for AI in the enterprise, the first thing they got to deal with, as you well know, is the data.
Yeah.
If we don't have good discussions with customers up front when they're doing those first use cases that aren't necessarily transformative, they're building their muscle. That's when we really need to have these conversations with customers about, hey, you need to get your data hub, your data pipeline taken care of and built for the future. Don't just let it all go into some platform that you're never going to be able to get it out of. Build it in a platform that you can trust with technologies, especially if you've got a partner like NetApp that can bring so much AI value added to that data into the platform. The announcements that were today were just, they were mind blowing in a way. I can't. I mean, I got my hands up a little bit on the AI stuff that was out there.
The fact that you guys have cracked open the cluster and allowed some GPU resources to get in there, that's like, whoa, now that changes architecture and what I can do, even what a factory looks like. Right. Because a factory is not just architecture. It's the culmination of processes that allow you to create these transformational outcomes over and over and over again. Those enterprises that get to do it and get to learn that, they're the ones who are going to win. They're the ones who are going to win.
Absolutely. One of the things I was talking about earlier was it's great that we've got the rack and we've got this amazing NetApp AFX stack, but I'm mostly excited to see what other people do with it. Some of our technology partners and some of our bigger customers that are managed service providers or some of our very large financial institutions, movie studios, things like that, some of the stuff that they're going to be able to take advantage of with this kind of a foundation, I'm dying to see what they're going to do with it.
Yeah, the foundation. It's important that you remember that word because every one of those verticals that you mentioned use AI differently. It's not just all chatbots. It's not just all LLMs and, you know, Burton coding and, you know, embedding, and so, you know, everyone's got, and you look at Hugging Face and just the sheer number of models that are out there. There is still a place for fine tuning, there is still a place for training in enterprises. Not everyone is going to be doing RAG as everybody's talking about. People tend to over rotate to what's simple and understandable. Equinix just did a survey in Europe and 70% of the respondents said that they understood how AI worked. I'm like, really? I'm not sure they know what that term means.
Go to ChatGPT.
Yeah, and it's like, this stuff is not, it's still hard in as much that companies like NetApp can simplify what can be simplified. I think it's just giving that platform that customers can use with confidence, that can scale to lots of different use cases, not just the, what I call the augmentative ones that help us create content better but don't help the shareholders do as much. We're talking about these transformative outcomes that we're going to start seeing over the next, I'd say, year to three years. The enterprises are really going to start getting it because they develop the muscles, they know it works, they know it doesn't. I think in this next 1-3 years you're going to see amazing things happen. Now that people know, hey, don't throw your data out, you got to keep it.
Right.
That is why.
Which is great for a storage company.
It's also great for a data center company.
True, true.
By the way, the two of us, the two companies together really provide that value that, you know, I could take that data and move it to a cloud back and forth. By the way, the announcement with Azure NetApp Files and Google Cloud NetApp Volumes, SnapMirror, finally, this is, you know, and between that and things like FlexCache, hopefully are coming someday, killer apps for moving data around and then the NEO clouds, which are also our partners, the Galaxy, to get the data to those guys too, because that's. You're going to do training, perhaps doing mass inference. Data's got to go everywhere.
Yeah.
By the way, you have no idea how you're going to be using AI in three years. No idea. You better keep your options open. That means you need an architecture that's going to keep your options open. There have been NetApp customers going on that I know since I started selling NetApp as a partner back in 1997, that are still on that platform and they get it. NetApp has really carried that through on the same platform, still ONTAP. Still the core tenets of that.
Right.
Data Motion is the DNA of the company. I think that's going to be like AI makes it 100x as important as it was. I think it's going to be a lot of fun to have. The trick is make it consumable, make it easy. I'm looking forward to service providers creating outcomes out of this and it's just going to be a lot of fun.
Do you see this being a big uptick with things like Keystone? Do you see people doing sort of storage as-a-service kind of solutions with these? Is this an investment that you're making as a company to be able to house this thing yourself or in a colo, for example?
It's a very good question. It really depends on how a company defines sovereignty.
Yeah, that's a good point.
I like to define it as you have to have at least one copy of your data on equipment you control, in locations you can access.
Yes.
That sounds like Netflix, right? If you have that one copy, then you know you can always audit it. If something happens to that, you can get down to the firmware level and that copy. If you're caching out to these other places, by the way, the IDE stuff, having governance move and the cyber resilience policies move with the data to these other NetApp platforms in the other places, that's just, that's really cool.
Or the metadata blobs moving independently of the data, that's being able to put the metadata at the edge. There are so many use cases for this, guys, we're not going to be able to get into all of them. I can't wait to have all of the guys on the show to do like one hour deep dives on them. All of these use cases, we're going to have them documented, we're going to have solution architecture, architectures built for them.
NetApp has always been best at that.
It's one of the things we've really taken a lot of pride in over the decades at this point, being able to have all that and working with some of our partners like Equinix, any of the other providers that we have out there to do these kind of co-opted solutions so that when someone shows up at your door you know exactly how to tell them to run NetApp storage in Equinix and vice versa, and we know where to send people. What other big news is Equinix got going on before I let you get out of here?
I'll give.
One plug.
There's a new podcast that Equinix has called Interconnected. I happen to be one of the three hosts.
Nice.
It's not salesy at all. It's not really about Equinix, it's about the industry, and we talk about blockchain, talk about AI, we talk about in agribusiness, you know, it and agriculture and how it's being used and all these different things. It's really interesting. I learned a ton about the world and how they're using all these great technologies.
Yeah.
That's a plug for that. We've just announced a whole bunch of stuff. We had an AI summit late September where we announced all sorts of cool things on the interconnectivity that AI is going to need with things like networks that connect to all of the inference engines, how we work with NVIDIA. We'll also talk about our own fabric intelligence, how we are going to, you know, kind of identifying our own interconnection. You're saying you want to go, go chat to something like, hey, I want to connect to AWS in these two locations. I want to connect to Zoom in this one and WebEx to this one. I want to type that and I want it done. I don't want to have to go to a portal or even call my own API calls. I just want it done.
That's the kind of stuff that you're going to see out of Equinix.
Are you seeing customers come to Equinix? Not sure what they want, but they can write a prompt and just so could they. Could a prompt be delivered to Equinix that would stand up some kind of virtual infrastructure in a, in some managed.
For the most part, for now, you get the design and a quote, but then it can be executed. It's kind of the next step. It gives you something executable, let's put it that way.
Right.
We're not stopping there. I mean, we got a lot of stuff that's coming out, all MCP.
Yeah.
Capable. Right. That's the kind of the fun stuff, right.
How do we modernize billable, composable infrastructure in that way? Using some agents like that that you guys have on your front end, customers can just come in and go, I want this for X number of days to run this workload.
We're also providing kind of the building blocks that service providers and customers are going to need to succeed here. Just like you guys are giving the building blocks, we're not going to create the outcomes, but we're going to give you the ability to create those outcomes. NetApp has been part of that, been a big partner for a long time.
Yeah, Glenn, thank you so much for stopping by, man. It's always a pleasure, my friend.
Thanks for having me on.
All right, take care. All right, moving right along. We can't not have a NetApp on air snapshot episode because he made a guest appearance last year. We've got to have the backup ninja. We've got to talk about Mr. Kevin Hascot. Kevin, always good to see you.
Always.
What's going on in your world? They kind of threw you up here cold. I don't know what to talk to you about other than backup stuff. What's going on in backup world, man.
Realize, I mean, the world of Cyrus, it's just wild.
Yeah.
Every single minute, there's always something. There's always, you know, look at the news. Whether we're talking about ransomware, we're talking about bad actors, we're talking about data deletion, data exfiltration, breaches, all of this.
I was just talking with Jason a few minutes ago about I want to build a network, network engineer agent set that can monitor all of my switching stack and, you know, do packet capture, do troubleshooting, alert me when something goes wrong. Maybe I can tie it into Splunk and it can sort through my logs of everything. I want an agent doing all that, but that can go wrong real quick. It could when it becomes ransomware, doing that kind of stuff, when you have malware being pushed by a similar stack of agents like that. I wrote this down. Literally you can see it right there when George was saying in the keynote, most deployed, most advanced, most secure operating system in ONTAP 100. I completely agree with that sentiment.
You mentioned the ransomware side of it, and let's dive into that because that's where is ARP currently? How are we doing things around this autonomous ransomware protection? How has that helped? Has that saved customers at this point that you can.
Absolutely has. In fact, we've had it where we had a conversation with the customer one month, they implemented the next month. As scary as it sounds, it was quite literally a matter of inside three months and they experienced an attack. ARP fired off, and it protected the workload. They were able to recover from it as well. This is real, it truly happens. All it takes is somebody to click the link. We all know this. You've got to have those things.
In place, appreciate enough. If you take your snapshots often enough on that hourly, if you take your 24 hourly snapshots every day of each of your volumes, you literally might have an hour of corrupt data. You recover back to that last one and then roll anything else forward you might need. That's a lot better than being down for, I don't know, six months trying to recover, like we've seen some customers have to.
Deal with, average downtime in an attack is north of 21 days. Yeah, I mean we see customers that they wind up going out of business. We saw a 175-year-old company go out of business because of ransomware. They could not recover, they could not afford it.
Wow.
This is real. The idea of being able to not just identify an action, but take action, minimize that blast radius right there in a matter of seconds, you now have a restore point that you don't really have anything to worry about because you know that the data inside there is going to be good and you may have three, five, you know, a few files that need to be recovered or cleaned.
I want to harp on the fact that the snapshots have been around since 1992, day one. I mean it's just the mechanism by which this AI intelligence that's been built around this, it uses everything else, uses snapshots as well. Imagine that in ONTAP. The foundation, it's the foundation of basically everything in ONTAP.
They're immutable by default. Default?
Yeah.
The fact that you can't crack it open, make a change, we could delete them back in the day.
Yeah.
So yeah.
What if we just create tamper-proof snapshots now? You can't delete them.
You can architect your volumes to be as immutable, as tamper-proof as anything as you want, including the snapshots within them as well. All of this is by design. The snapshots themselves are not the ARP, right. ARP, AI, right. The AI that's built around the ransomware protection is the thing that scans for behavioral changes in order to trigger a snapshot in a just-in-case moment.
Oh, absolutely.
Looking at things like headers, looking at extensions.
We're going to look at data that hasn't been written to in six months, all of a sudden has a false write, right?
New extensions or files being deleted, or you look at, okay, now we're going to start encrypting files. Or that entropy, if you will, the randomness of the file or how compressible that file is. You see something, take a snapshot.
Yeah, but you can't get that unless you turn on the NetApp ONTAP Autonomous Ransomware Protection on your volumes.
It is not just file, it is also block workloads too.
We have block workloads on block now.
It does when you think about it. The only data infrastructure, data intelligent data infrastructure company that has all of this built in, not bolted on by the way.
It's free.
It is included. That is true. All you have to do is turn it on.
Yeah, you got it. You literally go into your volume settings and check a box.
It doesn't even have to learn anymore.
What?
Yep.
One needs to be like 30 days. You have to.
Back in the day, you had that whole learning mode.
Yeah.
Now you turn on FlexVols 9:16 1, boom, it's ready to go overhead. Not a whole lot of overhead either. Really, why aren't we turning this on? We need this turned on for all of our workloads.
Let's switch over to backup. Your favorite topic. We talked about recovering from an incident like that, and we know how to recover. We store a snapshot, roll your logs forward, all that stuff. What does the backup space look like right now? I want to tee up our next guest just a little bit. When we think about on-prem backup using some of our tools like SnapCenter or the evolution of where we've come from, from the old SnapManager stack, where are things at in the landscape of backup? A lot of people have depended on partners like Commvault and Zerto, and the old ones like Symantec and those guys. I look at it as, where do Veeam is another good one.
Oh, absolutely.
I don't leave our friends at Veeam out. Where is the backup landscape today in your opinion? You know, in the backup ninjas mindset, how are people doing with backup today?
They.
Everybody has backup. It doesn't matter how large or small the company is. Every single business out there has some form of backup. It's just, it's table stakes. The irony in it is the majority of businesses don't realize how vulnerable backups are. In fact, the backup environment as a whole has a giant target on its back. No pun intended. There's a giant target on it because that's one of the first things that actors are looking to get rid of. You've got to harden your backups, you've got to make them immutable and indelible.
When you look at what we do with the backup and recovery space, being able to take all of this, leveraging native tools, native replication, taking that from production and launching that into Object Store, being able to lock that down for that nasty day that you need to restore, it's all simple and orchestrated, and the fact that you can automate that makes it even better. What we're doing with our partners is even more exciting because the idea our partners have the ability to take advantage of that replication tool as well. When you look at the Commvaults and the VMs, the Rubriks, they have the ability to take advantage of these features and functions that we have built into the operating system to make them more efficient, more resilient, and more powerful.
All of the cataloging functionality that they've always been known for, and the workflows that they use and all of that sort of stuff, really just having being able to tap into the ONTAP side of things makes that even more powerful and useful. It's a win-win for both of them, honestly.
100%.
Yeah, Kevin, thanks so much for stopping by, my friend.
Truly a pleasure.
Appreciate it.
Always good to see you, Nick.
Always love seeing the backup ninja. Always. All right guys, we got one more for you and then we're going to get out of here. We got some more stuff to go cover, but we will be back tomorrow, so make sure you have it on your calendar. Same bat time, same bat channel. 8:30 A.M. Pacific with the kickoff show. We'll have the keynote at 9:00 A.M. and we'll be back here as soon as the keynote finishes with another round of awesome interviews and coverage here on On Air Snapshot. My final guest of today is one I saved for last. I kind of set him up with Kevin, but Mr. Glenn Sizemore. How you doing, sir?
Hey, Nick, how you been?
I am doing great. For those of you that don't know, Glenn and I go way back. We did the, we alluded to it earlier with Justin before you got here. Justin has continued to run the tech on tech podcast for the last eight years now. I think it is.
Oh yeah.
Just got him to start doing video. Glenn and I did the podcast 10-plus years ago, at 12 years ago I think at this point. We're starting to look the same now. It's kind of, anyway, I'm trying to catch up with you. You're getting there. You're pretty damn close. Glenn, I wanted to bring you on today, especially after talking, kind of waxing poetic about backup with Backup Ninja. I know you've got a product that you're promoting or that you're leading here at NetApp. Is it still referred to affectionately as backup as a service or doctor as a service, or how are you calling it?
Just NetApp disaster recovery. Okay, nice and easy, right? It's self explanatory. What does it do? It provides disaster recovery for your workloads running on top of NetApp storage.
Easy.
Yeah, yeah.
When we get into this, this is more than just SnapMirror at the end of the day, right?
Correct.
Yeah.
Break it down for me. What are the different layers of it? Because obviously you've got to take your snaps, but then you've got to mirror it somewhere else.
Yeah, yeah.
The way that I like to explain it to a customer is when you establish a SnapMirror relationship between two ONTAP controllers, you're 80% of the way towards disaster recovery because you've done the hardest part. You've got persistence. Right. The data layer is secure, but you're not done yet because if you want to actually use that data, if you want to go and take a couple thousand virtual machines, pick them up, move them 3,000 miles, set them back down, power them on, and have everything working in five or 10 minutes, there's a whole bunch of extra orchestration that's needed.
Right.
That is what NetApp disaster recovery does.
Okay. Recovery manager sort of situation where you're handling all of the VM registration and that kind of stuff.
Yeah, exactly.
Yes.
If you're familiar with Site Recovery Manager from Broadcom, right, this is a NetApp only version of that.
Okay, Right.
It only works with your ONTAP storage. We currently only support VMware environments, although we are expanding to alternative workloads.
Watch that space.
Watch that space. The core service today gives you that easy button to establish protection between two VMware environments and then do disaster recovery failover, fail back migrations, everything you would expect.
Does it have the testing facilities to be able to do that? Because that's the thing that people always leave out, they don't actually test it.
Honestly, like you know, in the backup space we like to say you're only as good as your latest restore.
Right.
Honestly, I think that in the disaster recovery space you're only as ready as your last test.
Yeah, right.
You very rarely actually trigger a true disaster failover, but there's no excuse not to execute testing.
Yeah, right.
We do give the customers the ability to do non-disruptive testing. Uses FlexClone under the covers. Your prime, your production source site, no impact, nothing gets powered off, nothing gets, you know, impacted in any way, shape or form. We just FlexClone the SnapMirror destination, bring up a writable copy of the data, mount the copy, boot all your VMs, test the environment, make sure it's working, and then once the test is done, we tear it all back down and it cleans up as though we were never there.
Right.
We have customers that do this. You can schedule testing through the product. We have customers that do this every day, just every single day. They do a full up doctor test and validate that they are ready for a failover if they need to.
Kids today will never understand what they have. What we went through in the early to mid 2000s to do some of this stuff with physical servers to be able to do this doctor kind of testing between data centers, especially inter regionally, or cross country or, God forbid, internationally. Some of the stuff that you have at your disposal today, please don't take it for granted. What Glenn just described would be a three month effort 20 years ago to do some of that stuff. The fact that you can now do it every day is huge. Some people doing it every hour. Is it a constant automated test that's going on almost like a heartbeat or a pulse kind of thing? Are we at the point where you almost get it real time yet?
It's not quite a chaos monkey situation.
Right. It's a scheduler.
Right.
You just tell it how often you want to set that. You configure the schedule, you tell it how often you want it to execute a test and the parameters of the test, and then the scheduler just takes it from there and you just monitor the results. Once you configure it, you just sit back and you wait for the reports that tell you everything's working or oh no, we've got a problem over here, let's go take a look at it. Maybe we're not ready for a failover.
What are some of the things? We've established that it kind of does SRM-like sort of facilities. What are some of the things people might not be thinking about? How does it tie into the other software or console? We can call it that. Now with how we're going to execute on this, how do you configure it, how do you build it? Walk me through what that looks like for an admin. Yeah, first time setup sort of thing.
The beauty of this architecture is everything that's built into NetApp console is all integrated through a single agent.
Right.
If you already have a console agent or what we used to call the BlueXP connector, then you have everything necessary to use this integration. There is no additional integration, no additional software, nothing to install, update, or manage. Once you have the agent deployed, what happens is there's a balance between the SaaS orchestration layer and that local agent running in your on-premises data center. That local agent is the persistence tier for all your sensitive information, your passwords, your sensitive data. All of that gets stored in the agent and never goes to the cloud.
Huh.
The only thing that goes to the cloud is the metadata necessary to manage the plans at a high level.
The things you need to facilitate a recovery, basically.
Exactly, yeah.
The best part about that is it makes that agent a non-persistent layer. Even if the agent gets completely blown away, we just deploy a new one, save the credentials again, and everything lights back up. It's incredibly resilient. Another thing that's beneficial about NetApp console is it's more than just one product.
Right.
It's a bunch of products baked into one. Inside the protection space, we have the ability to do 3-2-1 backups, right from the NetApp console. We can do enhanced ransomware resilience with our ransomware readiness service. Of course, we've got your disaster recovery needs covered with DR, right. There's a whole bunch of capabilities baked in with that single install.
If you take all of the pieces of Console and put them all together, it really is a sort of all-in-one solution for any sort of protection that you might need. It's got your backups, we've still got SnapCenter stuff, we've still got, we've now got disaster recovery resilience or sorry, cyber resilience, I can't remember the name of it. Ransomware resilience, yes, ransomware resilience, that's it. I'm still learning about the newest parts of Console as well. I look at this as this is a sort of all-in-one platform, almost like a store that you can go and choose from different applications and services that you want to use and take advantage of. Do we see third parties coming in and potentially using disaster recovery as an additional add-on to their tools? Have we gotten to that point or is this purely a customer play?
At this point, it really is mainly a customer play. We do, NetApp Console does have MSP integrations. We can do multi-tenancy, and there's a whole bunch of cool things that are built in there for an MSP. If you're a managed service provider and you want to build on top of NetApp Console, come talk to us, we can help you. Primarily, it's customers, it's end customers who are solving problems in their environment. Like I said, it's closing that 20% gap between what ONTAP already does and getting you to that 100% solution where you have a true end-to-end service.
Right.
To manage that data.
Gotcha. Glenn, anything else about disaster recovery or console you want to share? What are some nuggets? What is one thing people might not think about? What are the things, what are the blind spots people typically have when they think about disaster recovery or console?
For us, honestly, for disaster recovery, the biggest blind spot is cost.
Right.
Disaster recovery is a solved technology. This is not an innovative space. Just being honest.
Yeah, yeah.
Right.
We figured out how to do this about 15 years ago.
Right.
Now it's all about how much does it cost for you to do it. Can you control the cost associated with disaster recovery? That's what NetApp console and NetApp disaster recovery does for you. Honestly, we're about 80% cheaper than most competitive solutions.
Wow.
It's NetApp only. This is not a heterogeneous solution. We're not out there saying we do DR for everybody. If you have NetApp on source and destination and you need a cheap way to keep that data in lockstep and do failovers if the worst were to occur and testing to make sure you're ready for disasters, that's where we step in. That's where NetApp Console fills the gaps.
Wow.
Extremely cost effective, easy to use, built into console. You know, currently VMware only from what I understand, from what you said. Yep. ONTAP only, like if you're an ONTAP customer and you're using all of these technologies, it's almost a no brainer not to. That's what I'm trying to think of.
Yeah, yeah.
There's no reason not to.
Yeah, Glenn, thank you so much for stopping by, man. It's always great to see you.
Likewise, Nick.
Really appreciate it.
Take.
Catch you next time.
Yep.
All right, guys, let's bring it home. Last but certainly not least. Did I bring my—no. Okay. I want to remind you once more, I gotta give a shout out to Wayne and team once more about the new hardware universe poster. You can get them here. They are available over in the petting zoo. If you're here at the show, they are free. You don't have to pay for them. If you're online and watching from home, you can get them in the NetApp gear store. I think it's netapp.gearstore.com is the site and I think they're a dollar and they'll ship it to you. Thanks so much for joining us today. This has been a monumental day. All of this stuff that you heard about today around FSx for ONTAP and other things have been coming for months.
This is stuff that are years in the making at this point. I'm so excited that we got to hang out with you guys today before and after the keynote, go over some of this stuff. Make sure you subscribe to the YouTube channel here if you're watching because we have some big, big episodes coming up where we're going to take some deeper, deeper dives into both ONTAP, the hardware platforms, the NVIDIA integration with DGX, all kinds of stuff. We'll have some people from Cisco talking about that, switching some of the things Gebhart and I were talking about, and much, much more. We will be back tomorrow, same time, 8:30 A.M. Keynote will start at 9:00 A.M. and we'll be back with On Air Snapshot as soon as the keynote is over. Thank you guys for watching.
That's going to be it for us here at day one at NetApp Insight 2025. Make sure you go join the Discord at netappdiscord.com and we'll see you guys tomorrow morning. Take care.