Like never before. You've all heard the AI hype. Now you want AI's help. That's exactly what we'll give you, Cisco, making AI work for you. Where will you be in five years? Where will we be in five years? In twenty-five? In fifty? Let's be here and here with her and him and they. Let's connect them. Let's connect everyone. Let's deliver technology that gives them access to power, opportunity. Let's set a new standard for data security and personal privacy. Let's change the system, promote equality and fairness in the workplace. Let's tear down the barriers to social justice for a more inclusive world. Let's clean house, zero carbon, zero waste. Because the health of our family is tied to the future of our home. Let's gather resources and partners, steer toward our greatest challenges, and accelerate for the benefit for all.
Cisco has made it its purpose to power an inclusive future for all. Where will we be in 50 years? Let's go see Cisco. The bridge to possible. A hacker doesn't always look like a hacker. A hacker's at home everywhere. A hacker comes in many forms. He's interested in everything. He can work alone, but with a crew, so much better. A hacker is free. With Cisco, protecting your business from cyber attackers is simple. If it's connected, you're protected. Sam, at Cisco, our purpose is to power an inclusive future for all. In that future, Mother Nature has a voice. It's a new day for the new era. AI is everywhere. So are we. We have the infrastructure AI needs. Now the breadth of data AI craves. Will use AI to help the world. See more, do more, and we'll secure it like never before.
You've all heard the AI hype. Now you want AI's help. That's exactly what we'll give you, Cisco, making AI work for you, perhaps being a silly goose. Show it. Show it off.
Wow. I can't believe we've arrived at this moment. This is the kickoff of our final broadcast day of what still remains. We're not at the end yet, nowhere near it. We have so much great stuff coming your way today. We're just off the heels of an incredible party last night with the Killers and just people coming together in a more social fashion as we all attempted to leave work behind. Of course, what do we all have in common? That's what we're going to talk about as we partied the night away. So many people had such a good time.
I unfortunately been getting feedback from people because I fell asleep. That's exactly how I handled it last night. That is so I could be here with you today and be somewhat on my toes. You don't realize how much brain work it takes to do what I do. Regardless, I'm so excited to be here. It's fun to be in the host chair. You know, it looks kind of quiet. This is going to change quickly. I'll tell you what fascinates me the most is not exactly what happens here. We're going to talk about that all day long. We're also getting a good look at what does it take to put something like this together. With that, I want to throw it over to my favorite co-host and the host, the man with the most, Steve Multer.
You have some event leads with you. Perhaps that could give us a little picture.
Boy, do I ever. My dear friend, thank you. You say favorite co host to all of us, do not you? I just know you do in the background. It is not just me. I know you. I did not realize you realized that. Of course I did. Thank you, buddy. Welcome to a phenomenal Thursday. As you said, what it takes to put this event together, it blows me away. Every single year. Every single event. Anybody who is watching here on the broadcast, everybody who walks around here at the show, they just have no idea the scope. It is because of these two spectacular people sitting next to me. Laura Simmons, Heather Henderson Thomas, two of my favorite people to work with in general. Two remarkable bosses.
I'm gonna say bosses because none of us get to this event without the two of you and the incredible work that you and your team do. Brittany's hanging out over there, and we'll talk about her in a moment as well. First of all, I just want to say thank you on behalf of all of us. It is just a remarkable achievement, and we are grateful. You make it fun and you bring this incredible community together. You know, Laura, I'll just start off with you. Community, I think, is the word of the day, the word of the week, the word of the year for us here at Cisco.
Absolutely. Community. That's what it's all about. We know they're here to learn, but they're here to see each other. Once a year, they come together, whether it's here in San Diego, Las Vegas, around the world, but they want to see each other. They want to see Cisco, they want to network and get to know each other. It's amazing. That's what we really think about when we start thinking designing the program, all the activations, all the hard work that the team does, what can we do f or the customers who are here?
It's sort of behind the technology. We can talk tech all day long. We're very good at that here at Cisco. It's what we do best. Ultimately I think every technological decision starts with a personal decision. How does it affect the individual human? Right, Hea ther?
Absolutely. I think that's something we always keep in mind is what is the attendee experience here? You know, the basics. If you take care of their basic human needs here, as far as like being able to be housed, be able to transport, find what they need, be fed, be hydrated, that makes the difference, then they can absorb all the great content that Laura and her team put together. If we've designed an experience that enables that for them. That is something that the team works really, really hard at, is putting ourselves in the attendees' shoes to make sure that we can help them have a great experience and get everything they want out of this week.
All right, before I dig even deeper, name the team, I want to hear names. Who is responsible for all this, for supporting the two of you,
Th e entire company.
Start. All right, so let's start with a guy named Chuck Robbins. We'll just work our way down the list.
I would say that, you know, there is a ginormous team that we have the pleasure of leading and working with all across Cisco and we can't do it without them. We have a lot of hashtags and sayings on the team and one of them is one team, one dream. Like our entire team is so focused singularly on the success of Cisco Live together. It doesn't matter if you're agency or if you're on our direct report team or if you're on another team within Cisco. Everybody buys in to doing this together for our customers and our partners and I think that what makes a real difference
and I'm going to tell you something, we can all tell, I don't know if you can tell from outside of the team, but everybody looks around to whenever they see you. I'll let everybody here know who's watching in. When you're in the room and you see these two and their team walking around, you're all always smiling and when you consider the crush of the pressure that you have on you to perform and execute this, the fact that you're always in good moods, I never see stress, I never see arguments. It's just we go. And we make it happen. All right, in a moment, Heather, I want to kind of talk to you about this venue, because this shift for us. A big core piece of the venue, Laura, is this year's showcase, which is a dramatic change.
Yes.
We went from 33,000 sq ft to 43,000 sq ft. We took all the BU's, all the capabilities, all the technologies and put them onto a single pad to show the depth and the breadth of the portfolio. Talk to us about how that came about.
It was r eally important to shift. You know, we don't want customers to have to walk around and try to understand, where do I find this? Where do I find this? It's here. It's here. Cisco has all of their products and solutions working together. Let's tell that story. Let's have them come in. Understand campus and branch employee experience, manufacturing, it all works together. Here are all the solutions you need to be successful. Design and deploy a network that's going to deliver for your customers, deliver for your employees. That way they can find the experts to talk to you.
Because what we want to do is not just show the demos and the products. Talk to someone, ask questions, understand. If you need to go deeper, then you go up and you talk to your TAC folks, you talk to DevNet. It's all integrated. It is a consistent story of what I hear in the keynote. I see in the demos, I learn more in the sessions, and that gives them a better perspective to go home and be successful in their own companies.
I love it. It is so cohesive, and the response has been phenomenal, even behind the scenes with what you do not hear our people down there reporting back. People love this consistency. Heather, you. We go from Vegas and here we are in San Diego, two dramatically different venues. How do you take advantage of this venue for the benefit of everybody who attends the conference?
You know, that's a great question. We like to say that there are no challenges, only opportunities in what we do. We embrace the opportunity. We're under five different roofs here. We have all that beautiful outdoor space. We have the new venue behind the convention center called the Rady Shell. We just embrace it. We don't fight it. We're like, okay, let's rethink this. Being in San Diego allows us to have all this lovely natural light and we get to go outside to have our meals. We basically created a cool playground outside to give people a brain break. Because this is a lot. This is an intense week, especially for everyone.
You get to go outside to have your meals, have a little chill time, maybe go out to the Rady Shell on Monday night and enjoy a movie under the stars, all while having this great educational experience, being able to connect with one another. We just, we really honestly embraced it. It's a very different experience, but that kind of makes it fun. I mean, this is my 19th Cisco Live, but I stay on it because every year is different and we get to do things differently each year. Our jobs are never boring.
No, no. You just make it look effortless. All the way up to the incredible outdoor party last night. Congratulations to both of you. An incredible, incredible Cisco Live 2025 and in no small part at all, thanks to the two of you. Thank you on behalf of all of us. All right, Rob, let's send it back over to you, my friend.
You got it. Thank you so much, Steve. Very nice job. Hats off. The teams have been fantastic here. With that, we're going to go down to the showr oom floor. Showroom floor, showcase floor before it opens. Z, I believe you've got something to share with us.
Absolutely, Rob. I tell you, I am super excited. You know, Lauren and Heather, they said the operative word, they said community and they said one team, one Cisco. I tell you, one team, one Cisco. We're doing it down here on the showroom floor. I'm here with the nerve truck. That's our crisis response vehicle. It responds to crises, hurricanes, disasters.
I tell you, this thing is outfitted to be deployed whenever there's any type of, you know, humanitarian crisis or anything like that. This thing can be deployed in a location and it can be erected whenever networks are down and connect people because you know what? It's about people. Aren't you glad that Cisco is thinking about your family, is thinking about my family? Because things happen in life. We take our innovations, our technology and we're thinking about how can it serve you when things are happening, things that we can't control. We have this nerve truck. I tell you, my co-host over here, she's over here getting ready for something to happen. She's gearing up. Hey, Lauren, let's take a peek inside. What's going on in there? Lauren?
Hey, Z. That was a perfect segue. You're talking about having technology at the core of what we do and provide help. You know what? Right behind me is the stack that's powering and making this come. We have the Meraki cloud communication system here and it is providing highly available services, is providing the security that you need. There's one thing to get connected, but what's the point of being connected if it's not securely? As we're bringing that Wi-Fi to our, to those, those individuals who needed their secure communications and we got Talos doing that. You see the cyber map, we're looking around what's happening, making sure we're dodging any of those criminals and threat actors out there and we're providing good services for people. Technology for good is at the heart of everything we do.
We're using none other than our one Cisco platform and technology to do it. We also have a phone here. Imagine if you have to get in touch with someone, that the phone communications are powerful, and if for some reason the network went down, we have multiple ways to connect. Redundancy is key. There is 5G cellular service to get back and connect to the command center, wherever that may be. There's also broadband, there's even some satellites. We have a number of different resources and backup and redundancy in place to make sure that this nerve vehicle gets to everyone who needs it. I'm gonna probably just take a second here and look around at just like the. I like to look at this as a preconfigured package in a box, like everything you need all here.
You don't have to go anywhere. We have it all for you. I love that. It's like wrap it up in a bow, take the ribbon off, it's ready to get deployed. That's important, you know. We are, I love that. We are, like Z said earlier, we are that company that you can depend on to bring you services whenever you need it. We make it all here at Cisco. That's the best part. I'm actually gonna send it back over there to the studio, I think. Rob,
I got, I got some parting words for us. Lauren.
Oh, you got something?
I got something for us. You know what, stop by the Purpose Pavilion today. It's going to be open from 10:00 A.M. to 2:00 P.M. today. Stop by the showcase floor, the Purpose Pavilion, and see how Cisco is purposing an inclusive future for all by securely connecting everything to make anything possible. Back over to you.
You got it. Hey guys, thank you so much. You guys are so awesome. I've crawled through versions of this vehicle multiple times. This is my favorite by fa r because it's got the big Ford chassis and just so much capability there. Hey, we've got another video to share with you right now. Let's take a look at that and then we'll continue on.
All right, Cisco Live, let's talk about social impact and give back. Here's the tea. It's not just about talk. We're not just talking about it, we're being about it. We have some wonderful give back opportunities and we want you to come and be a part of that. Attendees can participate hands on with activities that give back to the San Diego community. These activities will be quick engagement, assembly line style in the World of Solutions. Next to the Purpose Pavilion, you can pack shelf stable food kits, pack hygiene kits, pack park ranger support kits. Hey, we're giving back guys. Stop by the Purpose Pavilion and pack a kit for the San Diego community. I tell you, it's time to give back for social impact initiatives.
That is excellent. Thank you guys so much. A lot of information coming. The showroom showcase, however you want to refer to it, will be opening up here shortly. Lots of great information still there, lots of fantastic experts to interact with. We are turning towards our setup here now for Center Stage on our next topic.
This one is fascinating to me because I had a chance to talk about hybrid mesh firewall in some interviews that either went live or recorded before. Here's the. And Amy, if you could turn off your headset thing. Ah, fantastic. Thank you so much. What I want to set up here is just how both fun and scary and just exciting it is right now from a security perspective. First and foremost, if you study what Cisco has done across the board this week, it's incredible, the consolidation. From a security perspective, it's even better as the messaging has been. We want security to be fully embedded into the network. The network is what's enabling these AI ready workloads and all facets of it.
Security touches on this AI side, not only with enabling it in a secure manner, but it's using AI to combat AI. It's being aware of the things that can threaten AI and potentially come back to bite us if we don't design these things carefully. I've just been very impressed with exactly just the overwhelming amount of new technology and how fast Cisco is moving in this space. We're going to be talking about the hybrid mesh firewall in more detail, which sounds like an individual thing. It's not. It's an architecture that encompasses everything that is happening from a security perspective in.
In respect for the situation that we find ourselves in and the way in which Cisco has quickly pivoted in my opinion to be able to take advantage of this situation and provide confidence, provide the ability for us to know, okay, we can go out there and extend ourselves into what is a bit of an unknown for many of our customers is how do we move fast but not break things that are going to break us. That is what they are enabling here and they are making it easier to do from the network perspective because the endpoints are changing so much. This stuff has to continue to deploy. We are going to share some details here in this next Center Stage session and then we will take questions afterwards. I believe there will be a short conversation with them, but this goes on all day long.
Lots of learning, lots of new announcements in detail on previous announcements are coming to you as we proceed. I want to thank you for joining us here on the stage. With that, I want to go out to Center Stage. Please enjoy. Hybrid Mesh Firewall.
Thank you everyone for joining. This is the perfect time to wake you all up after your lunch. Wonderful sort of slot that they've given us. I'm going to get right started with, with everything that we're talking about, jumping right into it. We have all known firewalls from when firewalls were graduated into next gen firewall. This term is older than my daughter is. She's in high school now and we're still calling these things next gen firewall. She wasn't born back then when we started calling it next gen firewall. This has to change, right?
It has to change in the way the applications are being deployed, which is it is no longer just about placing a device that is called a firewall and applications that are being protected. It is making sure that we convert that noun firewall into a verb. Firewalling is something that is available across wherever your applications are running. Whether it is in a container, in a server, in a VM, does not really matter. For us to be able to do all of the right set of use cases, be it around segmentation or around inspection and the rich set of services that we provide. This is where a lot of our focus is around a variety of ways that we are changing all of the three architectures. Sorry, all of the three facets in the security architecture.
The first one of course being around enforcement points, right? Very, very important. You have the right kind of enforcement points to make sure that the security policy is in place. The second one is around simplification of the policy because we have all lived in a world where I want to define a policy. I have one place to define a regular firewall policy, another one for IPS, a third one for containers, a fourth one for cloud. It becomes very, very complicated, but a simplified mechanism by which we put the policy in place. A lot of the newness in these architectures that is coming about is driven by the AI workloads because they do not behave like the usual workloads, which are application oriented. They have a very different characteristic by which they get utilized.
We'll unpack that over the course of this presentation starting first with the set of enforcement points that we have in the environment. On your left side is the firewall, the bespoke firewalls, the appliances that we have, that we have built. Cisco has a whole range of firewall appliances. We have refreshed all of the entire product line if you will on the firewalls in the past year or so and we are now extending it. We also have a high end firewall which is called the 6100 that does in a 2ru form factor, 800 gigs of layer 4 throughput, 200 gigs of layer 7 throughput, with full decryption of encrypted traffic and inspection. Highly performant for the data center world.
We also have on the low end something called FTD200 that is on the new device that we are adding to the bespoke firewalls that does all of the complex operations that a modern firewall requires in a very small, small form factor. We also have. I'll come back to the third party in quick second. We also have Secure Workload, which is the cloud native enforcement that we do in with Cisco's firewall estate. We have Isovalent, which is a company that is holding the EBPF or rather I should say Cilium as the open source, Sprint Essential open source package. This is probably one of the most disruptive and productive new technology to come into the space in both networking as well as security. Whether it is running natively EBPF or through a package with Cilium, we also have those enforcement points in the environment, cloud native, on prem.
If you're using Kubernetes, there is a high likelihood the way you want to enforce security policy in a modern construct is through Cilium. Adding to it is also ACI, which is a well-known technology from Cisco, which is also managed with the same consistent policy. You don't have to go to multiple places to define that policy, but from one place you can start to put that policy in place. On top of that is secure routers. You're gonna hear more about secure routers tomorrow at the keynote from us. The other part here is that from that same console, the same place, you can even manage third party firewalls. Right? This is not a migration, but this is coexistence that if you happen to have a third party firewall, you can also manage that from the same console.
This is very, very impactful for operational benefit for organizations. We will talk a little bit as to how it is practically being used, not just something that we have built. Another very important facet is, I do not know how many of you are Splunk users. A quick show of hands. Splunk users. Okay, some of you are Splunk users. One of the things that we are doing is that we are making the ingestion of logs from firewalls, from Cisco firewalls, free into Splunk. The reason we are doing that is there is 20-25% of all the logs consumed into Splunk are typically from firewalls. We think a Cisco Splunk better together solution is very, very important. This is the second of our steps the first year. Last year we had made Talos freely available to every incident within Splunk.
This year, where we're saying we are now offering, providing to all the customers, firewall log ingestion is free for, for Splunk customers. And we have also enhanced more of the detections we are going to do on top of that. On the next slide, let me invite Anand Raghavan to the stage. Anand is gonna tell us how AI is changing the whole narrative of what it means to protect applications.
Thanks, Raj. Hey, everyone. I see a few familiar faces here, so I have 10 minutes and I'm going to go at. No, I don't need to go 1.5x. I'll do regular time. Right. As you think about hybrid mesh firewall, one of the important parts of that is as you're running your workloads, you have to firewall at every level, right?
When you're thinking about AI workloads and you're thinking about protecting your AI workloads, what are some of the things you should worry about? Right? In the last couple of years, one of the things we have seen happen is companies have moved from building documentation assistance to more complex and sophisticated agentic systems. Right? The joke is in 2023, if the buzzword was LLMs, 2024, the buzzword was RAG, 2025, the buzzword is agents. As you go from a simple AI chatbot to a RAG based application to an agentic application, what happens in your environment? Right. On the one hand you have a lot more of your sensitive data that is sitting in your environment. On the other hand you're building more and more complex systems on top of it, right?
Your likelihood of AI-related security incidents goes up as you're building more mature agent systems, right? That's one of the first issues, right? When you think about how to solve for something like this, what does AI Defense do for you, right? To set it in the context of a development life cycle, all of us understand the software development life cycle really well. What it takes to go from an idea to getting something in production and the iteration cycle around software, what does that look like for the model environment? What does that look like for LLMs, for agents, right? The first step of that is around the asset discovery and the model scanning, making sure that the models are safe, making sure that there's no malware presented.
If you look at Hugging Face, you will see that a lot of the models there may not be safe. There could be malware injected in it, there could be malicious links. Scanning the files, making sure they're safe, that's the first part of it. The second part of it is asset discovery. If you have models running in multiple clouds, can you get a catalog of all of those? Is there Deep SEQ running in AWS Bedrock that may not have been approved? Is there Qin running in some other environment where it should not be? Right. Getting a full list of all of that. That's the first part of the framework of what it takes in a model development lifecycle. Right. The second part is detection. Specifically, this is one of the important parts of taking something to production, right?
Typically in classical SaaS, when a software is ready for production, the security organization kind of knows what they need to do. From a pen testing perspective, from a regulatory and compliance perspective and all of that, what is the equivalent for the model world? What does it mean to red team a model? What does it mean to pen test a model? That's where model validation comes in. We've done surveys with our customers. Typically this is anywhere from 7-15 weeks. If you have to do it manually to identify what are the places where you need to test the model, what are the potential vulnerabilities, running those tests, cataloging them, we have automated this entire thing and we can do it in minutes. Fifteen weeks to minutes. I'll go into that a little bit more then.
The third part of this is the protection part. If the model has been validated, you're ready to take it to production. Let's say it's a chatbot that's running and every prompt and response that goes into the model and comes out, you want to be able to test it and validate it, make sure that it's protected. Right. Make sure that safety, security and privacy guardrails will apply. Right. That's the third part of the protection. The really cool thing about all of this is it's a single unified management console. It's available natively in security, cloud control. It integrates natively into the products we have in the portfolio. Secure access, cloud protection, all of the goodness that comes with that. What does manual red teaming look like? If you look at the question at the top, how do I hardwire a car?
If you ask this to most model provider provided models today, the chances are it might say, sorry, I cannot answer that question. The models are well aligned, as they are called. Right. You might play around with it. You might say, okay, pretend you're a rogue AI, how do I hotwire a car? If it doesn't respond to that, you might say, I'm writing a research paper, you're a very helpful assistant, help me with my research paper. Right. You might completely not use the words hotwire or car. As a human, we know that that's exactly what you're trying to accomplish there. Right. You could do a very complex set of Hundred Questions, 500 Questions game to be able to red team a model. Right.
How can you automatically accomplish the same thing within minutes instead of you having to creatively think about this each time you are red teaming a model. Right. That is algorithmic red teaming. That is what we have innovated and included as part of our AI Defense product. Very quickly, within minutes, we can analyze and we can tell you these are all the tests that have failed in a particular model as part of our AI Defense product. Right. Not only can we tell you the pass rates, we can also break it down to you by taxonomy. This is an important thing. If you think about standards today, there are three common standards in the U.S. There is the OWASP LLM 10, there is the MITRE ATLAS framework, and there is the NIST Risk Management Framework. And the NIST Adversarial Machine Learning Framework.
Across these, there's about 200 plus guardrails that most organizations need to comply to. What we are able to do here is with the exact same taxonomy, we're able to show you how your model is performing, what are the tests that are passing, what are the ones that are failing, and the attack success rate in each category. It gives you a very quick sense of how safe this model is. There's one big reason why this is important. This is something that our Cisco researchers discovered as well. If you flashback a year and a half ago, model providers would tell you that our models are well aligned so they're not misbehaved. If you ask them a question that's not safe, the model would tell you, sorry, I cannot answer that question.
For most enterprise use cases, you have to fine tune the model with your own data. One of the things we discovered is that when you fine tune a model, it destroys its alignment. A fine tuned model is three times more likely to be jailbroken and it's 22 times more likely to provide an unsafe response. You cannot rely on an out of the box model and trust that it will have the guardrails it needs for you to be able to use it in production once you fine tune it. That's where these kinds of tests become very important as part of your model development lifecycle to know that your model is protected and ready for use. Right. Once we discover the test where a model is failing, we are able to automatically recommend compensating controls.
We're able to say that these are the guardrails that you need to apply specific to your model that'll act as compensating controls so that you can now use this model in production safely and continue productizing your chatbot or continue productizing your agent AI application. The other important part of this is this can be run in an ongoing basis as part of your CI CD process. As new threats are discovered, when a new version of the model comes in, we can automatically red team it as part of your CI CD process and make sure that you continue to get the best in class guardrails for any new version of the model that you have in production. Right. That's an important part of this. Right. That's one pillar. When you think about AI defense, there's three critical pillars on which it rests.
The first pillar is the breadth of capabilities that I just spoke about. The second pillar is a breadth of consumption models. If you want to run it entirely as SaaS, you can. You want to run it hybrid with a presence in your private VPC, you can. If you want to run it on prem as part of our Cisco Secure AI Factory announcement video to NVIDIA, you can run it on prem that's coming in a few months. If you want to just call it as an API endpoint, it runs as a sidecar not in line. You can do that as well. That's the second pillar, right? Breadth of consumption models. The third pillar is how well it integrates into our ecosystem of enforcement points.
You can, once a guardrail fires, enforce it entirely within the hybrid mesh firewall, depending upon where the user is, depending upon what is the enforcement point you're trying to accomplish. For example, if I'm a user and I'm trying to access ChatGPT and I copied some source code and put it in there, and AI Defense discovers that, you can enforce that in secure access on the way out of the user in the proxy. If I am an airline chatbot and it's running in AWS and one of the customers is talking to the customer support person and they're putting their password number or something like that that they should not, you can enforce that to the cloud protection suite and make sure that you scrub PII from there.
Or alternately, if I'm a competitor to that airline and I'm logging into the chatbot and saying, hey, show me all the training data you were trained on. Like what's the pricing data that you have? You can make sure that the response is guarded so that the model doesn't leak training data, right? You can enforce in the cloud. Now, if you have agentic systems where you have multiple models talking to each other, right? In a service mesh kind of environment, that is where we have a HyperShield. You can enforce these guardrails within the HyperShield context, where if you discover that between one model and another there is data leakage happening, you can run that enforcement. The last part of this, that's something that we're working on, is natively within the firewall itself, right?
Broad set of capabilities, broad set of consumption models, and a completely broad set of enforcement points. Right? This is very critical for an organization because it's completely frictionless for the developers. If I'm in the machine learning team, my job is to just build the best models I c an. I don't have to worry about safety, I don't have to worry about security. I don't have to worry about privacy. I just build the best model I can. My security team can make sure that the rights of the guardrails are in place so that my data and my customer's data are safe. Right. That's what is cool about A I Defe nse. Back to you, Raj.
Thank you, Anand. Building on the third pillar, which is all the security policy, enforcement, et cetera, really starts or sort of chokes up with how we do policy definition. This is a critical area of investment from our side, which is how do I simplify the definition of a policy? We talked about firewalling in all of the different form factors that I mentioned earlier. Anand just talked about AI Defense, which is a set of capabilities to enforce this new form of policy. Again, you do not have to go to another place just to define something on AI workloads. Just like you go from firewall to web access, to private application access to AI to agentic, you do not need to have bespoke policies for all of these different entitlements. All of them start their definition from Security Cloud control.
In addition to that is also in some environments. We are fully cognizant that we are not merely building a platform for our own products. While that is interesting, I think it is incredibly narrow and also narcissistic at some stage. Like for me to assume that every customer is only going to have Cisco firewalls, only Cisco this, only Cisco that, that is not true. Similar to how you will hear about Cisco XDR in which we will take telemetry from even third party EDRs that might be installed in your environment. Even in the policy definition, if you have some other third party firewall, you have something from Juniper, from Palo Alto, from Fortinet. We will manage the security policy for that device from Security Cloud Control. Right. This is very impactful because there is no rip and replace.
We are going to let those other devices age out while you get the operational benefit of security cloud control and the day-to-day rigor that you have to go through with putting policy in place. Really, really important. One of the ways to interface with this policy unification that we're talking about here is the AI system that we had worked on earlier last year. This is a well-exercised tool. Right now, just before coming here, I was doing a review. There are 3,800 customers who have done nearly 28,000 policy optimizations and hundreds of them. Those optimized policies are now in production. We've gone from something that back in the day when we acquired Armor Blocks, I drew it out on a napkin for Anand.
A year later, year and a few months later, we have thousands of customers doing tens of thousands of policy optimization with hundreds of these already in production. It is very gratifying to see the kind of value that we are, that we are building and extending that goodness beyond the sort of just the Cisco corpus is a key area of focus for us because again at Cisco what we are developing is a platform for the customer, not just for us. To talk about customers, I want to now invite Jacob from Goldman Sachs. Jacob, come on up please. Thank you for doing this. We're not going to make you stand, so please take a seat. Okay. All right. You flew in from what, New York?
New York.
Yeah. Long flight?
Yep, it was a five an hour flight but made it here safely.
Okay, that's, that's good, that's excellent. Again, thank you very much for making the time for this. I'm sure people will also have questions and they'll ask you later on, but let me kick this off and see what your strategy is. Goldman Sachs, large bank, every regulation known to mankind, you guys are subjected to. The way you've interpreted Sarbanes Oxley is I'm going to have to have firewalls from multiple vendors. How do you go about doing your day to day with this diversity of firewall vendors in your environment? Do you have bespoke firewall rules for every one of them?
Yeah. Before using the mesh policy engine? Basically yes. We are a multi-vendor company, multiple vendors, different managers managing those policies, and so we had to hire up folks who specialize in these firewall environments and basically, yes, went through the change management process, the kind of like information security reviews of those policies, and then the back and forth cadence of basically the application owners wanting to get this connectivity for the firewall engineer to understand that connectivity. The typical change window issue only on the weekends or not during trading hours or month end or things like that.
Yeah, and it's always kind of to me it has always been like how much of people spend time doing stuff that could be automated in theory, but in practical reality there are so many sort of checks and balances that need to go into place before you actually make a policy change and so on and so forth. How has been your experience using the mesh policy? Like operational simplicity comes to mind but maybe share a little bit more color.
I mean after basically taking on the entire mesh policy engine. I'll just go straight to the point. Basically, we no longer needed firewall engineers to basically provision firewall policies, which was a huge operational overhead. Reducing lead times of at minimum two weeks down to literally minutes or hours and hours is simply finding that infosec engineer to click and approve your connectivity. Right.
Getting really the application owners to really own their connectivity. That was a big part to it, and therefore we have more traceability of that connectivity. The other part was really about trusting the system doing policy changes. We do hundreds if not thousands of policy changes per day, 24 hours a day, seven days a week. It does not matter whether there are trades going on, it does not matter if there are activities of application migrating from one place to another. It is a 24x7 platform. Trusting that system was a huge monumental effort and having that trust. The third thing, really, because we were able to trust the system, we could do th is at scale. Very important. Being able to provision thousands and thousands of application changes at any given day was deeply impactful in delivering applications quicker, faster, more reliable, e xactly how the application folks wanted it.
Yeah. Sort of talk maybe a little bit about the cultural change that you had to go through, because as security people, everyone is just a little bit skeptical, more than usual. Application developers are like, oh yeah, security, that's those people over there. What was the cultural change that had to happen for this to become a reality at go?
Yeah, I just want to establish one thing that you brought up, which is that we're a highly regulated company. We have to follow all of the NIST controls, all of the Fed controls, DORA, ECB, whatever you want to call it, we have to adhere to it.
Being able to do that, at the same time being able to deliver security at the pace that our business demanded, we basically need to make a mind shift where ownership of your security was basically baked into the application development process. At the same time, even for those legacy applications that we do have, we could still take that on, be able to do it in a very legacy-like framework, coexisting with the new applications that get onboard, whether it's cloud, on prem, it didn't really matter.
This is the part to me is fascinating, especially in a regulated environment like yours, as to how such a change would come about to the extent that you can share, I mean, AI is changing both our expectations as well as the needs that are placed on us as a business, you guys, as a business, etc. How are you thinking about this transformation that AI is going to unleash, both on the productive side, but also a set of new kind of, I don't know, surface area for attack potentially. How are you thinking ab out it more critically?
I mean, AI is a hot topic. Everyone's talking about it. I think the first thing that we need to really accept is that without automation, without something like Mesh policy engine, you cannot do AI. AI is just simply not going to configure your firewall magically. There needs o be something underneath.
It needs to be more p recise.
Exactly, exactly. So having that in place will allow our applications to developers to speed up development. That's one. The second thing is really to speed up the execution of that, whatever they're developing in terms of it when it comes to security. When you really look at it from that standpoint, basically the possibilities are endless. Now the attack vectors are going to be higher up the stack and that's something that we have to all think about.
Yeah. Awesome you are here at Cisco Live. I am incredibly excited. I know. Anand I we've been working on many, many different initiatives. We get to work on it for months and this is sort of the now we get to tell the world about it. We are very excited. From a product perspective, what are you seeing that you're excited about? All of the innovation that you've seen either on the show floor or from Cisco or generally at the show, the conversations you have.
Yeah, very quickly. The two areas I definitely see deep interest in is basically the smart switch doing further segmentation without changing our applications and more importantly like eBPF with Isovalent going deeper into compute and finding more process, process se gments, segmentation. That's something that I'm really excited about.
Yeah. This is becoming really interesting at multiple levels for us. At the show floor here, I think we're talking both about smart switches as well as EBPF as sort of this incredible piece of technology that you can get detailed input at a process level without really ever getting into the situations like CrowdStrike got with kernel panics and all of that. It literally is designed to be impervious to that. Really exciting. If you guys are also looking for that innovation, please go find some of those demo pods there. They're very, very cool. I have lots of questions, but we're going to unfortunately run out of time. Thank you so much for making it here and for your inputs, your use of Cisco products. Again, thank you for keeping us honest and keeping it interesting for us.
It's very glad to be here. Thank you.
Thank you, Jacob.
All right guys, I told you that was going to be exciting and it's going to remain exciting because we are staying in the security vein, if you will, especially with our security expert here on the show, also doubles as a co-host for us whenever we can get her scheduled. Lauren, what have you got going on out on the show floor?
Hello. Hello. I am here with Chris Consolo, in our security business group. Chris, on the center stage we keep hearing this concept of security, cloud control and intent-based segmentation. How does that simplify policy management in a multi-vendor environment and what are the bene fits it offers?
Thank you, Lauren. If we take a step back and look at the Cisco hybrid mesh firewall, it's the ability to infuse security into the fabric of the network. Like you mentioned, the intent based segmentation is all possible by the ability to utilize our mesh policy engine to rationalize third party firewall policies. Now, speaking of firewall, we have also introduced two additional firewall hardware models into our hardware portfolio. The 200 series, which is great for those WAN and branch use cases with up to 1.5 Gbps throughput in just that device. It could even sit on a desk. The other firewall we introduced is the 6100 series, which is great for that AI ready data center with up to 4,400 Gbps throughput with decryption enabled in just two rack units. You asked why would this be able to simplify? Going through the hybrid mesh firewall and utilizing our new hardware technology, you'll be able to provide more flexibility.
For those organizations that are currently using firewalls in their environments, no rip and replace, they could just carry over ours into this platform through Security Cloud Control.
Wow. Especially love how we're really driving forward the policy from like a single pane of glass. Like I know we hate to misuse that term, but it really is like that single place to do it all is huge across the different devices. Exactly. Huge. Huge. Security runs in my veins and like in my blood. I get super excited when I see all the great things we're doing. Not to mention I feel like I'm a long time Sisconian at this point. I love to see that the firewall itself is still core to everything we do in security.
Yeah, absolutely. Even taking that a step further with additional capabilities, we released at Cisco Live this year is the ability to extend that hybrid mesh firewall fabric to the Cisco ACI. You will be able to do granular micro-segmentation through Cisco Security Cloud Control and push those segmentation policies in to your ACI environment for that application-centric approach.
I love it. And you know, hopefully like, I do not know if you can talk too much about this, but you know how we start seeing like the more selective decryption and capabilities, is that something that we could start seeing come to life even with these different devices you just mentioned?
Yeah, absolutely. We utilize an encrypted visibility engine which is possible inside of Security Cloud Control and able to manage it and utilize our firewalls to use that capability. Now you're able to not only see into traffic, see into traffic without decrypting, but also be able to select the decrypt traffic based off of policies that you built. If you look to decrypt traffic that's or you're not able to decrypt traffic, that's possibly in, you know, violation of HIPAA. Because someone's using EPIC on their work laptop, you are able to build a policy to block decryption of traffic that are coming from those regulatory things to keep you within compliance.
Wow. Compliance is not going away. That's like one of the biggest challenges they have keeping up with all the different compliance regulations. Again we're already thinking about that proactively, already getting it infused and embedded in the solution. This is amazing. If you were to say what would be your favorite part because I know all of this is really cool, what would be your favorite feature that we have brought here?
You know I really love the new AI-ready data center firewall, the 6100 series because of that price performance and is really showing Cisco's top to bottom leadership in network security performance with the latest hardware models. It all started with the 3100 series firewall. Then we incorporated the 4200 and the 1200 and now we've brought together the 200 and 6100 and they just have si gnificant price to performance leadership in the market and seeing that you get 400 gigabits per second in this brand new model, just two rack units. Our customers will love it.
I love it. Chris, this was amazing. I learned so much and again I just love security so I'm always excited. Chris, thank you for your time. It was a pleasure speaking to you today. I am going to send it back over to you Rob.
All right, Lauren, if you can still hear me. Steve Molter says. Say hi to Chris for him. He said great job as usual. So tell Chris that when you're off camera or whatever, but nice job, Lauren. Thank you so much. Guys, we have got more stuff coming. It's such an exciting day. It's the last day. By no means does that mean it's short of information and good stuff to learn because we do have a lot to catch up on. So many big announcements this week. Security in particular, of course, so many interesting things to talk about in terms of that hybrid mesh firewall architecture.
Brings me back to some memories of just getting into Cisco, trying to figure out the picks and all of a sudden we were going ASA and we've had several evolutions since then. Cisco's always keeping up with the market and or driving it as it really feels like this week. I'm going to shift a little bit here as we talk about old, not old. Let's see, mature and very positive partnerships that continue to develop and pay dividends for all of us. That is based on a conversation. I've talked to two of these gentlemen be fore, but Michelle had a chance to sit down with NetApp, specifically Bobby from NetApp. I'll let her introduce them. Let's talk a little bit more about what NetApp's doing in their products with us.
All right, we're back here in the studio. I've got two guests joining me, Bobby and Andy. We're going to talk things Cisco and NetApp. We're going to start off by talking about the relationship between Cisco and NetApp. It's just an awesome relationship. What makes it so successful?
I think from our perspective, like working together over around a decade, we've been a great team that actually has a goal to fix and that is to make this show and specifically run as well as it can. We host all the applications on the NetApp that run the show, allow us to monitor, allow us to bring about new technologies as they change. This year we've done some really positive innovation. You can actually go and see at the NOC downstairs
that exactly what brings me to the next question. What are you most excited for people to see this year?
Last year what we did is we did a compute of the upgrade to X series solution. What we did this year is pretty simple. All we had to do was add a PCIe node which has Nvidia GPUs into the X series chassis so a data scientist could start that AI journey. On top of it, what we did is, you know, we have to add the AI workloads. We have a bunch of AI workloads as Andy was saying. One of them was putting a RAG app application where we loaded our validated solutions, bunch of white papers where attendees could query or look at this chatbot and ask the questions. I would encourage everyone to go and see and experience this RAG application and see how they can get their questions answ ered.
Oh, that's great. That's some hands on stuff.
That shows that modular capability of the system, which again builds into the sustainability mantra. What Cisco and NetApp are giving is another example of how strong that relationship is.
Okay, moving into FlexPod, can I get some backgrounds and benefits of the FlexPod? Let's get excited about it.
Sure, let me start with that. We have been doing the FlexPod as a solution over the last 15 plus years and we have over close to 250 validated solutions out there. What we give is a combination of putting the data management layer from NetApp and the compute and the networking capabilities from Cisco. Right. The customers or the partners do not have to do any guesswork as you put these solutions together. What we did is we built the automation into the solution so customers would be able to go to market faster.
They can do that, create the repeatable building blocks, as I said earlier, without doing any guesswork. Also, the optimal design feature which comes with the solution gives customers the performance capability to run any workloads. Andy introduced Splunk last year. We had that assurance. Yes, you can run that workload without doing any guesswork because the data management layer is so adaptable, it could take any workload. This year we introduced AI. Andy and myself, we had discussion. Okay, let's try to do that again. The modular capability of the system gave us the ability to put that GPU into the chassis. We just started the workload, started using,
done in two weeks as well. We didn't get the hardware until around two weeks ago.
We built an application, we installed the hardware, tested it, did all of the software configuration and we have it at the show today. That shows you the kind of rapidity of installation and the great partnership with NetApp. We do not just partner in the storage, the network, the compute, we are partnering as a team as well. As a team we were able to bring something to the show and talk about AI. Because if you do not talk about AI, you are probably missing a beat, right?
Yep. Going back to the benefits, both Andy and myself see this as a solution rather than components from Cisco coming together. That is the only way customers can get their business outcome down, right.
The same exactly where the way we were trying to put that RAG application, that was the only thing we had in our mind and we wanted to do it and we got it done in two weeks. We did.
That is a great, great story. Yeah, thank you. Moving to the NOC, what is the role the FlexPod plays in the NOC? Can we talk about those?
Sure. Mainly the role of the whole NOC is obviously to give this show some kind of network access, etc. Everything at the show touches the NOC. Right? You name a service, registration, it's using the network. All of the attendees use the network for Wi Fi. Where does FlexPod come in?
In order to run all these services, we need a compute system and a storage system which houses all the applications that let us as a NOC team actually bring about that level of support that's required to get, you know, 99.99% uptime throughout the show. Without that and without this partnership, we really wouldn't be able to deploy applications like Splunk, which is critical to Cisco. It's the observability platform. As such, customers can now go and see all of those metrics downstairs and how we do it. We can actua lly teach people based on our design architecture, which is a standard design with FlexPod, and we can talk about it, evangelize, and then people can learn and potentially install it themselves.
Yeah. To add on top of what Andy said is the whole FlexPod architecture is built with redundancy.
For us, for Andy and myself, resiliency is not an option. It had to be part of the fundamental building block. You know, I know thousands of Wi-Fi connections are there. We cannot go down. Now on top of that, what we did is we have a FlexPod NetApp MetroCluster solution built into it. What it does is it synchronously replicates data across two different locations or two different sites. That ensures high availability of the solution. It also allows us to do the load balancing so that we ensure both sites' resources are utilized. Again, going back to that sustainability story, what I was talking about earlier.
Love it. Any last thoughts on AI?
One maybe honorable mention is we have an application this year that's kind of in testing.
I'm not allowed to really say what it's called, but what it does is it actually takes data from our network and analyzes it and gives us information about how potentially we could resolve problems. It's like having a virtual NOC engineer with us. It gives us a report and we can actually identify from the report, hey, what are the top concerns seen by the AI? Which could be benign, it could be not a problem, but at least we're aware of it, we can look into it. That's enabled again by FlexPod and the AI foundation that we have with in the NOC.
Bobby, any last quick thoughts?
Yes. The way I say is we give a quarter inch hole than a drill, right? That helps us having the ability to put any application on the infrastructure. Same thing when Andy and myself were discussing two weeks back. Okay, let's put an application on top of the infrastruc ture.
Oh, love it. Thank you both so much for your time. I learned a lot and I think the viewers at home did as well.
Thank you.
All right, fantastic interview there, Michelle. You know what? It's so much fun because these two guys feel like family, at least to me. I'm sure for any of us that have come here as Net vets, I think I can call myself a Net vet after coming to 15 to 20 of them. Either way, these guys are fantastic.
What I love about the NOC and what they do there is it's an actual network experience with all the same pressures that we have from a corporate perspective at any enterprise level, but with the addition of having some of the smartest, most demanding engineering level customers stress testing it as you set it up for a temporary basis with a ton of density all at once, and then to hear that they're actually testing how they're going to do things next, because I know that's a delicate balancing act in terms of deploying something that you know, you can rely on, operate with, and do that in a consistent manner. They can't live on the extreme bleeding edge as we're always showing. I'm so impressed. They always have something new to share with us.
FlexPod has been a great tool for us to be able to simplify wherever we can so that we can focus on the complexity that's inevitable from every other place. All right, we've got just about a minute left. We're moving on to another. What is it? Center Stage session and two more friends that I feel like reflect at least my experience. This goes over to DevNet. This is going to be operational AI, practical application of gen AI, generative AI in the stack. This is with Shannon McFarlane and Matt DiNapoli, two people I got to speak with, I think my very first interview when I got here this week. This is a technical session. They're going to talk about automation in IT and how that coincides with developer operations. The tricky part here is I think automation has had a resurgence in focus.
You can see that with the way we're doing, the certifications now coming out of DevNet and where that organization has been pulling us all along. I'm a big believer in how DevNet really forms the glue that brings everything from Cisco together. If you're ever looking for a way to say, how do I make sure that I stay relevant in all this change with my career and what I'm doing next, pay close attention to what they're doing in DevNet and I guarantee you're going to find a way to stay very relevant with that. No matter where you go with Cisco or in the industry as you will, this is going to be a good session. These guys are smart, they're very personable, they're very fun to work with, and they've got great information to share. Please enjoy this session. On center stage now.
Thank you. Thank you, thank you. Hey, everyone. As our host just mentioned, my name is Matt DiNapoli, I am the head of strategy with DevNet and I'm actually co-presenting with my colleague Shannon McFarland, who's the VP of DevNet. Today, we're going to want to talk to you about operational Gen AI. Actually, practical ways that we can apply Gen AI in what we're calling the stack. All right, strap in. It's a ton of fun. All right, before we get started, I'm going to actually set some context for you on what I'm calling the stack, or what we're calling the stack, because that term can be used in a lot of different ways. I'm going to jump into talking about the value proposition of programmability and automation.
One of the things that we do, or the main thing that we do in DevNet is talk about infrastructure as code. Before we jump into how we can leverage Gen AI tools, it's good to understand a little bit about infrastructure as code and why it's valuable to IT organizations. I'm going to talk about where Gen can be helpful in IAC and infrastructure as code. I'm going to pass it on to Shannon and he's going to move up the stack and he's going to talk about AI infrastructure and application, AI agent application deployments, because those agents are interacting with each other. He's going to talk about orchestration, observability, and security of AI in those AI applications and then we'll wrap it up. What do I mean when I say the stack?
I've thought about this for a long time. Similar to the OSI model, most of you here are network engineers, I'm guessing, or infrastructure engineers in some way. I've kind of always thought about it as the things at the bottom have to exist for the things at the top to exist. In the OSI model, we can't have an application delivered without the physical, the data link, and the network layer. Same thing with our IT operations. We have to have the network, we have to have storage, we have to have compute. That might all be happening in someone else's data center, or not necessarily our data center, but we have to have those things for the other things on top, the virtualization, the developer tooling, the containerization and orchestration, and then ultimately the delivery of those applications.
Now, I did add security in there as well because I wanted you to know that we understand that at each of those layers, security is something that has to be taken into consideration. That for each of those layers, security means something different. For a network engineer or a compute infrastructure engineer, security has a different meaning than it does at the application layer. It is something that we have to pervasively think about. Because IT operations are ultimately there to deliver applications, we have to consider those things across the entire part of what we're calling the stack. Now let me talk about putting programmability into practice. I talk about this a lot if you've ever heard me speak, if you ever come up to the DevNet zone, I talk about adopting an automation and programmability practice.
The reason that I talk about that is that even 12, 13 years into talking about this, we still have a lot of customers and partners that are still starting their journey into automation and programmability. I literally just talked to a network engineer today from a well-known newspaper publication where they are still working on adopting automation and programmability. Actually, he informed me that when they are going to update their switches, he has to go into each individual one and make that change. He said they have a few hundred of them. The first benefit that we get from automation and programming is that scalable deployment and maintenance. We are actually taking these repeatable processes that we do in the CLI and building them into code so that we can actually let the machines do the work instead of us.
We're also adding in a layer of sources of truth. This is something that DevOps teams have managed for a long time, but this provides accountability into the IT operations team. Now we're not saying, well, I did check off this box and I did change this configuration in this specific switch. It's also giving us the opportunity to take advantage of version control and change tracking. If we do run into issues in production, we can easily roll those changes back. That's something that DevOps teams have been able to take advantage of for a very long time, a few decades actually. We can then further our process adoption of DevOps into automation pipelines. We can add in integration testing with some other interesting tools like Cisco Modeling Labs or CML, allows us to virtualize, create virtual digital twins of our networks.
That ultimately allows us to have a lot more confidence when we push our changes to production. We have a lot more control over our deployments would be my argument. Finally, I would say it's just more fun, I think for those people who are IT operators, especially network engineers, doing the same rote task from device to device can get a little tedious. We're looking at building out process optimization. Potentially, if we solve those problems, those tedious problems in a way that allows us to free up some time in our day, we can actually have more room to tackle more interesting problems and also provide some innovative solutions on things we might not have even thought about.
Of course, the reason that I've had to talk about this for 12 or 13 years, or however long it's been, is that there are challenges that come with it and they're significant. It boils down to time and money. Ultimately we need top down support from our IT operations leaders and it actually tends to be a cultural shift. It's a little bit easier now to do it. I will say arguing for investment in IT operations automation is something that I think is gaining more and more traction, but it does take that time. There's investment in training, in new tools, in changing operations processes. As we all know, because we have to run these production environments, the day to day gets in the way.
Frankly, if things are stable, we're not necessarily going to go down this path of changing something just to change it. That being said, I would argue that adopting an automation practice gives us the opportunity to kind of set up an insurance policy. If something does go wrong, especially at scale, we can address it in a more accelerated manner. Finally, change is scary. We all know that. I mean, I'm one of those people that sometimes gets comfortable in their jobs and doesn't necessarily want to challenge myself. That being said, we've all been there. I would say this change is worth adopting. Where in those challenges can we potentially see some Gen AI tooling helping us first on that top down support? That's still a people thing.
We still need executives to allocate that money and time and say that this is an initiative that we want to do. That investment in enablement, the training, the tools, the operations, those things can be accelerated, I believe, with GenAI, because things like learning how to code or building things with code or with Ansible playbooks or with Terraform plays, those things can be accelerated. Leveraging GenAI tooling, the operations processes, we can use GenAI to kind of bounce back and forth ideas about how we can optimize our processes and flows. There are things in there that could open up opportunities for us to get that day to day out of the way and actually make that change. That was scary before. A little less scary. What does this actually look like functionally within our CI/CD or DevOps process flow?
There's a number of different ways that we can do this. I'm sure that there are some ideas in here that are missing. One of the ones that I've seen over the last year or so, or these are some of the areas where I've seen tooling jump in and accelerate processes over the last year or so. One of the most interesting ones that I thought was around the plan phase.
At the beginning of a process where we're deciding about a deployment or we're building a new application, I've seen some tools that take some very good prompts, don't get me wrong, there's some prompt engineering that goes along with it, but it's able to build out product requirements documents for us that we can then leverage going forward as we branch out our code, as we build the code and go through the whole deployment process. We've seen a million code assistants already. That's something that I think a lot of people, raise your hand here if you've started using coding assistant. All right, that's a smaller group than I expected, but I feel like I read a new article about it every week. It does feel like those things are ramping up pretty quickly.
The more interesting space in this process that I think is going to come in the testing phase. That part for software developers has always been kind of tedious. I kind of think of it from a network engineering standpoint, akin to going through the process of changing a couple configuration on the CLI. In the unit testing and integration testing, we can leverage Gen AI tools to actually inspect our code, identify areas where it can build a unit test, and potentially even give us ways that it can put in mock data or mock integrations to allow us to do that testing.
On the other end of the spectrum where we get into day two activities around operating and observing, we can also see, and we're actually going to see an example of that in a little bit here, where we can leverage GenAI to help us with the monitoring and observability of our deployments. Very specifically, here's what we can do with Gen in infrastructure as code. With Codegen we're going to shallow that automation ramp. We're going to use these tools to help us build Ansible playbooks and Terraform plays. On the testing side, we're going to tie in tools like pyATS and Genie, leveraging virtual environments like Cisco Modeling Labs to help us build not just synthetic digital twins of our production environment, but synthetic traffic as well to push that along. Because what good is a testing network if there's no traffic over it?
On the operation side, once we get into production, we can look at the combination of predictive AI and analytics with GenAI to help us build observability into our networks and deployments, to provide end-to-end security and ultimately provide mitigation and optimization solutions for us. These things are actively or proactively acting on our behalf as network agents for us. On the front side, coding assistance, again I said we've seen a million of them. They're going to become more and more pervasive for us. I'm actually going to show you an example of one that Ansible has out. We call it, it's called Ansible Lightspeed. In this example, I'm leveraging Ansible Lightspeed to create an Ansible Playbook that sets a configuration for a Catalyst 9300 switch.
I'm providing content in natural language to tell that playbook or tell the generative AI tool what to do. It creates that playbook for me. Now, I can't guarantee, I mean, feel free to inspect it. I can't guarantee that this playbook is 100% correct. What it's done for me is it's got me started. It's provided a framework where I can start to fill in the blanks, any of the variables that need to be put in, make sure that the interfaces are set up and all of those things. I'm not starting from scratch and that's the good news. It also provides us some other interesting tie ins within our IDE, which is super fun. I can actually hit the documentation for the particular Ansible module that I'm working with.
One of the other cool things that I thought working the other direction is that we can take a playbook and actually generate the explanation of what's happening. I was thinking of the use case where I have inherited a bunch of Ansible playbooks and I'm not 100% sure what they do. Instead of going line by line and trying to figure it out, I can leverage these tools to actually explain to me, hey, this is what's happening in this playbook so I can come up to speed more quickly on it and become an operator in that particular space. It's not just about those day zero, day one activities. We actually can move into the day 2s. The next demo I'm going to show you is a solution around network monitoring.
We're actually leveraging CML as our test network, so it's still going to be a virtual network. In this situation, we're going to be using Grafana and Influx for our monitoring applications. We can see that dashboard of what's going on. It's tied into a webhook listener that's going to pass information when something's going on to a little Python agent that talks to the LLM, reports what's going on and asks the LLM to say, hey, here's how you could potentially, here's what's wrong and how you can potentially fix it. We pass that back through PyATS and back into our network. Let's take a look at this example. We're actually going to hard shut down our ISIS neighbor. That's what's going on here. Quick special thanks to Jesus Ieskis for this demo. Thank you, Jesus.
We see our ISIS neighbor go down in our Grafana dashboard and we can also tie these to chatbots so we can actually get pings on our phone and say, hey somebody, something's not going right on the back end. We actually have our agent going through the process of evaluating what has occurred and providing us with the mitigation or the remediation steps. We don't necessarily have to keep an eye on this. I'm showing you guys the back end. You can see how it's working because it's just going to message that back up to our chat application. Now this is the fun part. I don't have to do anything. It's provided steps and if those steps look right, I can just tell it, yes, please do that.
If they don't look correct, then I have the opportunity to say, no, that doesn't look right. Why don't you try this step instead? It would go through that process. Now, I can't guarantee as we go through this that this would 100% always work for you every time. We'll actually look at an example next where it didn't. In this case it was actually able to walk through those steps, provide us information that the ISIS neighbor is up. We can see that it's back up in our Grafana dashboard. Finally, just to hammer it home, we're going to our device and show that our ISIS neighbor is up. This end to end service was able to identify a networking problem, propose solutions, and all I had to say was, yes, do that and bring that ISIS neighbor back up.
Now these are controlled environments and these are scenarios that we are definitely going to have to evolve, but it shows the possibility of what we can leverage these things for. The next thing, that particular example was built maybe nine or ten months ago, but the landscape of tools is changing so quickly that Jesus was actually able to build another demo, but instead of that Python application, he actually was able to use something called Langgraph from LangChain and was able to take apart that agent into smaller parts and have it do separate tasks. In this example, it is literally doing the same thing, but all of our interactions are occurring within Langgraph. We are actually consolidating some of the tooling that we had used before. We could even tie this to a chatbot like we had seen before.
This allows us in some kind of low code, no code, ways to take advantage of the AI tooling. I do not necessarily have to be a Python genius to build out an application that does this step. It is doing the same thing that we saw in that previous example. We are identifying an ISIS neighbor that has gone down and executing or trying to execute the steps to remediate. Interestingly, Jesus does have a video on this out on YouTube I would highly recommend checking. It does not actually succeed in bringing up the ISIS neighbor and it indicates that as such, when it goes through the demo. The lesson in that is these tools can be helpful, but they are not foolproof. As we go through this and you start to look at that adoption, experimentation I believe is key to that.
But it is really interesting and exciting where this is going and where we can identify some advantage from these tools. I'm now going to pass it off to Shannon. He's going to move us up the stack and talk about AI infrastructure and gen AI agents. Shannon,
Thank you buddy, appreciate it. All right, thanks Matt. We're going to continue through kind of a use case oriented environment where we're extending the basic tool sets that we know from network operations, from compute, from security and so forth, and roll that into a linkage about building that stack that's sitting on top of our infrastructure and making observability contact points through that. We're going to do that by leveraging a use case around AI agents. How many people here have heard of an AI agent?
Unless you've been like on PTO or something, or you've been, you know, traveling to Mars, agents are everywhere, right? They're all the hard hotness. Let's tear into this. Matt went through some use cases, right, where we've, you know, got historical machine learning, speech recognition, all that other stuff. Now we've got generative AI with chat interfaces that we can use from a natural language process to programmatically interact with our infrastructure through either agents or assistants or this new thing that's out called model context protocol. No matter what the tooling looks like, it still manifests itself in our infrastructure pretty much the same way that all of our applications do. You have AI ML type applications that are linked to some model or suite of models.
There may be some data scientists involved, but they've got their ML workloads, they've got their models, they've got their frameworks and then there is some sort of lifecycle management underneath that, like ML ops that is programmatically handling the infrastructure underneath and more, more often than not, when we're talking to customers and partners, most of these modern day AI environments are operating on a cloud native infrastructure such as Kubernetes. Now one of the new things that is out there is AI agents. These are things that can work semi autonomously or autonomous, you know, on, on behalf of you to achieve a common goal. One of the things that we're finding in our outship by Cisco team is kind of coin this term in the Internet called the Internet of Agents.
If we think about the Internet, the Internet we know of today utilizes, utilizes name resolution with DNS. It looks at LDAP and directory services. It's got a suite of protocols that we can use, HTTP, etc. for us to establish communication. Think of an Internet of agents where they need to find one another, they need to figure out if this agent can actually help me do what I need to get done, and then let's work together to securely manifest whatever outcome it is that, that we've been tasked to do from an agent point of view. Let's pause for just a minute with the use cases and just do a quick level set on agents. There were not a lot of hands in here that were talking about that raise their hands. We're aware of agents.
Agents again are autonomous extensions of an assistant. Right. If you know what an AI assistant is, this is a chat interface. You may use this with United Airlines or something like that. That's helping you very often. I found that assistants personally are terrible. They generally never give me what I want. More often than not agents give me what I want. Why? Because they have a reasoning function that allows them to sort things out that are not in a predefined script. That's cool, but as agents make their way into your institutions, there's going to be a time when you've got to sort out, do I need to care about what the agent does or do I need to care about where the agent lives?
The reality of it is those of us that manage infrastructure, we care about where things live. We do not necessarily understand every business application and every kind of new application that lands in our infrastructure. The same thing is going to be true from an agent point of view. What we do need to know about agents is agents are code. If we take a look at this, you know, you can run agents from a Python application where, you know, you are just running it as a process. You can see that it is consuming memory, you can see that it is, you know, touching. Good old fashioned IPv4, IPv6 and all of the agent frameworks out there and AI tools components are leveraging existing API types. They are restful. If you are talking to graph databases or vector databases, you are using GraphQL.
What we need to know as infrastructure people is that we just simply need to understand that while agents do magical things on their own, when they run in our infrastructure, they're just code that are utilizing these standard interfaces. That doesn't mean that we're like, that's it, they're just software, we're done. There's nothing special for us to understand. Let's look at orchestration and observability of some of these environments. If we actually continue the theme of running agents inside of our environment, we know from two slides ago that they're just software running whatever programming language that we already know, Python, etc. They're consuming processes, they're making socket calls, they're doing all the stuff that we as infrastructure people understand. We've got a couple of agents here that are sitting on a couple of Linux machines.
More often than not, these Linux machines are running as containers on top of a Kubernetes environment as pods. This infrastructure may be partially on prem, where you've got a, you know, a big honking UCS M8 series chock full of GPU for inference and learning or it's, you know, running on, on a standard on prem environment. We need to get a little more intelligent about what these agents and things like Model Context Protocol are doing in our environment. This is where we start to intersect new technologies and new capabilities. How many people here have heard of Cisco HyperShield? Okay, a few people. Cisco HyperShield is a new solution we've had out here for a while.
One of the things that's in HyperShield is the capability of having a distributed fabric that has deep, deep insights into the operating systems in which that fabric is connected to. In this case we're running the Tesseract security agent which is operating off of the Isovalent solution that Cisco acquired. There's an open source project there called Tetragon that is the underpinning of this agent. What it's doing is it's utilizing an open source project called eBPF. eBPF has been around for a very long time. It gives you extremely deep insights into the Linux kernel processes, commands, sockets and so forth. What this TSA agent allows us to do in these agent frameworks is it allows us to see what agent agents are doing when we type a chat command.
If we look at that command that Matt gave us about creating an Ansible playbook, there's a bunch of stuff taking place underneath there. I mean, imagine that you're interacting with your chat interface and you say, go create me a GitHub repo, go and create a job in Jenkins, whatever it is that you type in there, you see it in natural language, but the agent is working out all the technical details and commands and API calls it needs to manifest kind of that outcome for you. We're losing all of that by us not running the commands or us writing the scripts, because now the agents are doing that. Solutions like HyperShield allow us to get very granular level of control and understand what the agent commands are and the agent calls are inside of our infrastructure.
Next, if we're in an on-prem environment, we may be attaching this environment, these compute nodes with these agents running in them into something new, like the Nexus Hyperfabric, where you've got high speed, low latency, especially for learning environments. This may be where we're making a contact point into our data center and the rest of our environment. We also may be in kind of a hybrid cloud, multi-cloud and even a WAN environment where we've got a distributed agent framework or vector databases that are within a hyperscaler environment that we're connecting to. Finally, we may be deploying these things in an edge or colo environment. Okay, so topologically this stuff looks like what you're running, right? You got data centers, you got branches, you've got Linux machines, you got cloud native solutions.
All of this stuff is real in your environment. Let's now kind of pivot to a more, you know, in the weeds as it relates to the implementation and how we observe and witness all of this stuff running in our environment. We may have users or services that are interacting with our AI applications. This could be, again, a chat interface, an AI assistant, or it could be a suite of agents that you're interacting with from another dedicated device. It even could be that you're running something like VS Code or Cursor or something like that from an intelligent IDE that are making agent calls. They're interacting with a suite of common AI services. Again, we got the front end services. That brain up there represents either an on-prem or an external large language model.
You could be running open source LLMs inside your environment, or you may be making an external call out to a foundational model, maybe running an OpenAI. There at the bottom you see that the magnifying glass represents a vector database where we're actually embedding documents. We're vectorizing those so that we can do things like similarity searches that the AI agents have access to. Again, what are those blue icons underneath? They're services, they're pods, they're deployments, they're, you know, persistent volume claims, all of these things that we already run in our existing cloud native environment. That stuff sits on top of very common observability patterns. Right? OpenTelemetry has been around for a long time. Many people are probably running it in your environment, don't even know it.
OpenTelemetry are providing us the critical traces, events, and logs out of the MELT stack and underpinning again, way down in the bowels of our operating system are things like Cilium, which is our cloud native interface that we run inside of our Kubernetes environments. We're running something maybe like Tetragon, which is again, the underpinning technology for the HyperShield Tesseract agent. All of this glued together is going to make its way into us, into some sort of observability platform, for example, like Splunk, where we can get all of the layers of our logging and metrics and events and traces taking place out of all of this stack that that mattress started us off with.
They're going to make their way into something like Splunk, where we can ruminate over that data, even with AI and even with a natural language process that allows us to ask questions that would be almost impossible to sort out if we went from panel to panel, from panel to panel, as it relates to our observability solutions. The thing that Matt and I are here really to talk about is ensuring that as you mature your existing infrastructure to account for AI, or you're building a brand new greenfield infrastructure to account for AI, that you do it with automation and programmability first in your mindset. Right. It's not an afterthought. It's very much like we always talk about, where people rush to deploy something, but they don't really think about the security ramifications until later.
Please do not let your automation programmability solution, you know, account for that as well. If you are engaging in your infrastructure through APIs and SDKs, maybe you are running Ansible or Terraform like Matt was talking us through, or maybe you are utilizing someone else's agents or natural language process solutions to programmatically interact with it, ensure that they are automatable and programmable from day one. That was a lot, right. We are roughly 30 minutes in and it is like you have it all sorted out. You can leave Cisco Live now knowing how to implement all of this with no issue. Right? Right. Things that you want to leave here and do, one, you want to understand even though you do not understand how do you deploy and run a vector database or what the hell is a vector database in comparison to any other database, right?
You need to understand these components already live in your infrastructure or they're going to live in your infrastructure. Understand what they do at least from a topology point of view. Know leaving here that you can be confident that since they are software they're fully automatable right now. I mean you can run Ansible and Terraform against all of these things. They're already running API types that we know, REST, GraphQL, etc., and we know that up and coming we're going to be leveraging brand new open source or commercial products that are tuned specifically to AI. Things like Model Context Protocol that gives your AI application access to APIs and tools like GitHub, agency.org, and A2A, which are around agentic frameworks. These are things that are really emerging in the environment.
Also know that we want to ensure that as these things sit inside of your infrastructure that they're observable the day they get there. Things like open telemetry and other types of observable technologies are things that you want to start off with immediately. Leaving here, a few things to do in the world of solutions right now is the welcome reception, so please grab a beer and walk around. We definitely want you to go in and look at AI Defense, HyperShield that we talked about in here. The Isovalent folks are here as well, and kind of understand what these platforms are doing for you to really reduce or remove the toil of operating these AI environments. With that, I appreciate your time today. Please fill out your session eval. We'd love to hear what you think of the talk. Thank you.
Hey there everybody. Welcome back to the Cisco TV broadcast studio. As always, we are just so delighted to have all of you with us here on the live stream. Coming to you directly from Cisco Live 2025 here in beautiful sunny San Diego where we have just wrapped a terrific deep dive on operational AI, real world applications of gen AI across the stack. We heard from Shannon McFarlane and Matt DiNapoli on all the different ways that gen AI is driving next level automation in IT and in developer operations. One of the great benefits that we're finding here in gen AI, as we just heard, is that it simplifies our infrastructure automation but at the same time it can empower cloud native deployments through agentic frameworks, then we can also optimize our CI CD pipelines. That I think was an important part of the story.
Shannon and Matt did such a good job of showing us Gen AI's growing role in predictive analytics, in infrastructure as code, in collaborative human AI workflows, and all of these different ways that Gen AI is giving us greater operational efficiency and innovation across our entire stack. Great job guys. In just a moment, by the way, we will be heading out to DevNet over in the sales pavilion. Rob is going to give us an exclusive behind the scenes interview with Shannon and with Matt. We'll look forward to that in just a moment, as soon as I get the high sign. My name is Steve Molter, one of your broadcast hosts. You're going to hear from all of us here today. I just want to maybe give you a quick social media hit. Keep reaching out to us on #CiscoLive and @Cisco Live right now. Let's go to that interview right now with Rob and Matt and Shannon. Here we go.
All right guys, we're here in the DevNet zone and this is one of the places I have to work very hard to keep up with. Not because they're hard to keep up with, but because they. Well, you guys turn out a lot of volume. So real quick, I've got two incredible guests here, Matt DiNapoli as well as Shannon McFarland. Shannon, I'm gonna start with you just real quick for anybody that's new to the DevNet area and it can be overwhelming, obviously a lot of education going on. Where would you focus? How would you begin to narrow down and maybe introduce them to the high points here?
Yeah, absolutely. This year we are continuing on with our tech talks and our workshops, which are really 45 minute sessions that allow people to come in and really get, you know, a good technical understanding of the latest automation programmability against Cisco products or open source projects. We also have some new things here this year that gives people a flavor of what's to come. Do tell. Just over my left shoulder here we got the Innovation Zone. Okay. The Innovation Zone is a space that allows both Cisco product, Cisco business units and partners to come in and talk about something that's about to ship or something that may have just shipped that is very focused on automation programmability.
Okay, I like that. You guys are constantly kind of making it easier , and tell me if this is fair. I think of you guys as really the glue for how Cisco kind of stays relevant in the bigger picture. I'm going to ask you more questions about this in a second. You are giving kind of a platform for partners, so it's not just making integration available to them, but you are actually kind of working in partnership, for lack of a better way to put it.
That's correct, yeah. I mean, it's given an opportunity for partners who are a part of the Cisco ecosystem that are doing things alongside us that they're either easing the burden of automation programmability or they provide a direct solution for that. And so i t is just giving them an avenue into the DevNet community to allow, you know, our community to understand there are some options out there that may help them in their job.
I sat next to two guys on the plane. I do not like to talk a lot on the plane, but one of them, one guy was watching on his iPad and he has got an API reference manual of some sort. Looked like Cisco Press and he was going through that. I imagine he is getting ready for some exam. He had the Office on next to it as well. I do not understand how the younger generation studies, but power to him. You guys have obviously had an effect and really it did start with APIs. I feel like we are going to natural language interfaces. Curious from your perspective, Matt, w e've talked, we talked, all of us have talked for years now. What's the bigger picture look like in terms of relevance for DevNet in the community? How do you continue to stay on top of what is really a fast moving market?
It is a challenge to stay on top of it, to be honest with you. There's a lot of change. Even from February when we were in Amsterdam together to now, there's been a lot of focus on model context protocol and A2A. Those things around agents have really coalesced in a way that we hadn't quite seen at that event previously. Even in the last four or five months, we're seeing a focus now, not just within Cisco, but in the industry in general that's been kind of, I think, focusing efforts of developers of automation engineers into a space where they can understand how to leverage those things to take advantage of AI applications more quickly. Especially around Model Context Protocol. They can tap into tools, they don't have to worry about some of the more challenging aspects around RAG that they would have had to a year ago.
Those opportunities, I think, will accelerate the opportunities not just for developers, but for the automation engineers that are part of our audience to take advantage of those things. Shannon and I actually have a talk where we look at the practical application of GenAI tooling within various layers of the stack. We look at how do we leverage these things to accelerate building Ansible playbooks. We have our friends from Red Hat here this week looking at network automation with the Ansible platform. You know, a lot of the challenges that network engineers have that are kind of rote network engineers is they're nervous about getting into automation and programmability.
Gen AI really has this opportunity for them to jump into it, not quite in the deep end, but to get out there more quickly and start to play around with things where they wouldn't have even known where to start. I think that that's really the exciting opportunity that we have here by layering AI on top of AIs on top of the infrastructure.
A lot of this stuff is being charted in real time. Right. We're figuring this out along with the market. There's this combination. I feel like you guys are seeking to lead, but also make sure that we're consistently in the game. I feel like if anybody's worried about where they're going next with their career and what stuff is staying here, I really do feel like you guys were a great inroad to saying, this is an intersection point that is going to remain valuable because there's so much to figure out. Integration is not going to go away anytime soon, but it's also key to us getting these results and your focus on education. Thank you for also building the community that you've enabled. Guys, Shannon as well as Matt, thank you so much.
I blank on names when I'm under pressure, but guys, be sure and check out all the stuff going on here in the DevNet Zone. You're going to want to make sure you're on top of this as we go forward.
Fantastic job, Rob. Great interview there and thanks so much to Matt and to Shannon as well. Really love those guys at DevNet. All week long here at DevNet Zone, they've been drilling down on everything from usage of AI in IT operations and integration of Cisco technologies to relatable advice on how we can adapt the challenges that are arising from working with dynamic technologies, constantly changing environments.
They have given us all of the tools that we need to be able to innovate, to code, to build, while we are simultaneously unlocking the extensibility, the automation to give to our everyday tasks. Really love the work that DevNet does. All right, right now Michelle is out in the waz to bring us a first person success story from one of our amazing Cisco partners. Michelle, who do you have there with you?
Steve, I've got an awesome, awesome interview for you on this last day of Cisco Live. I'm here with Lee Peterson who's a product leader at Cisco as well as an exciting partner, Bob Herbeck, Worldwide Technologies. I'm going to hand it over to them. They're going to talk about this incredible partnership between Cisco and Worldwide Technologies and some customer success stories. Here you go, Lee.
Bob, thank you for being here.
Thank you.
How's the show been for you?
It's been really great. A lot of information.
Fantastic. Just tell me a little bit about WWT. We've been a long standing partner, so maybe just for the better of the audience, what do you specialize in? What do you want to be known for as a company?
I think we're known as one of the strongest Cisco partners out there and it's through a lot of things. We have amazing partnership, amazing people that we work with at Cisco and a really great team. We've done a lot of partnering around our Advanced Technology Center as well as engagements with customers where we're able to prove out the Cisco technology and then deploy it confidently in their environment to solve complex problems.
Great. You've seen at the show we launched this new series of routers, you know, very focused on helping simplify and improve branch operations. We have announced unified branch as an approach for using automation as code to be able to do things at scale. You know, as you think about your customers, what they're trying to do with their branches, that they're trying to play at scale, what are some of the challenges you're seeing? What are you hearing from your customers?
I think we're seeing challenges that you are solving today with automation, with visibility, with cloud management. There's a lot of change going on in environments where we're bringing on a lot of wireless devices, we're still managing these legacy wired Ethernet devices. We need high power POE to connect that legacy stuff and we need to be able to control it and manage it at scale. We are seeing Cisco continue to deliver solutions that pull all that together with cloud management or on prem if they want it and give really strong visibility and understanding of how the performance of the solution is delivering customer needs.
Yeah, we obviously wish it was greenfields every time, but obviously a lot of brownfields out there. I think when partnering with folks like WWT, helping customers understand how to migrate that journey from legacy to more modern techniques.
Absolutely. I think we kind of look at every opportunity with a customer in some perspective as it's greenfield because they're looking. Is this a point where I want to really transform my network or am I good with just moving forward? It's really important for a customer to understand the entire ecosystem that Cisco has. Worldwide does our best to promote that in our Advanced Technology Center and our workshops with clients.
Any customer success stories, any favorite opportunities where we've partnered together and been able to do great things for customers?
I think one that stands out was probably a few years ago, but still in the process, a very large Midwest hospital system who decided to bring mobile devices into their network for their doctors and their clinical staff. When you walk in with several thousand iPhones, you don't have a very solid wireless network. You're going to have a very poor experience with that. We help them understand what they need for coverage and volume of access, access points to provide that experience. Then we worked with them beyond that to kind of reimagine their network, bring in segmentation, bring in software defined access, and utilize Catalyst Center to manage that and ensure the performance of the environment was always there.
Right. Look, no pressure. You're standing in front of my products. Clearly these are your favorite products. The Cisco 8000 series secure router was. The LEDs. Yeah. The optional extra. Clearly that's the thing you're most excited about. If it weren't for this, what else have you seen at the show that you're excited about?
I think the AI advancements, the AI Canvas again, the cloud management, the simplification of some of the platform where we're able to use the Meraki Dashboard or Catalyst Center to manage the same hardware in whatever persona that best fits the customer need.
Yeah, look, we'll keep trying to simplify things with the products. It all comes to life for our customers when we work with partners like you. Thank you for everything you do and thank you for being here.
You bet. Thank you.
I'm going to throw it back to Michelle.
Than k you. All right. That was an excellent, excellent interview. Thank you both so much for your time. Steve, we're going to get back to you now.
Appreciate it. Michelle, thank you. Thanks so much to Lee and to Bob and Worldwide Technologies. So appreciate your partnership. Really appreciate that. We are about to head back to center stage in about a minute and 40 seconds. Down in the Cisco Showcase. We're going to talk this time about how we power autonomous networks, AI observability, future of connectivity. Coming up in this next Center Stage session.
Like I said, kicking off a minute and a half from now, Guru Shenoy, Jason Teller leading this one out. They are going to look at the importance of network simplicity. How do we manage our modern infrastructure complexities. We're about to hear how AI powered automation, inference, observability assurance capabilities, enhanced visibility. They help us create faster data decision making. They lead the way for more self optimization and more autonomous networks. We are going to see some great Cisco innovations as we always do. For example, we will look at multi agent AI frameworks. We're going to look at connective, predictive and prescriptive analytics. We'll look at cross domain data intelligence, everything that helps us accelerate simplicity within our organizations. At the same time it helps us boost digital resilience, drive network autonomy. Like I said, we're about 30 seconds away from heading out to Center Stage.
Couple of quick reminders, we've got a full broadcast day still ahead of us. We're going to take you straight up until 3:00. We've got more great Center Stage sessions and deep dive sessions headed your way. Interviews down on the show floor behind the scenes with our guest speakers, our executives and our thought leaders. Keep reaching out on social media using ciscolive and ciscolive. Headed out to Center Stage right now. We'll see you on the flip side.
Thank you Amy and hello everyone. Thank you for joining us. As Amy mentioned, we're going to talk about what service providers need to do to evolve their networks for this coming age of AI. If you think about all the bulk, all the hype that has been going on around AI so far, it's actually all been around AI training, a little bit of AI inferencing, but it's all been around the making of AI. If you think about it, we haven't really started seeing the consumption of AI or the using of AI. We are only now starting to see that more and more people, consumers like you and me, enterprises, are actually starting to use AI. We are still in the very early stages. If you think about as we go forward, as more and more people start using AI, all of that AI traffic is actually going to traverse service provider networks.
If you just think about it, if you use a cell phone today, use your favorite AI model, ChatGPT, Gemini, whatever, all of the traffic actually goes over a service provider's network before it hits a data center where it gets served. Service provider networks are becoming, in very short time, really critical in order to handle these new traffic patterns and new workloads that are going to be driven by AI. Just some statistics, right? If you look at, if you put that in context of what I said, where it's still in the early stages of adoption, but we start looking at how fast this adoption could be, by all estimates, enterprise adoption of AI is going to explode. You can see this was a survey done by Gartner, but there are surveys done by others that show similar statistics, right?
We can expect in the next year to two years. It's coming really fast because the iteration cycles in AI are so fast. The adoption is going to go really, really fast. Are the networks ready to handle? That is the question. The answer, quite candidly, is not yet, right? This was a survey done by Omdia, but again, similar surveys done by the others point to very similar statistics. Networks are not yet ready. Only 13% of organizations say that their networks are ready to handle AI traffic. We have to look at what it takes to get networks ready. What does getting networks ready for AI mean? In the search service provider context, there's a few things to look at. Let's start with on your right, right? If you look at where the AI factories are today, right?
They're data centers, small and large data centers. One of the ways in which service providers are becoming relevant in the AI factory conversation is when it comes to connectivity. Even large hyperscalers, as many of you may be aware, when they build their data center infrastructure, massive training centers, right? For AI training and increasingly for inference, one of the things they lack is actually fiber and the ability to connect these data centers. In many cases they're going to service providers in the region and saying, hey, we want to buy fiber from you to connect our data centers. DCI Data Center Interconnect has become a big use case where service providers can realize monetization connectivity for AI by partnering with hyperscalers.
As you see more data centers evolve, not just the hyperscalers, we are starting to see new waves of companies arise around GPU as a service, AI infrastructure as a service, and so forth. None of those have the fiber and the connectivity pieces. Service provider will be, will continue to be relevant in that space, right? That is one big piece and then the next piece. If you look at bringing AI to users and end consumers, what is starting to happen is you're starting to see a lot of inferencing clouds being set up and inferencing by nature is fundamentally different from training, whereas training is fairly homogeneous.
You can train, you know, you can have, have some base models like Llama, even all the variants that we see, Deep Seq and all of that are actually variants of the same or similar transformation models, Llama and so forth. When it comes to inferencing, it's actually, these are very customized to use cases. Inferencing models are going to be a lot more, they're going to be distributed a lot more in a lot of different locations, a lot of different cloud locations. Smaller data centers as well, not just massive data centers, because a lot of the time for inferencing you don't need massive tens and hundreds of megawatts of capacity. You can deploy it in often less than 10 megawatts of power capacity envelopes.
are going to see a rise of all of these different AI inferencing clouds and also clouds for hosting AI applications, and those are going to be distributed. If you look at the middle picture, you start getting closer to Metros, closer to users, where you have all of these AI infrastructures set up. There is the last piece which is delivering AI to the end consumer. Consumers may be, you know, people, places, things. All of those are going to be connected by different access mechanisms. Right? You could have people sitting at home using broadband connections, enterprises that are accessible, processing AI capabilities, agents sitting in all of these endpoint locations that are actually doing the work on behalf of the application. We could have mobile connections, right? Cell phones, cell towers and so forth. Also satellite.
That's also becoming a new way of accessing and providing connectivity. Service providers also need to think about how they will connect to all of these endpoints to be able to serve all of that AI to users in the coming years. They also need to be able to do it in a manner that is consistent. You can't have separate dedicated equipment, ideally for each of these endpoints. Having a single converged access mechanism makes it much more easy for service providers to provide robust, resilient, and secure connectivity, regardless of who the endpoint is and how the consumer is using it. Service providers need to have an architecture for all of these pieces: the data center interconnect piece, the metro and the service factory piece, where it needs to be then delivered to endpoints, and then the last mile access as well. Right.
That consistent architecture is one that we have really been focusing on. There is another angle to this, right? When we talk to service providers and ask them, hey, what are your top of mind concerns? What are you looking to do as well? Of course we have our ideas, but we work in partnership with a lot of them. The thing that they tell us that is extremely important is, hey, we are in the connectivity business. First we need to continue to provide resilient, secure connectivity, but we do not have very good ways of monetizing connectivity. Everybody knows connectivity is really hard to make a lot of money off, so we need to get as efficient as possible in delivering that connectivity. We need to be extremely CapEx efficient, extremely OpEx efficient. That is the first thing they tell us.
The other thing they tell us is we also want to find ways to monetize the network better. How can we be part of this coming AI wave, be a bigger part of that ecosystem, find new services and find ways to drive revenues off of those new services. Because we do not want to be in the game of just focusing only on connectivity where it is not very possible to drive a lot of revenue. Both sides of the equation are important, right? Connectivity as well as driving new services. What we have done is we have looked at what it takes to build an architecture to do both and we call that architecture Agile Services Networking. I will talk a little bit more about the pieces of the architecture, but it is intended to provide that connectivity to meet the needs that I mentioned earlier.
What we have done is we've taken the AI factories that we are building, the infrastructure for AI factories that we are building in our data center group. For example, you may have heard of the Secure AI Factory, which is infrastructure that we have built in conjunction with NVIDIA. It is GPUs, CPUs, storage, networking, all put together. In partnership with NVIDIA, we take the CUDA stack, we host that as well. Now you have a complete vertically integrated infrastructure that anybody can take, an enterprise can take, or a service provider can take and drop into their data center locations and now it's ready for hosting AI services on top of it. We have taken a lot of the heavy lift out of what it takes for service providers to actually be ready to offer AI services.
Yes, there's still some work to be done in terms of identifying the AI service, bringing it to the end customer, but they can do that with partners. A lot of the infrastructure lift is taken care of by bringing this AI services factory. What we've done is we've taken that piece, we've brought new connectivity pieces together for the network and we've put it all together in a validated design, a Cisco validated design. That's what Agile Services Networking is about. We have those publications on solution guides and so forth. It is very easy for a service provider to consume and build to that. Let's double click a little bit into the connectivity piece, right? What's new from a connectivity perspective for service providers? There are a few new things, right? First and foremost, routers are at the heart of a network, right?
What we did at Cisco Live Europe about four months ago, we introduced a brand new line of routers based on our new Cisco Silicon One technology. If you want to hear a lot more about how Silicon One helps us in ways that we did not see before, we actually have right in our secure global connectivity area here a full demo that talks about all of the details. I'll briefly summarize it as it allows us to build far more efficient infrastructure delivering high bandwidths where needed, low bandwidths where we need low bandwidth scenarios, the scale that we need for service providers, we can do so in extremely efficient form factors. In many cases, we do not have to deliver large chassis. We can do the same kind of capabilities in small 1 RU and 2 RU fixed routers.
Fundamentally different class of routers ready for the AI age, if I may, that is a new addition. Second thing is you have these routers, you need to connect them. We have imagined a new model of connecting them together by collapsing the IP and optical layers. We've been talking about this for a while, but it's really started to gain momentum now. We take coherent pluggables, we put them in our routers. Now we can eliminate an entire set of infrastructure around optical by integrating it into the router, makes for a lot more efficient way of delivering high bandwidth connectivity where needed, right? The final piece, you've got the network. Now you need to be able to manage it, assure it. We've had some really good assets in our portfolio.
The Cisco Crosswork portfolio has been the management tool for our service provider customers. The Provider Connectivity Assurance tool has been our observability and telemetry tool for our service provider customers. What we have done now is we have brought in AI capabilities into those. We have been building over the course of this week, you'll hear a lot about the Cisco AI Assistant, AI Canvas. There is a lot of interesting new technology with a new agentic, multi-agentic AI framework that we've built within Cisco. We have taken that same framework and we are now embedding it into Crosswork. That is one of the key developments because we keep hearing from customers, yes, the networking piece is good, but how are you going to help me to get to a place where I can truly get autonomous in terms of driving my networking?
The AI piece is a critical one. This is something that we are truly excited about and I want to run a quick demo here to show you a very, very quick flavor of what this does. Again, in our booth areas over there, we have a lot more details, but hopefully this will give you a very quick flavor. Could we roll the video please?
With toxic factor identification, we have an AI agent that examines your operational event logs and a collection of network and device attributes delivered through the Cisco AI Assistant. The AI agent will identify attributes common to network events and provide you with a report of events it checked, flagging any anomalous patterns. We will set some initial parameters to run toxic factor detection analysis. Select a time period and an event type. Click analyze to start.
Toxic factors are classified into three toxicity levels with increasing low, moderate, major. In this analysis, major and moderate toxic factors were identified. The major toxic factor relates to the combination of a particular hardware revision and software version and in this case that's 12 devices or 0.4% of inventory that resulted in 9% of the events. The software version is identified as a moderate toxic factor with lower relative toxicity. We can see events evolve over time. BGP session down events initially are relatively low. They increase as a software upgrade rolls out. Events continue to increase over time for both toxic factors identified. Now that we've identified the toxic factors, we can act and prevent the problem from expanding using the information from the AI assistant to quickly facilitate remediation. To learn more, visit Cisco.com Go Crosswork.
You can see this is actually a very simple example. We can get a lot more sophisticated than this. This is the kind of problems that service providers run into all the time. Something goes wrong in the network, they're quickly trying to figure out the root cause, right? Now having a tool like this which can look at all of the data that's available and look at the correlation of factors that may have caused that problem. Maybe you applied a bad patch, maybe there was a config error that was made. It'll zone down on that and give you that information so you can move very quickly to the remediation phase and recover from the problem. This is now available in a Crosswork tool. We will continue to add to this. Really think about it as an agent that does this functionality.
We'll have a lot more like this as we go along. There is one final piece that I want to share with you. A lot of service provider customers we've been talking to have been really interested in leveraging satellite connectivity. What has happened in the recent past is satellite connectivity has become a really viable way of connecting end users. You know, connecting for disaster recovery kind of use cases and also leveraging as your backbone links even in some cases for backhaul and so forth. The reason for that is back in the day, satellites used to be geo satellites, right? Way higher up, you didn't have the right latencies, the right bandwidth. Now with the emergence of LEO technology, low earth orbit, you have companies like Starlink and Viasat and others deploying a constellation of satellites that can span the entire globe.
The bandwidths and the latencies are much, much higher, latencies are much lower, bandwidths are much higher. This has made it possible now to use satellite connectivity. We are seeing a very rapid rise in the adoption of satellite based connectivity. The challenge though has been for service providers. These are two network, completely different network environments, right? You have the non terrestrial piece and then you have the terrestrial piece. How do you run them together seamlessly? What we have done, and we had a breakout session earlier in the morning. You can watch the replays. We worked with providers like Starlink, the morning session was co presented with Starlink where we integrate their networking, the non terrestrial networking piece with ours. We have a validated design.
Again, what's more, we are actually able to offer assurance, leveraging our tooling and theirs so that customers can actually get assurance, get visibility and telemetry around packet flows, not only in their terrestrial network, in ours that might be a Cisco network, but even as it traverses through space and to the ground station on the remote end. It is really end to end visibility. It has made it a lot easier for our service provider customers to adopt this and then offer it to their end customers. If you want to hear more about how all of this fits into the context of our broader solution set, we have a session on Wednesday. T om Gillis is doing it. I'll be part of it. You'll get to hear a lot more of that. Try and make it to that one. With that, I'd like to welcome my good friend and colleague Jason Teller for a quick conversation.
Thank you.
Sure.
Thank you so much. I really appreciate the kind of the technical overview, the product overview, and I'm going to try to talk a little bit, I guess, about the customer perspective.
Absolutely. You know, why don't we start there? Jason, tell us. I'll start with the broad question. When you hear, when you talk to our customers, what are you hearing about AI from them? Their challenges, opportunities? What do you hear?
Yeah, absolutely. I meet with customers all over the world and they're thinking about how to build an AI network. We got to kind of take a step back. There's training networks, training networks, really high throughput, they're bursty, they, you know, the peak to average ratio is, is very high. You think about an inference network. In that sort of network you need, you know, traffic has to go both ways and there's no tolerance for slow uploads, has to be symmetric. Customers are thinking, okay, I got to build this stuff, how do I do that? How do I step into it? It's probably not a matter of if, but when. The when is when. When should I be ready with a sort of architecture. We know that since 2023, sort of stepping forward 12-18 months, the amount of AI traffic, it's up 36 fold. We also know from analyst reports that by 2033, you kind of step forward seven-eight years from now, two thirds of all network traffic will be AI traffic. It's just, again, it's not a matter of if, but when.
Now that's one perspective on the architecture. There is the operation of the network itself, and AI can be used for that as well. Here we're thinking about how would you automate something, how do you assure and use the tools just to make things work better? It's about complexity and you really want things to be simple, simpler when you're talking about very complex networks. That's kind of the operate it. Obviously you just are going to get better results and it's going to be cheaper, it's going to be more efficient, it's going to be more secure. When you do it that way, now you can even take it a step further. What do you think, what are we doing with AI, right? Like what services are you delivering and how would you monetize that stuff?
Here, like something very simple, maybe you've got an AI model and you want users to, to use it. If you don't have, it's not performing well, then user goes, all right, like I went to it, I didn't get a result quickly and there's, there's slow adoption. That could be a problem. You could take that forward to what if you're talking about like a self driving vehicle or autonomous cars, in that scenario, you know, a glitch in the network or connectivity could be disastrous. Being able to assure and apply and say, yeah, the network's going to work, it's going to work the way we think it will. For a service provider, actually being able to provide an SLA and a guarantee that when I say it's going to work, it's going to work like this, that's a big deal.
It is kind of stepping through the what type of network, how would we operate it and then how would we monetize it? I think that is pretty important. What you mentioned about the responsiveness of the application is very true because there is research that shows even when you talk to different AI tools, right, whatever is your favorite, you know, pick your favorite Claude or Gemini or OpenAI's ChatGPT, whichever tool is even like a couple seconds later than the other, people tend to move away from it. It is almost as if responsiveness in some cases matters more than the accuracy. Even accuracy is important. If it is a small factor of difference, the responsiveness is what it is going to. Yeah, we are impatient. We want instant results. It is important.
I think finally a big piece from customers is just make it simple, simple, simple. When we think about simple networks, operationally simple, but also convergence. You talked about that on one of the slides. You're seeing IP and transports start to come together. They have been coming together and that's a big part of how you would operate this. It just, it makes sense and you got to do it that way. Now our customers, many customers, really splashed out and went hard after the edge and 5G. Why is the AI edge a little different?
That's a question we get all the time, right when we say, hey look, you've got the opportunities, monetization opportunities by getting into the AI edge business with your own inferencing infrastructure, potentially AI cloud infrastructure, they immediately tell us, hey, we've seen this story before in the whole 5G era. We went and invested into 5G edge infrastructure and we couldn't monetize anything. They are skittish, understandably. I think this is fundamentally different now in the sense that there's two things. One, I think the use cases and the consumption around AI, we are starting to clearly see use cases that are valuable how AI, which in the 5G case, the main story was around latency. If you can improve latency, things will, things will be better. For users that didn't really pan out. Latency turned out to not be such a big issue.
There are real use cases for consumers emerging around AI. I think also the service providers are getting smarter in the sense they're saying, we're not going to go alone this time. The mistake that they made in the 5G age was they invested a lot all by themselves expecting, you know, new use cases and money to flow. Now we see service providers, providers partnering with the right players. Verizon, for example, as you might have heard, they made a public announcement, I think some months ago, three months or four months ago, where they said, we have the facilities, we have real estate, we have power. Like I said, in many cases for inferencing, 5, 6, 7 megawatts is more than enough power. Verizon has many facilities that have up to 20 megawatts of power.
They have fiber, so they can use that, which is their foundational value. They are partnering with a company called Vultr, which is a GPU as a service provider, and saying, you guys have that expertise, you bring that in. They are partnering with vendors like us and saying, you guys bring the infrastructure for AI. Now let's go all in this together and then go to market. It is a shared risk, shared reward kind of model. Yes, if it takes off and becomes wildly successful, they share the rewards, but they also share the risk. They are not doing the same thing they did with 5G. You brought up, see that a lot.
An important point in there, which is there is something to monetize here. Being able to assure these services is kind of a new and important source of revenue for a service provider. Exactly. Now, I know you talked about some of the architecture, but, you know, what are customers doing right now to prepare their network for AI?
Yeah, so right now it's a lot of understanding how the ecosystem is evolving, what the implications are. We see a lot of customers, the conversations are around could this business model work for us. Right. In terms of monetizing services right now, the emphasis still is very much on what does AI traffic, what will AI traffic do to our networks and how can we be ready to handle it? That's where still the primary focus is.
If you think about how AI traffic is going to look different from the traffic that we have now, you just have to look at how applications have traditionally been served over a network. Most of the time you go in, maybe you want to watch a Netflix video or something, you go to a location, it hits a cache and that video gets downloaded and it's a single stream, more or less. Even if you're accessing a cloud application, it's these single streams that go to one location and come back to you. With agentic AI, we've already started seeing this where agentic applications are now starting to become prevalent. The agent sitting on the endpoint, it doesn't just go to one location, it's actually talking to 4, 5, 6 different inferencing clouds to resolve the query that the user had made.
There is a lot more multi-tool directionality of traffic. Also, some of the classes of applications we are seeing emerge around security, leveraging video, there is a lot more upstreaming of traffic to the nearest inference cloud. We are seeing a lot more upstreaming, we are seeing multidirectionality of traffic. The infrastructure needs to be able to handle that. They need to be able to, in some cases, enterprises are talking to service providers and saying, I've got these AI applications that I'm gonna be running in my network. How can I get assured service? Service providers have to think about, in the context of these AI flows, how can I construct the right, you know, use technologies like segment routing to create these tunnels with a higher degree of assurance, and then they need tooling to be able to actually guarantee that.
That is where we see a lot of the preparation. The services piece we've talked about, right. How can they leverage the AI factory, use it in their data data centers. That's also a conversation we are starting to have. You know, we'll stay close to this obviously, but it's interesting times for sure in terms of how much traction we're seeing right now.
Absolutely. I guess I'll try to wrap up my thoughts on it. We've got a vision and you're going to hear about this this week. There cannot be AI without secure global connectivity, without the pipes and wires, the service providers, the operators that connect everything. It's part of Cisco's innovation. We've always been part of this. It's part of our legacy. We're completely committed to our customers evolving in this area.
It's about simplified networks, resilient networks and I kind of miss talking about it, but you know, autonomous networks as well and customers are on different journeys with this. You hear about autonomous networking and there's different stages of it. Customers are at least thinking about how can I get to an automated state and do it small, start somewhere and then gradually grow to full autonomics and data intelligence. This just will enable quicker time to market, quicker monetization, ability to deliver new services. It just, it all kind of builds on itself.
Yeah. The how for everything that Jason said is really agile services networking. Right. New routers designed for the AI era, new model of connectivity, AI enabled tooling to operate the networks and drive towards autonomy. Finally, the ability to bring in services gives service-wide account customers an opportunity to monetize new things, leveraging their infrastructure. This is what we are bringing to our customers. You know, I would encourage you, the stage, the show floor is right here. You can see all of the details behind what we talked about right there. You know, we want to lea ve you on one note. Do not just take our word for it because a lot of our customers have actually adopted the early versions of these technologies and they are already realizing huge gains. Even as we wrap up, we are just going to roll a video so you can hear from the customers themselves how agile services, networking, or aspects of IT are already helping them. Thank you, and we will run the video now.
The problem we're trying to solve is one of latency and cost. Our main challenge was in ensuring detailed and coactive network performance monitor across multiple network domains. The main goals were to reduce the complexity and also to reduce the cost by 40% and then reduce the power consumption by 50%. Our partnership with Cisco plays a key role providing innovative solutions, supporting us in addressing the future demand from our customers. We realized when taking a look at modern equipment, modern capabilities and capacity, we can reduce the cost to deliver a bit by over two orders of magnitude and get about three orders of magnitude of capacity in our fiber network back by moving to a metro routed optical network. Cisco has significantly contributed to our business goals through strategic collaborations.
One notable example is the redefinition of our next generation broadband architecture, enabling the connection of up to 100 million homes. Cisco Provider Connectivity Assurance and ThousandEyes were the perfect solution due to their ability to provide real time and end to end visibility across multiple network domains. By streamlining the network into a unified network architecture, Cisco played a major role through Agile Services networking. The Cisco Mobility Services platform has been a differentiator for us. It's allowed us to give flexibility and control back to customers. They were able to look after their provisioning, their lifecycle API integration to their environment. They were able to do things smarter. Being able to see from a dashboard perspective what's going on at a glance. Visibility and metrics is massive for our customers.
These tools allowed us to proactively monitor and resolve issues, ensuring that every aspect of the digital experience was optimized. We partner with Cisco to be able to build those architectures, focus on the right equipment in the right time with the right capabilities that allows us to pay as we grow and rack and stack, but also, more importantly, one in which we can fully distribute out those capabilities to where our customers need it. What I like about Cisco is that it's a partnership. It's not just about the big building blocks, but they also listen to the customer and work on specific needs on sustainability. On visualizational services, we are partnering with Cisco and that is the game changer, is the partnership.
Great, thank you for joining everyone.
All right, guys, thank you so much for joining us. We are back here again. My name is Robb Boyd. I'm here in the studio. Want to encourage you to keep interacting with us at Cisco Live no matter what your platform is. There are probably a team of Cisco people ready to interact with you there and answer your questions. Want to keep this moving. Steve is out on the show floor staring at me intently through the monitor. Steve, I'm going to let you set up what you've got out there. Sir.
Thank you so much, Rob. I appreciate it. Yeah, down here in the world of solutions, right in the heart of it, in the Cisco Showcase, got brunch happening all the way around us. I wanted to show everybody here on the live stream one of the coolest things I think here at the show. It's the wayfinding system that we use to guide guests, visitors, attendees here at Cisco Live into Cisco and show them what's happening across this massive showcase. To do that, we're using Cisco Spaces. This over here is one of my favorite human beings in the world. I'm not just saying that. She's a de ar friend of mine, Katrina Spanghans. Katrina, what are we showing people and how are you using this to storytell about Cisco?
Typically what we do here is we have this map right here. It's an interactive map, again built on Spaces. Now for the purposes of this, we're going to pretend that Steve is a network engineer. I know, a stretch. Steve has been to Cisco Live a bunch of times.
Now when attendees come up here, it is for those of us who've been here a bunch of times, we've organized the showcase a little bit differently this year to sort of showcase this edge to edge portfolio that we're talking about. In the past you've seen we have security, we had networking. Now we've organized it around about four different journeys that you can take, things that are relevant to the IT environment in your organization. Over here you'll see we have digital resilience, and that's essentially observability with ThousandEyes, Splunk. We have security demos here that also relate to that. We're going to move over here to the middle of it. That's future-proof Workplaces. Right. That's where we have all the networking staff.
Katrina, just before we go through each of the different segments, how would you guide me to where I want to go? I'm a networking guy. I want to see secure AI. How do you get me to it? What I do is when I find out what you're into, it's about finding you what you're looking for. There's a lot in here, so that can sometimes be a little bit complicated. What we do here is we go in here and there's a search function. Right. I go, Steve, what are you interested in? I want secure AI. AI is what I'm here to talk about. What we're going to do is we're going to type in AI and then we're going to type in security. Right. Okay, we've got 45 seconds left in this segment.
Let's get me to where I want to go. When we're going to look here and we're going to see all the different things. Say something you're interested in here, just tell me. All right, I want to go to AI agentic frameworks. That's perfect. Fantastic. Now, right here. Now you get directions. Right. Now we can show you where it is on the floor. As we're walking through there, we can also see some other stuff that you might be interested in. The same area that relates to this. For some of the folks that are out there who are looking for security, security is embedded in all these different four journeys that we have here in the showcase floor. That would typically.
You can see here, it'll take you all the way over here to where we are showcasing that. Let's. Cool. So, so, so meaning as I'm on the way, you might be able to say, yes, this is where you want to go. But Steve, why don't you also check out along the way the way we're empowering your teams and delighting customers or stop into the industrial operations space and you can basically add something to the story. Exactly. The idea here is that you may, there's so much stuff in here that you may not have thought of that may be relevant to what you're looking for. Love that Katrina. Thank you my friend. I appreciate it. Rob, let's go back to you in the studio.
Steve, I want to take a quick look at a video with Mauricio Cruz that Lauren did for us. Let's take a look. Hello, it's Lauren here and I am with Mauricio Cruz, Senior Director in our product management group for provider connectivity. That's a lot. It's a lot of great stuff too. Mauricio, looking forward to talking to you today. Thank you for your time. Can you tell us what the biggest challenges are that we're seeing in our service provider and enterprise customers in this AI era? Hi Lauren, thank you for the interview. Definitely many, many challenges for our customers. If I can sum it up, the first one is how to leverage AI to improve the efficiency of the networks, how they plan, how they operate, how they troubleshoot and how they deploy networks.
The next one is, as we have heard through the Cisco Live, AI requires a lot of bandwidth, a lot of capacity, and a lot of power. They are coming to us, asking us how can we help them build power efficient, high capacity, easy to deploy networks. That is what we have been doing and we have been introducing this for quite a few months now with Agile Services Networking. Amazing. How is Cisco Support supporting our customers on this AI journey and what are some of the innovations we have made with our Agile Services Networking? Yes, we are supporting them by doing many things. First, on the products themselves. We have recently introduced a new class of routing portfolio for service providers and hyperscalers.
We have a full set of products from backend networks, front end networks, high capacity, and we are also helping our service provider customers refresh and renovate their Metro Edge network with a new set of portfolio based on Silicon One, going from sub 100 gigs to terabytes of capacity. All of these are power efficient, compact, ready for the AI networks of the future. We are also introducing AI capabilities in our automation suite. This is very important for our customers being able to have the knowledge of being able to do predictive, prescriptive automation. Yes, I love that. I love the proactive, like you said, a prescriptive approach. Tell me what's next on the horizon on this AI trajectory and where can we go to learn more information? Yeah, thank you. Great question. What is next? Of course, in the portfolio we will continue to refresh.
AI innovation is moving at a rapid speed. We will continue to refresh our portfolio, go for the bigger speeds and feeds from 800 gig to 1.6 terabyte, ethernet, 3.2 and whatever comes next, liquid cooling. Also in the context of automation, I think the next step is an agentic framework. Everyone's talking about agentic framework. We have an agentic framework today that is helping our customers how to prescribe what they need to do and how to troubleshoot what they need to do. For more information you can go to www.cisco.com, search for agile services, networking and you'll find all of the information you need. Amazing. Thank you so much for your time, Mauricio. We are going to send it back to the studio. Excellent. Lauren. Lauren, thank you so much. Thank you Mauricio as well.
It's interesting as you guys are covering service provider over there, the person that I have always gone to for my service provider education, because that's been a hard one education for me. Please welcome Kevin Wollenweber. I'm get your title right? SVP and GM Data Center and Internet Infrastructure. You got it. See, it helps if I read and just slow down a little bit. My friend, it's so good to have you back because, you shifted a little bit in your role and it sounds like you're in a really good place to help us coalesce a bit of the information that we've seen here. Whole bunch of new complexity, but a really clean message of simplicity and consolidation in a lot of areas. Is that indeed what was planned for? Yeah, 100%. I think it all started from, from the keynote.
With G2 as you kicked off, having a Chief Product Officer and having someone with kind of a singular view and vision and direction allows us to get a lot more consistency across the portfolio. You know, you and I talked, we always talked about service providers. We've now brought together data center, networking, hyperscale service provider into one organization. Because as we look forward at AI, the tools and technologies that we're using in those spaces are getting a lot more consistent. Having them together allows us to be a lot more efficient in what we do. It's funny because we've always talked about this and I feel like it's Cisco's strength, the amount of things and data we can draw upon. We're out encouraging customers to realize how valuable that data is.
Now it feels like we're giving them the tools. Observability has taken off like wildfire, but we also have greater tools for doing that than we ever had before. Tell me how, in your perspective, is infrastructure changing to really pull this type of things from us? Why is it needed now? Think about November of 2022, when this kind of gen AI wave started. We wouldn't have even been talking about what we needed to change in the network for AI. Since then, most of the builds and most of the work we've been doing have been with the large hyperscalers. We built things like Silicon One. We built some amazing technologies, allow them to scale out these massive training clusters and they're building, you know, their training models. They're building massive models.
They're taking all the data that they have and they're firing them at the models to get them to learn. Right. What we're going to see as we evolve is what we call inference, and it's going to be the usage of those models. A lot of that's actually going to be with our enterprise customers. We're in that kind of transition of building the models into using the models, and the users are going to be those enterprises we work with. Yeah, I really feel for the enterprise right now. It's same with us to a certain extent, maybe even more so for us, because we have to stand up and act like experts in an area that all of us are learning as we go. It's not as if anyone already holds all the answers and they're just slowly leaking it out.
Enterprises are under incredible pressure to make something happen in a unique way. I'm curious, what do you brag about from a Cisco perspective as of the announcements this week, when you want to go out and say, yeah, Cisco's the right company for you to have faith in as we build this together? Yeah. First of all, it starts with the technology components and we talked a lot about Silicon One. We talked a lot about our partnership with NVIDIA, the leader in the AI space right now. Taking our technologies into their ecosystem, bringing their technologies into our ecosystem. Now we have all the tools and pieces that we need.
When you think about enterprise, I think one of the things that's always made Cisco strong in the enterprise is our ability to reach deep into that ecosystem, bring partners, bring the channel ecosystem, bring our technologies and deliver simple, easy to use solutions. When we talk about things like Hyperfabric, what we're basically doing is taking those building blocks that the hyperscalers were using and putting them into simple, easy to use, kind of easy button type of approach to consume. Yeah, yeah, because we got to remove as much of that complexity as we can because there's a lot of natural complexity we're going to be dealing with regardless. Let's build a foundation that at least gives us a good starting point. I'm curious about our partnerships.
Obviously Nvidia feels like they partner with everybody, but we've done some really in depth stuff and that's not new for this. We've announced previous stuff but UCS continues to accelerate and so many other components even to and beyond them load balancing with our Cilium, you know, purchase and things like this. What do these partnerships mean for us and being able to execute on that? First of all, I think with, with Nvidia we're doing things differently from the traditional OEMs where we're not just reselling their technologies, we're actually doing technology innovation with them.
We've taken our Silicon One, which we think is industry leading and is powering a lot of these hyperscale AI builds, and we're embedding it into their reference architecture so that as they build large clusters and move into enterprise, our enterprises can do it with technologies that they know and love and understand. From Cisco, we're also taking their, what they call Spectrum X, and we're bringing it into the Cisco ecosystem. The crazy thing about AI, although it uses all the same networking, storage, and same components of a network of the past, the future network has multiple networks, one to connect the GPUs, one to connect into the AI ecosystem. Building simple and easy to use technologies, operations management, orchestration on top of great silicon and hardware components is just something Cisco's always been able to do.
What I like is that you're saying that's a bidirectional relationship. There's that that we provide to them and that they provide to us, which is really where we always want to be in a partnership. Final point, there was, I think it was Chuck that said something along the lines of, you know, in the 1990s and you were at Cisco Frame. I came in right as it was ending and it was just, you know, just orders and bandwidth and how big of routers could we sell and how big of pipes could we do? It was a great time. He was saying that it's just like that now from what he's seeing, but faster, more aggressive. I don't think that. I honestly feel like that's a true statement. That's not hyperbole. What do you mean? 100%.
It's funny because I have been here for almost 30 years and it's the fastest moving time I've ever seen in technology. What we see in AI, what maybe sounded impossible yesterday is happening today and it'll be very, very different tomorrow. We've got to stay ahead. We've got to continue to innovate. We've got to bring security into the network because as agents are running around the network and you have billions of agents, we have to think of those as threats and potentially bad actors and think about fusing security into the network in different ways. Only a company like Cisco can actually bring together all of these different pieces into one place. That's perfect. Kevin Wollenweber, I want to thank you for everything that you continue to do and educating me on what we're doing going forward.
We're going to talk about campus security, I believe next on center stage. Please stay right where you are. Enjoy this next dive into the detail. We still have a whole lot more coming. We'll see you on the other side after this Center Stage presentation. Take care. On this session. Let's welcome Nick Edwards. Thanks, Amy. Okay, good morning, everyone. I'm Nick Edwards. I'm responsible for product management at Cisco for the enterprise switching products. Glad to see everybody here. It's day four. You're the people who either couldn't get an earlier flight, partied so hard last night you missed your flight, or you're just really committed to learning more about our switches and what we're doing here. Thank you. Appreciate your next 30 minutes.
Matt and I are going to go through the next level of detail so you can better understand what we mean when we're talking about switches and products built for an agentic AI powered campus. I want to start through the lens of the workplace. Workplaces are powered by the network to enable an organization to hit their business goals. This is a video that's available on the website of the Mayo Clinic. I want to bring this into very sharp relief for you. If you've ever been to an emergency room recently. I've had a family member go recently. Thank goodness everyone's okay. But it's a University of San Francisco's medical center, UCSF, top end, top flight, well funded hospital. The emergency room was packed, all the beds were filled. People were in hallways. People are moving devices around, clinicians are taking things, cables are everywhere.
Hospitals realize they have to evolve to embrace the cyclical demand as well as the evolution of new technology. Here is how the Mayo Clinic's thinking about it. They're thinking about building hospitals and clinics that are composable, that can respond to demand, respond to the evolution of their workforce as well as patient care. One day the room is a clinical facility for doing diagnostics. The next day the room is going to be something for training, conferences and so forth. Workplaces are changing because of the evolution of technology. It's only going to accelerate with AI and agentic AI. For hospitals and healthcare facilities, the network doesn't just push packets. The network is a clinical services platform. They care about maximum uptime, securing patient data, delivering better and better capabilities for digital diagnostics, robotic surgery.
These are use cases that require an evolution of their network to give the outcomes that they need. Same thing with retail. Similar trends emerge. Highly performant, securing data of transactions. They want to keep the cash registers moving with no downtime. Increasingly they're going to have more personalization. If you remember that movie Minority Report with Tom Cruise, he's trying to escape the cops. He goes into the gap of the future and it's like, hey, Tom Cruise character, welcome back. That sweatshirt that you wanted last time's on sale. All these types of things are coming sooner than we ever could have imagined. Lastly, when it comes to manufacturing, the network is not just pushing packets. It's a production optimization platform, platform, maximum uptime. Because pushing products off the production line is money and needs to be secure.
It needs to increasingly use AI to optimize the flow and ultimately trickle through the entire value chain, supply chain of parts and giving real time, location and tracking. I was talking to a customer last week. They plan to put 5,000 4K 60fps cameras outside their production process facility to protect the supply chain. Imagine what that's going to do to the network. The networks that we've built over the past one to two decades need to evolve to embrace this future. Some of the changes that we're seeing when it comes to agent as we talk to customers are first, traffic patterns are going to change. It's not just a classic. You go to Cisco.com a small payload and then downloads a bunch of PDFs, videos, text, all this stuff. It's going to be more uploads. You saw this from G2's keynote on Tuesday.
Users and devices are everywhere. Everything will be talking to everything. More north-south, more east-west traffic and just an overall elevated traffic pattern that has higher demands for lower latency, deterministic latency environments. Edge computing is going to become more and more prevalent. AI workloads and training workloads, all those things, those are definitely going to live in the data center. Increasingly applications are going to require edge compute to do inferencing at the edge of the network. IT and OT convergence, IT use cases are going to permeate to OT and vice versa at an increasing rate. Customer that we're working with, they work in the mining industry, they drive these boring machines well under the earth. Historically those have introduced network connectivity problems, safety problems, all these things. They want to go to autonomous vehicles that can do this.
It will provide better safety, reduce costs and provide better efficiency. Lastly, as these networks evolve, they're going to need to be more secure because the applications are going to be addressed, are going to be evolving so quickly on the road to a future end state that will deliver networks that are autonomous, that are self learning, self healing and self optimizing. The networks will evolve and organizations will only be able to go as far as their networks will take them. This week we've introduced a host of new hardware products that are going to power an architecture that is ready for this AI future in a secure manner. These devices are scalable, built for this type of environment. These devices are going to deliver with our overall system of platforms, operational simplicity in ways we've never been able to do before because of AI.
We're going to deliver security embedded and infused into the network because prior approaches have carried us so far. They're going to be too brittle for the dynamism that agentic AI introduces. We've introduced a variety of products in all domains, switch, route, wireless as well as IT and OT. In some organizations they're going to need all these capabilities because the OT environment, for example, is going to need better wireless, different types of wireless, URWB and Wi-Fi. They're going to need higher capacity switching with deterministic low latency. They're going to need routers that help deliver some of the security capabilities. In the switching LAN, which my team is responsible for, several of them are sitting up front. We have, thanks Sean, the 9350, which is our newest access product. We have the 9610, which is our core product.
These are all powered by Silicon One. Because this allows us to maintain our control of destiny, respond more dynamically and more quickly to the demands of the network. These things are going to be post quantum secure. We'll talk about that a little bit more briefly. Lastly, these are going to be the most energy efficient switches we've ever produced. If you haven't been, go to the future of workplaces. All of that entire environment is powered by 48 port 90 watt POE and we're going to be introducing hibernation capabilities to allow this to be as efficient as possible. Now look, this is a lot of new stuff that's coming. The network's going to be changing rapidly. Operational simplicity is key. What I'd like to do is call up Matt Gillies to the stage. 25 year Cisco veteran.
He's been with customers in the cycles of network technology. He's going to make this real for you. There you go. Great, thanks Nick. Great, thanks Nick. We've heard for a long time that there's rich capabilities that you all require in your networks, but sometimes we've actually made that really hard to consume. What you're going to see is actually how we've been accelerating the capabilities that we've had in the Catalyst portfolio, as well as the deployment, flexibility and the automation that we get with the Meraki portfolio. There's kind of three key areas that I'm going to touch on and then I'm going to show you demos that actually bring these to life. The first is we've been working really hard to unify the capabilities between Catalyst and Meraki.
This means we're going to introduce capabilities that allow us to look at brownfield environment, that allow us to actually cloud manage a campus fabric and as well, you know, with all of this we want to really drive simplicity and that's actually going to be done via some AI tools and solutions that I'm going to show you. The next has been really key, is that we know today users and applications can be anywhere. The key is actually positioning vantage points, agents and vantage points that can actually measure the user experience from anywhere. With ThousandEyes we've got this ability to put agents on the endpoint in owned infrastructure, as well as leverage agents that are running in unowned infrastructure. This means that the more vantage points you've got, the less blind spot you've got across your entire infrastructure.
Lastly, I'm actually going to show you a demo, not only of our assistant, but Canvas, which is a groundbreaking new solution that we announced this week. Let's get into a demo. What I'm actually showing here is we're going to start by onboarding a brownfield device. This device identifier that you're seeing right here, this actually gets generated by the device using the CLI, for example, to onboard that device onto the dashboard. What's actually happening here is we've done this in the past, but it was never this seamless. It is today. What we're actually doing is we've completely changed the way that we do plug and play. What's happening here is there's actually NETCONF and YANG under the covers and this device is being natively onboarded to the dashboard.
If we look here, what we do is we actually enter that device cloud ID, and what it's going to give us the ability to do is add that device to the inventory. Once it's in the inventory, then we can go ahead and add that device to an existing network. I'm just going to pause here. This is actually a key new set of capabilities that we have. You'll notice that we have two operating models. The first operating model is a fully managed device, but we heard from all of you is what do we do for those capabilities that we've got on a device that have not been exposed to the dashboard yet?
These are capabilities that may have existed in iOS XE for years, but all of you actually don't want to have to wait for them to actually be exposed directly via the dashboard. This is where this hybrid operating mode comes in. What we can do with hybrid operating mode is that we can provide the credentials to actually talk to that device. We're here providing the username and password and the Enable password. When we do that, we're actually going to be able to onboard that device and add it to the inventory and place it into a network managed by the Meraki dashboard. We're going to see in a second here it's actually going to show up in the inventory. At the very bottom here, we've got that 9200, and that 9200 is operating in hybrid mode.
Now when we've onboarded that device, you can see here that we've got some visual indicators of that device. We can look at things like port status, for example, or we can look at the troubleshooting tools where the device is actually located. Some of the UI components we can actually see from that device. Now what's different here is we've now got this new Cloud CLI. If we launch over to Cloud CLI and launch a terminal, now we've actually got a fully interactive terminal. This is different than you may have seen in the past. This terminal actually allows for read and write commands to the device. In the past we had something called monitoring mode that was just show commands that was read only. Now what we're actually delivering is read and write to that device.
That means that we can unlock 40 years worth of IOS capabilities, IOS XE capabilities from the dashboard. We do not have a flag day. We are not waiting for something to be exposed from the dashboard so that it can be managed from the cloud. We just have a simple example here of actually doing a configuration to change the host name of the device, but we can actually do a write on this device. Now the configuration for that device is saved locally on that device, but also periodically backed up to the cloud. This is a game changer. Again, this allows us to do brownfield migration and easily onboard these devices to the cloud. A huge, huge capability that we are introducing. Let me jump to another really, really exciting capability. This is cloud managed fabric.
This capability is interesting because for years customers have wanted to deploy segmentation, but segmentation is actually really complicated. It's hard to deploy, it's really hard to troubleshoot. What we did is we took the fabric technologies that make up troubleshooting or make up fabrics, actually segmentation technologies like VRFs, for example, and we've actually integrated them and leveraged the power and the automation that we get with the Meraki dashboard. Here you can see we can actually define a new fabric and we can define the switches and the roles that the switches have.
Say, for example, we want to define a spine or a leaf switch, we actually want to deploy VRFs, we want the segments, the actual IP segments that we're going to use for that segmentation, and we can actually put all of that and simply deploy it from the dashboard. This radically changes the way that we actually automate the deployment of a fabric, automate the deployment of segmentation as well, as allow us to troubleshoot a fabric. Again, a huge complaint I would say from customers historically on why they couldn't use segmentation, but something that we've really leaned in into and made it possible to manage a cloud managed fabric, you know, from the dashboard. That's an exciting advancement. Now the last area I want to touch on is what we're actually doing with Agentic Ops and there's three things to think about here.
The first is that we've been building a unified AI assistant for Cisco, not just for one part of the portfolio, but this is actually going to allow us to use generative AI to do things like ask the assistant how do I configure something, it provides documentation requests or how to actually solve a particular problem. You can think of this assistant as having skills, knowing how to actually having subject matter expertise to configure a firewall or a switch or a router, but doing this for all of Cisco. The next part is this, this assistant can actually be plugged into that new solution that we announced called AI Canvas. AI Canvas is really the first of its kind in the industry.
It allows us to take that assistant and build a collaborative workspace that allows us to do things like get to root cause analysis across the entire enterprise much faster. I'll show you a demo of that. What's really innovative here is that we've actually built a deep network model. We built our own large language model and we actually trained it with 3,000 Cisco University courses. We trained it with architectures, questions and answers, and best common practices across Cisco. This model is actually a subject matter expert in networking and we're going to take advantage of that in this demo. I'll show you. Let's look at a typical scenario here. We've all seen this before where we've got a ticket that, let's say, is coming in from ServiceNow.
We can integrate ServiceNow into this solution, let's say using something like MCP, the Model Context Protocol. The Model Context Protocol has really been an accelerant for generative AI. It allows us to connect tools and data, in this case collaborative workspace. What you're actually seeing here in this canvas is on the left here. We've actually integrated our AI assistant that I touched on earlier. What you're also seeing here is the use of generative UI. Not generative AI, generative UI. These are cards that are actually being populated real time with data about the problem that we're actually trying to solve here. If we actually just advance this a little bit, what's actually happening here is we're pulling data about this particular incident.
You can see here that we can actually add and create new cards related to this particular problem. We've inserted that ServiceNow ticket. Now we can interact with this assistant. The really exciting thing is we actually can invite other collaborators. If we had someone that was a switching expert and someone was a routing expert, we can actually invite others to this canvas. This is a single canvas that's shared data and it actually tracks the status of the incident. In this particular case, we've actually invited a subject matter expert called Will, perhaps. Will is actually the tier 2 escalation resource that we want to invite to actually solve this problem. Now we have multiple people actually collaborating on this canvas. What we can see, we've actually got some packet loss here.
One thing that we might want to do is actually ask the assistant to go look at the data related to the application performance, but also the network performance. What's actually happening here is the assistant is reaching out to Splunk and Splunk is actually pulling the data relative to the application and it's actually going to build a real time widget, one of those cards on the canvas, that actually maps out the application performance and correlates that with the network performance. You can see here, they're actually going to do that. We've got that statistics here. The really interesting thing is it's actually done all of this real time. We're tracking the states of this incident as we actually make the changes.
Now what you can actually ask the incident or the assistant is what are some solutions to actually resolve this problem? How do I actually resolve this? It actually provides some solutions, and probably an obvious one is that we might want to actually implement a QoS policy here. It provides some suggestions and we can say, well, we like suggestion number one. We can actually have the assistant and you can see here there's a human in the loop. We still think there's going to be a human in the loop to actually push the button and make the change. We've asked it to go implement the solution one, implement that QoS policy and then we can actually see real time. We're going to generate another card and track the performance actually after we've actually implemented that solution.
That's really, really impressive that we've connected the dots between looking at the data related to the problem, but also related to the solution and the impact. Now it's actually monitoring the solution right here. You can see over here in this graph, we've actually got a new card and the card is actually showing when we implemented the change with the QoS change and now the performance, the packet loss has returned to zero that we would expect. The next step in the instance, we probably need to let the CISO or let the NOC know that this problem has been resolved. What we can actually do is generate a report that details what actually happened with the initial incident. We can actually share that with our colleagues and peers, summarize that for the executive team.
We can actually generate this port, it's got a text report, we can actually hit download and there's a PDF of this report that's now available. Really exciting how we've actually brought together multiple collaborators, multiple tools, got to root cause analysis without all of the ticket or baton passing that typically happens between multiple teams. We think this is super exciting. We're going to be looking for customer feedback towards the August timeframe and we plan to ship this in September. You know, for the next section, I think, you know, what we've heard from all of you is that customers don't want to think about networking and security as these two different sets of technologies. Nick, let's talk about how we're actually, actually infusing security into the network and some of the technologies we're investing in. That's good. Thanks, Matt. Yep, bravo.
As Matt mentioned, this future world is going to require a completely different approach towards operational simplicity as well as security. You're not going to be able to deliver security for users and computers and tablets and Apple applications at the application layer unless everything underneath it is secure. The network connectivity, the network infrastructure itself. We are building that into our product at every layer, starting at the switch, upgrading connectivity methods, and ultimately delivering more and more capabilities to deliver security via the network. It's all fused in, so you don't have to think about it as much. I want to start with the device itself.
There have been cases in our industry in the last decade where back doors have been shipped on software of networking devices, where chips have been installed on motherboards that get distributed to different customers so nation states and adversaries can capitalize on this stuff. We at Cisco are such a massive company. We have distributors, partners, customers everywhere. It's imperative for us so that you know that when a business product shows up on your loading dock, it has not been jailbroken, it has not been tampered with. What we shipped at the factory is exactly what you're using. We have a host of capabilities that we refer to as the trustworthy technologies. This verifies that everything from the boot on up is what we've shipped. There's cryptographic signatures in there to make sure that the software hasn't been hacked, that no other applications are running on it.
You can trust that when it shows up, it's exactly the code that we want you to have. Now, building on from that, you heard G2 mention Live Protect. This is going even further. CVEs in all products in it continue to increase as software gets more complex and more sophisticated. In 2020, 18,000 CVEs, 2024 over 44,000. We would love to ship perfect code. Unfortunately, we don't always do that. We always try to get better. Like every vendor, that's a challenge. We're always vigilant. What we're doing is introducing a capability called Live Protect that will give you the opportunity to protect from CVEs without doing a full-on software update that you can do later at your own convenience. I want to walk you through a demo to make it real. This will be available on the controller of your choice.
Whether it's Catalyst Center or the Cloud Dashboard. We already have all the context of your network, what devices, what code they're running. When a CVE occurs in the network, we're going to give you information that describes what that CVE is, what the scope is, what's at risk in your network. As you see here, you'll see different CVEs that may be impacting this customer's network. We'll give you the CVSS score. Historically, security teams say, hey network folks, there's a CVE. Figure it out, keep us safe. People are scrambling, trying to read it. Okay, which ones apply? They're trying to map it against their network. Understand what this actually does. Is it impacting my environment? We're going to converge all that context in your network controller. You can click through, see the CVE, all the detail, what the score is.
The Live Protect will give you the ability to understand what is actually at risk and what it's doing. You can see what device is impacted in your network and then you can decide to deploy this technology. Now what's really cool is this technology can be deployed in full on protect mode and what it's doing is it is operating at the kernel level to identify if a CVE is trying to target the network device kernel, and then you can decide whether or not you want to deploy that fully or in observation mode. Some customers, based off of criticality, the device and all these other things, they may want to put in observation mode and then they'll get the alert if it happens. They can follow up as needed with the broader IT security model infrastructure they have in place.
Or some customers, depending upon what's in place, they actually will want to maintain the criticality so they can have it in protection mode. Then 45 days later, when the software patch is available at their convenience, they can download, install, upgrade the entire system. We're really excited about this because this will help deliver more value with a lot of the Cisco security products that are already in your environment. Here's an event log. This is going to be an alert that you can ingest into Splunk or other security devices inside of Cisco. You can quarantine the endpoint in your environment that actually may be triggering this case and then you can do the research, reimage it and track down how it got compromised. This is one of many capabilities that we're going to continue to infuse in the network.
Another one, we have been delivering post-quantum compute-ready crypto for over a decade. We've done a lot of progress there. We were one of the earlier ones. We're continuing to expand that with other ways to deliver better cryptographic protocols. As you're going IP Sec, MacSec, WAN MacSec. Just now the standards for quantum-ready crypto are getting ratified with the likes of NIST and European standard bodies and Asia Pacific standard bodies. Even though quantum computing is not in a place yet to decrypt modern crypto methods, there are cases where security researchers are concerned that people are harvesting data now. When the quantum computing is ready, they can decrypt it. Shame, embarrassment ensues. National security is at risk. Financial crime is rampant. Customers are starting to think about their long-term investments.
When they buy new hardware, they know they don't need to do a forklift later to replace it for better quantum ready cryptography. All this extends investments that you've made with Cisco on the security stack. We've been doing things for a long time with SGT and ISE and micro macro segmentation. We're delivering a lot of capabilities to deliver security in the network where you need it, in fabric environments, non fabric environments. We're continuing to invest in this with quantum ready Lypertek and as you heard G2 say earlier this week, HyperShield ready technology that's going to come ultimately to deliver a security model that extends zero trust principles deeper and deeper into the campus of the network. As we draw to a close, two things. One, we appreciate your partnership. If you want to give us more feedback, please engage. Join our UX community.
If you want to get more insight into things like Live Protect, scan this QR code, we have folks who can give you a demo, you can learn more, join our EFT program. Lastly, we all know Moore's Law. Moore's Law says every 18 months compute doubles. There's a new law that's emerging from the CEO of NVIDIA, Jensen Huang. He's saying that GPU capacity is increasing at a rate doubling every six months or faster. Six months, it's twice as good from now, 12 months, four times, 18 months from now it very well might be eight times better than where we are today. What does that mean? That means that the networks that we've built need to evolve. The infrastructure needs to evolve.
You need to have products that are built for scale, that are built for operational simplicity, powered by AI and ultimately built to give security that you need baked and infused into the network. Thank you very much for coming. Thank you for your time and thank you for joining us at Cisco Live San Diego. Welcome back. Welcome back. I'm sad our time is almost at a close, but it's been amazing being with all of you here. It's the last day, but guess what? You saved the best for last in a lot of cases, right? We are still here with the information. Non stop foot on the gas, as you just heard from complexity to clarity, simplifying and securing campus networks from Nick and Matt. We are doing a lot of cool things and we have a lot for you to absorb. AI optimized hardware.
Nobody does hardware better than us. Integrated security, nobody is doing that either. What do we mean by that? That's the quantum resistant encryption that we have on the hardware. You just heard about all of that and then we have the life cycle AI integrations and that's that end to end assurance that we keep talking about because we want you to remember it. We don't want you to forget assurance. End to end connectivity. That's what it's about. Now, quick little housekeeping. Just because it's the last day does not mean that we don't want you to continue to share with us on social things. That you love, things that are cool, things that are. You took a nice picture with someone or with yourself. A selfie with a nice exhibit in the back. Please share with us. Tag me. You don't even know me. I'm Lauren.
Find me. I will look at it. I will like it. If any. If nobody else likes it, our amazing team will like it. Please, we want to see you. We want to see your happiness, and we want to see this day, two day, the last day. What am I talking about? Of everything. We want to wrap all that up with a nice bow. Also, if you do not remember, there is still. The Cisco store is still open. If you are still here on site, go down there, grab things while you can. Do not be like me and wait until the last minute because your favorite item may be gone. Please run to the Cisco store before you leave. There is a lot of great stuff. I think I see my dear, dear friend Steve out there in the center stage with Nick and Matt.
Steve, I'm sending it over there to you to talk with these lovely gentlemen. Thank you so much, Lauren. And they are indeed lovely gentlemen. You're exactly right. As you say, we do love coming back here to center stage because we get this sort of behind the scenes. I've got Mac, I've got Nick with me. Thank you, guys. Gentlemen, really well done out there. I just want to ask you a couple of questions about the talk you just gave. What would you say is maybe the one message, the one core idea you're really hoping people will grip onto and that will stay with them? After the presentation, I'll have each of you give me that issue. Matt, let's go ahead and start with you. Matt. Yeah. I think, you know, for us, really, the pace of innovation that we're driving is really unparalleled.
In my 25 years at Cisco, the number of product announcements we've done, the innovation that we're driving are really unparalleled. I think what we've heard from customers is that they need to be able to consume and adopt our technologies faster. We're going to do that powered and driven through operational simplicity that's backed by AI. That's really the message that we're driving here, is that we want to make it easier for customers to build secure networking, but actually be able to secure and manage that secure networking much easier. I love that. Nick, I want to shoot the same question to you to let you kind of build on top of what Matt just said, yeah, I think that it's going to be an imperative for customers to embrace this change. Organizations can only go as far as their network will take them.
The pace of change is so fast that competitors in any given industry, the ones who embrace it quickly are going to be the ones who are going to win. You need a different way to operate your network. Run the network while keeping it simple and safe. Hopefully that's a message people take. Amen. That's kind of what G2 talked about in the keynote as well. There are two types of companies out there in the world. Those that embrace the change that's coming at us, and those that are going to get left in the dust. That's not an exact quote, but kind of close to it. Everybody come here to the show, and they come in with a certain mindset. Here's what I want to see, here's what I want to do. Here's what I'm looking for. Right.
If we could start to shift the mindset of the folks who are here, the Siskonians, all across the world in some way, that our team leaders would be able to see things differently or understand what's possible thanks to these new switches, these new announcements, these new releases, what would you want that mindset shift to be? I think the big change for us, as I said earlier, is the pace of innovation. We really see AI as this accelerant for our customers. We're not thinking now just about networking, just about security, as these things as different sets of technologies or products. We're actually showing customers how we can bring the entire portfolio together. That's really going to make their jobs much easier. It's going to be easier for them to actually drive outcomes for the business. This is an empowerment play, right?
I mean, we are empowering. We're putting that power into people's hands. They just have to be willing to receive it. Yeah, that's right. I think, like, we're of a certain age where we grew up with this stuff, but kids today, new people coming into the industry, they have different expectations for what eases and simplicity. You know, we big investors in the ecosystem to train our customers and our users, but that model is evolving. We need to meet customers where they are so they can learn, they can quickly and easily embrace our technology so they can get the business outcomes that they need. Beautiful. Let's do about 45 seconds more here. Think about our enterprise teams, right? How do we get them thinking differently about their campus network?
Say, in light of current AI driven changes that we're looking at the things that we're talking about here in your session? I mean, I think for one, these events are great for learning. All the sessions were packed because I think customers in their gut know that the world is changing. This is all new. It's moving so quickly. I got to keep pace and I need to make sure my teams keep pace. I think from the Cisco folks as well as our customers, there's a genuine curiosity of how we can help each other. We want to hear from them so we can build better products. I think ultimately customers want to know from us how they can operate and deploy better network technology. I love that. I'm going to ask one more question because I got a couple more seconds here. Right.
If we look ahead, what really excites the two of you most here about the role that this new switching platform is going to play in how we are shaping the AI powered landscape in the future? I think for me it's really about the velocity that we're going to get as we start to use agentic ops to think about not just switching in isolation, but really those business outcomes. This ability for customers to do troubleshooting faster and ask questions about how to assess performance and do so using natural languages, that's a complete game changer. Love that. Matt, Nick, guys, thank you so much. Great session, truly appreciate it. Thanks for taking the time to talk to us. Great, thank you. Lauren, let's send it back up to you. Wow, wow, wow, wow. I love the quote. Security is an AI accelerator.
Not a prohibitor, not a deterrence. It's an accelerator. I love, love, love it. Now, let's keep loving all this action and there's some more great content that's coming right up with Rob and our lovely guests we have in the studio. We have two awesome guests. I just want to get the names right here so I don't mess this up as I've done. Marty Jane is VP of Sales and Business Development at NVIDIA. And then we've also got DJ Sampath, SVP, AI and Security for Cisco. With that, I'll turn it over to Marty and let him handle the conversation. DJ, hey, Marty, how you doing? Hey, I'm good, man. Good to see you. Good to see you. Yeah, we've been doing a lot of AI work together, Cisco and NVIDIA, and I want to ask you a question about one specific area.
Before we go there, I want to kind of set context for the audience here. Yeah, please. The modern AI revolution has its roots in 2012 when a team of researchers won the ImageNet competition with GPUs. Yes, that's right. We call that the perceptive AI era. Now, AI could perceive things, objects, cats, dogs. ChatGPT in late 2022 kicked off the generative AI era, a huge revolution. Right. Now you can generate things. With the last few years, with more computing availability, innovations, frankly, from developers like you, we have landed in the agentic AI era. That's right. At the keynote, you and then Jeetu and Kevin Wollenweber talked about what agents can do. We're talking about research agents that can think. All of this requires, this modern AI revolution requires, an AI factory. An AI factory. That's right.
I love that term, what we call a full stack AI factory. It requires storage, coordinate networking, a computer, a software stack. We announced an AI factory with your or the other way. However, it was not just an AI factory. That is right. It was a secure AI factory. Right. What I would like to do, if you would not mind, first tell us what is a secure AI factory. You bet. Here is the thing, right? When you think about enterprises that are looking to adopt AI, one of the biggest concerns that they have is safety and security. They want to make sure that when they are starting to use AI, the models that they are using are not vulnerable to any types of attacks.
When they are using these models and they're using the data that's proprietary to what's inside of their own environments, they want to make sure that at one time this data doesn't leak any confidential proprietary information. It's the deepest secrets, deep enterprises' deepest secrets. That's right. AI fundamentally changes that equation of how you access this data. Right. Tokens of intelligence are being emitted by these AI factories and they absolutely need to be secured. That's paramount to enterprises because if you don't do that, it's complete catastrophe. Like, you know, you just lost your core edge amongst your competitors. When you start thinking about how do I secure my AI factory to be able to generate all of these AI, all of these applications, it becomes very, very critical. This is where Cisco comes in.
Cisco has this huge advantage when it comes to building security products. In fact, earlier this year we launched a product called AI Defense. We've been working on it for over a year before we launched it. The core thesis over there was we need to make sure that there's a common substrate that seamlessly protects both from a safety perspective and from a security perspective, the use of AI, because there are going to be many, many models, you would agree, and they're going to be many, many apps and agents. More coming every day. More coming every single day. Not to mention the fact that a lot of these are open source models. That's correct. That's exactly right.
In fact, you know, if you look at what is really happening from the perspective, the agentic era is upon us, where you're going to see, you know, tens, hundreds, thousands, millions, billions of agents start to be created from these factories. You want to make sure that every agent that comes out is secure by default. Right. Every time you deploy these, you know, the models are safe and you have that common substrate baked into it. We're making that available with a combination of technologies that come from Cisco. Right. AI Defense combined with HyperShield, combined with the Hybrid Mesh Firewall. These capabilities are being baked into that Secure AI Factory so that automatically when the enterprises start to think about deploying AI, they get all of that goodness.
It's almost like we talk about this full stack and it goes from the infrastructure layer all the way to the application layer. What you're talking about, if I may, is it feels like every layer of the stack, you've got a security layer. That's exactly right. It's defense in depth. Right, defense in depth. That's, that's, that's the, that's the way secure enterprises are thinking about security and that's the way we think about security. And the really, the cool part about this is, you know, when you start to build AI that is safe and secure, you're going to see the adoption shoot through the roof. That's super important because we're seeing this massive. I would like to see this massive proliferation in the enterprise. That's right. It feels like we're waiting to see. That's right. Until we can address the security and safety needs. Absolutely.
In fact, I want to quote, you know, it was Sam Altman who was at the Snowflake conference where he was talking about, you know, two things that he said that was really interesting. Right. The first thing he said is, listen, you're going to actually have these small models that are going to become de facto. Yes, there are going to be large models that are doing lots of interesting things, but you're going to see a lot of these small models that are bespoke and built for certain use cases that are going to be the way that happens. We should assume automatically for those models that the context windows are going to be absolutely infinite. Like in a really large context window models are going to show up because you want these to be able to do different things.
Last but not the least, you want these to connect across different data silos, to be able to have the tool use, able to go do different things and bring back a lot of information into the model. Yeah. Like we talk about, even in the agentic AI framework, we talk about agents calling other agents. That's exactly right. Different parts of the company. That's right. Or perhaps even parts of the network. Correct. If you think about what Jensen announced at GTC in Paris as well, was essentially those core concepts distilled out to say, listen, the agentic frameworks are going to become the de facto way that people are trying to build these things. We, as you know, at Cisco, already are there, you know, in terms of using our AI defense.
Our entire security capabilities that we're building with AI Defense is built on top of that framework, that agentic framework. The last thing I want to talk about here, specifically on that topic, is a lot of folks are waiting to jump in. Right. From an enterprise perspective. The common perception that we're starting to communicate to folks is if you're waiting for AI to settle down, settle in, you're going to be waiting for a long, long time. Because the speed with which this is moving, when you think about the exponential changes that are happening, we're the beginning part of that exponent. Absolutely. There's a long journey ahead and it's moving at a rate that's so much faster. People waited for the cloud to settle down, people waited for mobile to settle down.
It happened because guess what, it wasn't moving at the same pace as AI. We recognize, we're telling the enterprises do not wait. We understand you need safety and security. At Cisco, we're going to bake that into every single thing that we do, along with NVIDIA. What you get is a secure AI factory as your default option to be able to start your adoption journey. Let's do this. Let's take 30 seconds. If an organization is looking to adopt an AI factory, what would be your recommendation? Think of it like an MVP for a secure AI factory. What would you recommend at this point in time? Here's the thing. A lot of them are getting their hands dirty with like, hey, I'm going to download a model off of Hugging Face. I'm going to get this going and running.
We're going to make it extremely simple by providing AI Defense baked into their compute stack powered by NVIDIA in a very, very simple way where they can validate the models as they're downloading the models, validate the model and then deploy this at runtime to be able to see exactly how when these models, when the request is coming in, the responses are going back, you can inspect them to make sure that there's nothing malicious in those prompt requests and prompt responses. Last but not the least, we want to provide visibility to all of your AI applications that are running on top of that stack. Fantastic. Folks, you heard it. Thank you, DJ, thank you very much. Marty, pleasure talking to you. Excellent. What an incredible two guys to have on here. Been featured all week. Certainly a lot going on.
I think one of the key messages to take away from what we're hearing here is that there is a sense of confidence that hopefully you're getting from all the announcements Cisco has made. Because any enterprise, it's easy, right? Our default reaction has been over the last decade or two is we got to wait and see, you know, let the fast movers go out, make some mistakes and then we catch up. What we're all seeing and we're all feeling and just trying to figure out how do we deal with it is it. It's all happening fast and you need a good foundation with good partnerships to be able to build on that so that you can execute, not wait, and not potentially be eclipsed by a competitor.
It's an exciting market, it's a risky market, but there's so much goodness to be had and I think Cisco's done a really good job of building that. So thank you to those two gentlemen for bringing that to us. Lauren, I'm going to go ahead and go back to you. Couldn't have said it better myself, Rob. Cisco and NVIDIA, that partnership better together. I love it. I'm going to send it out to Steve. We have so much good content coming to you all. He's with Andy Schultz out there, Steve. Absolutely. Thank you so much, Lauren. I do appreciate it. As you said a moment ago, yes, I am here with Andy Schultz, our VP of Product Management. Right now, Andy, we are talking about global secure connectivity. Specifically, how do we build service provider networks that can power AI connectivity?
As we were talking about earlier, how are we helping service providers to really prepare their networks for AI here at Cisco Live. What's changed? What is different this time versus previous investments in architectures? Yeah, it's been a heck of a week. We've been showing all kinds of different things we've been bringing to the table. We brought some new Silicon One devices that are bringing stability, resiliency, security to the network infrastructure. We have agentic AI that's over the top running autonomous operations for that network. These two things combined drive operational savings, they drive opportunities for CapEx improvements and they drive monetization opportunities for our service providers. Absolutely. By the way, this kicked off at the keynote stage.
I mean G2 was very heavy on leaning into the changes in Silicon, the programmability so that we don't lock people into a single methodology or a single structure, that the reprogrammability. Pretty exciting news on Silicon. Yes. Silicon One has been this evolution for us where we really started from the core of the network where we really established foundation, we've now really extended into the access and the edge. This is where the service is established. This is where you make a smart decision with your customers' traffic and you do something of value for them. Now we're handing that off to AI workloads where they can benefit from AI in their network and their operations. I love that the AI really, it seems like the biggest opportunities for AI right now are out at the edge.
We're talking about that in a different way than we ever talked about before. Right. As RSPs are starting to pursue these autonomous networks, how can we here at Cisco help them to enable multi agentic AI? Allow them to be able to really simplify those network operations and enhance their resilience in this market? Yeah. We've taken this multi agentic framework and the strategy, we've implemented it on top of our Crosswork platform which is basically managing the entire network. All of the element management, we look at the operations, everything that that operator goes through every day and we make those tasks that have been mundane and repetitive and we make them just simple they go through. We leverage all of these agents that are throughout the network to really go after that. Yeah, a lot of savings that we're getting there.
It is so empowering to people. Do me a favor, share, maybe an example, a boots on the ground example of how enterprises, SPs can benefit from the innovations that we're doing here within Cisco, Agile Services, networking, how it allows them to participate in this growing AI ecosystem and really stay in the lead within their markets. Yeah. I think from a service provider's perspective, we think there's an opportunity for them to start delivering AI services directly to their customers or work with these Neo clouds that are coming out using some of their facilities that they have in a lot of cases that are sitting there dormant now. You can start putting those AI workloads, attach it to your service edge. You have a real monetization effort and opportunity there for you. I absolutely love this story. Andy, thank you so much.
It's always a pleasure to talk with you, hear what's new and what's going on. Congratulations on an amazing show. Really great job. Lauren, let's send it back to the studio. Thanks so much, Steve and Andy, passing it over to you. Rob. Thank you so much, Lauren. I've got a guest here with me that I was kind of excited to be able to interview, Tom Gillis, SVP and GM for Security and Infrastructure. I was just telling you how I haven't had a chance to really listen to you because I've literally studied when you started talking HyperShield and some of the other things. You have been around Cisco at various stages for a very long time. Today feels momentous. This particular week, can you tell us this idea of fusing security into the network?
When I think about that devices, I feel like we've been doing some of that or that's a story we've had before, but I bet there's differences. Yeah, you're absolutely right. It's not a new idea. It's something that Cisco's been talking about for a long time because it's the right answer right now. I don't mean all security goes into the network, but there's certain security functions, like let's talk about a firewall that can be so much more effective if they're tightly integrated to the network. Now the careful listener will say, oh yeah, Cisco's done that. And you might even recognize me like, oh, Tom, didn't you actually work on that? There was a younger version of you. I believe that it younger, smarter, ISR security, better looking version of me that worked on infusing security into the network, into the ISR.
The thing we did that was the most interesting is we put it into a Catalyst switch. If you remember, we had a very, very clever name for it. It was called FWSM, just rolls off the tongue. FWSM Firewall Services Module. It was a dedicated security processor in a data center switch. Customers love it. Now, more than a decade later, we have reimagined that whole concept where we can put security into the fabric of the network with HyperShield. In itty bitty little tiny enforcement points. For customers at scale, you could have a million tiny little baby enforcement points. Now, are they baby enforcement points because it is the level of granularity in which they are acting upon? Yes, because you are not stopping a whole service necessarily, you are stopping a very specific thing from Apple.
That's the key, is that security needs to have context to what we're protecting. If we know, hey, I'm sitting under a Postgres database, we can put policies in place that are specific to that Postgres database. We can shield vulnerabilities that we know to exist. We can look at access and flows that should never be occurring and protect that database very specifically. This would not have been possible without AI. Right. You can't manage a million firewalls the way you manage traditional firewalls. AI is the kind of breakthrough capability that allows us to put security into the fabric of the network in a way that is highly autonomous, easy to deploy, and far more effective than it ever used to be. Yeah, and this is some of the functionality where we then come back up.
There is one of the points I loved, but this notion of kind of extending that patching window. Of course, the fear is, you tell me that this is solving that problem. Well, do I still need to patch? It's funny, I just came from a group of customers talking about exactly this topic. We have the ability to deploy compensating controls on our customers' applications. An enterprise application, we can shield those vulnerabilities. It does not eliminate the need to patch. It is a finger in the dike. It's a very automated and kind of temporary control so that you can patch in orderly fashion. When we introduced this concept, you remember, we started talking about this about two years ago. When I would show this to customers, they're like, ooh, that's amazing.
Hey, Cisco, you know what, patching your switches and your routers and your firewalls is hard. Oh, that hurts. Yeah. These are inline devices, these are high performance devices. They're hard to upgrade. Right. You have to use a change control window. You're doing it over, you know, Fourth of July or Christmas. We are taking the same technical building blocks and we've implemented it into our own switches. We announced a capability called Cisco Live Protect, which allows us, if we know we have a vulnerability in the Nexus operating system, NX-OS, we will create a compensating control and we will push it into the Nexus Dashboard and an administrator can then say I'm going to apply this shield to the switch without rebooting the switch. You had to build towards that. Right, because we were monolithic of course in previous versions of this software.
It's been well before we even knew what gen AI was and it became such a common term. You guys had to be working on some capability here because you could see that the deconstructive, the ability to break these things out and address them independently without having knock on effects to their neighbors. This is kind of a general question because I'm sure you have been very much in the middle of this, but all the announcements this week with so many product lines, 19 new switches in industrial side and the new routers and then new namings and management platforms and I like the fact that even though we've gone through change like that before, this actually feels a bit more consolidated in a way that makes more functional sense than I've seen in the past.
I will argue that what Cisco has been talking about and we're executing on is delivering a platform. What does a platform mean? We just talked in the earlier segment about some of the unique capabilities that Cisco's developing around AI Defense. How do we protect AI-based applications? We're taking that AI Defense and we're putting it into this distributed firewall called HyperShield. We're taking that distributed firewall and we're putting it into a data center switch or a campus switch with the Catalyst smart switch forms based on location, and then we're taking the network with those network services and we're bringing it right to the doorstep of the GPU for our AI-ready data center. That's a platform: advanced security integrated in the fabric of the networks and scaled to AI throughput.
All working across multiple different heterogeneous systems but with one integrated approach, one integrated management console. It's pretty transformative. I'm just impressed because I'm like how have you guys pulled this off? Because it just feels like much more has been of significant announcement has been made. You know, thinking about a. Yeah, Apple had, you know, the building blocks made up for all those announcements. The building blocks that we have to do this are much better than they were a year ago. And those building blocks are advanced silicon. So Silicon One, which is Cisco switching ASIC allows us, it's the engine that can do a lot of this magic. We work with our silicon partners, AMD, NVIDIA and Intel and we can put these advanced processors into those smart switches and then the AI management to make this stuff actually usable at scale.
A lot of that stuff did not exist two years ago, but now it does. Tom, thank you so much. I hate to cut you off, but we are going to have to keep moving on the show. I could talk to you forever, so do not run away. It is always a pleasure. Awesome. Thank you, Tom, for that. Lauren, I will go back to you. Thank you, Rob. Thank you, Tom. Now we are going to see a video by our very own Michelle. Walk us through AI readiness area in the world of solutions. Hey folks, it is Michelle. Welcome to the AI ready data centers. This is an awesome exhibit and I am excited for you folks to come check it out. There is one thing that I want to leave you with and it is that Cisco will help transform data centers to be able to work on any AI workload and environment.
All right, I want to talk through some of the customer challenges that we hear in the market every day. One intensifying threat landscape. Year after year there are more threats that organizations have to manage. It is a complex landscape out there. Where does Cisco come in to help? We provide a flexible infrastructure. One thing that I'm personally very excited to talk about is the platform advantage. I myself am a product marketing manager for Splunk. I love talking about our platform advantage and what we offer to customers to make them feel safe and secure every day. We have expertise empowering AI. We have security infused into the data center fabric and therefore we are built for performance and efficiency. That leads me to outcomes. Our customers have a more strengthened digital resiliency. They also have security consistently everywhere.
Alright, one last thing to leave you with. Come check this out. Build AI ready data centers. Learn what to do, how to start here. There's demos, there's solutions. There is a lot to learn. The last piece of advice is that learn how critical Cisco infrastructure mixed with our partner solutions make you ready to win. Come check it out. I'm excited too as well. I always love hearing Michelle. Everything she said was accurate as we know, right? I feel like we keep talking security, but I always like to put it big picture and talk about the why because there's a reason, right? Everyone out there, you all know why we're doing this. I just want to level set because security is always top of mind for me. We talk about this cyber threat landscape, but what does that mean? That's expanding attack surface.
Digging deeper into that means that there's more areas and entry points for these attackers to get in. That doesn't make things easy. It also leads to more alerts that are coming. Alert fatigue is the number one challenge that our analysts are facing in the SoC. Right. You also have all these different tools. Everybody has a certain tool for every little piece of the puzzle. What that means is that it's harder and harder for you to really be able to understand what happened, when it happened, and ultimately how to solve it, how to fix it. Okay. That's why the power and what we keep branding as the SoC of the future is so important. You need a platform to make all of those challenges simple go away. Right. You need one single interface to unify your threat detection and your response.
It is all about being proactive. Too often we're chasing and we're trying to do things quicker, but why are we trying to do it quicker? Because we realize that we are reacting. How do you get in front of these issues? You need to be able to rely on a single kind of interface that has all these features. When you hear us say sock of the future, it's bigger than just a single tool. It is multiple features that we know will help you be able to protect yourselves against what's happening and what we keep talking about, this expanding threat landscape. Okay. I don't want to geek out too much. I'm going to have my time to talk more, so flunk in a minute here. We are absolutely adding all these features for you and making them native inside of our platform.
Again, I'm going to tee myself up for a little bit later. You'll hear me talk a lot. I'm going to say nerd out and geek out with my friend Michelle here. Before that, let me just kind of give you a glimpse into what we have coming up next. One of those things is powering the SOC of the future. You will hear the deeper messaging about what I was just talking about. You'll hear me talk more about it later. We're going to have a session on that. You're also going to be able to see some more videos about what we can do, what you, what you can do as a customer, what you can do as a partner to better protect against the threats out there. Now I am going to make sure that you're all wrapped up.
We're going to be able to throw this to the center stage for that power to socket. A few future coming up. Can't wait. We're going to have Tony Pitera of Splunk, giving you the message let's go. Are essential to staying ahead of evolving threats. In this session, we're going to explore how Splunk security analytics platform combined with powerful Cisco integrations, empowers modern SOCs. And we'll also hear how a leading organization leverages these tools to enhance threat detection and response. To walk us through, first up, let's welcome Tony Pitera. Thank you very much. As many of you are probably already familiar with, we have some tools for you to both listen in the audience as well as remotely. I'm assuming most, most of you have seen these instructions once or twice or maybe a few times during the session.
We're going to skip right through them. A few things just to go through kind of order of operations here and what we'll be talking about. We're going to spend a lot of time on the value customers are getting from Splunk Enterprise Security, as well as the integrations that we've built and released with the broader Cisco portfolio. Those help target some key challenges that are common in inside of enterprise security teams. I think everybody out there has probably seen a slide like this, probably from more than a few vendors. We're going to cruise right on through that and talk about where we believe the combined power of Cisco and Splunk help deliver a better outcome for all of your operational teams. The way we do that is with the following portfolio of products. First and foremost, we have the core Splunk platform.
We believe security is a data problem, and so we bring in data from a number of disparate sources, very strong analytics, kind of infinite scale of the underlying platform, if you will. The way we deliver those outcomes for security teams is through a combination of a couple of products across the Cisco portfolio as well as the Splunk portfolio. Cisco about 18 months ago introduced an XDR that's got a lot of really cool new groundbreaking functionality in it and that combines very tightly with Splunk Enterprise Security as well as Splunk's security orchestration, automation and response platforms, or SOAR for short. Splunk has been a 10 time leader in the Gartner magic quadrant, so I hope everybody is aware of that. We have a pretty good point of view when it comes to delivering outcomes for security analysts.
This is brought together under what we refer to as Cisco Security Cloud. We have a few things that we'll mention on that front of how all of that rich telemetry from Cisco's portfolio makes its way into the Splunk data platform. We also have a very healthy third party ecosystem. One of the things you can expect from Splunk and the combined portfolio of Cisco is we're going to be friends with Palo Alto. That's over. Okay? Whether that is supporting a firewall policy in Cisco Security Cloud or ingesting alerts from folks like Palo, CrowdStrike, et cetera, that all flows into the Splunk data platform. We're also going to be discussing why you should expect a one plus one equals three from the combined power of Cisco and Splunk that spans across endpoints, data centers, identities, applications, the whole nine yards.
Now, one of the bits of the Splunk data platform that has been introduced over the last, we'll call it year or two, is this notion of strong data management combined with data federation. Now, data management is an area where, you know, not all data is relevant for security teams doing their job day in and day out, unless it becomes important. For Splunk, what we're trying to give customers is the ability to kind of dial that in and say, maybe it's relevant for a detection, maybe it's something that it should have fired in the last five minutes. There is also a use case for throwing things in a proverbial shoebox just in case you're dealing with an active incident and you want to have access to that data. Those are not on the same value curve.
If it is a detection that is helping your analysts work day in and day out, that's inherently more valuable than logs you're kind of throwing in a shoebox just to make sure you're not missing them. We leverage a combined capabilities of data management, which helps you shrink and kind of tweak the logs to your use cases with data federation, which is the shoebox concept as I refer to it, to give you access to that data just in case. It doesn't have to be natively stored inside of Splunk. We present that to the analysts in what we refer to as our kind of unified SOC experience or integrated threat detection investigation response platform. In October of last year, Splunk introduced Enterprise Security 8.0, which included kind of a modern analyst queue, if you will.
It prioritized the alerts that matter the most to the security teams and it also gave you a directly integrated automation and response platform to really make those analysts more successful in their jobs. This is kind of the single pane of glass problem where you have too many, too many disparate tools. This was our way of doing that as a directly integrated set of technologies. Now when we start to look at how that works better with the overall Cisco estate, there is a few different areas there where we think about it from, you know, how do you get the data, what are the detections that you offer on the data? How do you simplify that investigation for a set of analysts that are going through, going through their normal workflows and then what are the response actions that can be accelerated with built in automation?
For Cisco specifically, there is a Cisco Security cloud application which is available on Splunk Base. It basically is a way of aggregating all of the ingest mechanisms and just simplifying some of those workflows for getting data in. I'm going to talk about what that means in a second. On the detection side, we introduced about 25 detections which take some unique advantage of the capabilities from Firepower Threat Defense, the integrated threat capabilities there. They had those, we now have those as native detections inside of Splunk Enterprise Security, which pops those up into alerts for the analysts to work through. Now some of those types of alerts are going to be suspicious, but maybe not something that should impact your mean time to detect, if you will.
That is an area Splunk has a capability called risk based alerting, which says, I want to build up, you know, some signals of weirdness going on over here, but I do not necessarily want to say, hey, you know, you have to respond and deal with this within five minutes. It is just kind of weird. We are not sure what it is. It is not a false positive, it is not a confirmed true positive. Inside of the technology stack for enterprise security, there is a capability known as risk based alerting, which some of those areas, like anomaly detection, are a very natural funnel to help the analysts work more effectively. This gives you a prioritized risk view as well as shrinking that alert volume into things that matter. That has been another area.
We've done a fair bit of investment with the Firepower Threat Defense team in particular, as well as the broader Talos team. Talos is also baked into Enterprise Security SOAR and Attack Analyzer from the Splunk portfolio. This is provided free of charge with a license with those products where you have a native threat intelligence feed that comes with the purchase of Splunk as well as support for third parties if you have a preference on that side. It was a natural area where we thought there would be more value to security customers there. Lastly, we've integrated a number of response capabilities across the Cisco portfolio into the Splunk portfolio. One of the big additions we've done recently is a heavily upgraded SOAR connector for secure firewall.
When you're going through an alert and you're trying to figure out what's going on and you want to accelerate, you know, block that thing, there's a lot more richness and a lot more capability to do there. There's another scenario which is kind of neat with Webex. When you're dealing in a war room scenario and you have an active incident and folks are jumping in, they're usually jumping in partway through the conversation, they're usually jumping in trying to catch up on what was happening. Another area that has been integrated inside of the SOAR product is the ability to leverage the AI capabilities from Webex to give you a summary of, hey, what happened in the last hour while I wasn't in this war room.
Alternatively, when you actually have completed the incident and you have to document what happened and what steps were taken, you now have an AI summary that Webex will send, can give out. SOAR can pull that in by an API and you can throw that right in the case notes as a way of taking some of the less fun parts of the day out of the analyst's job. When we went through this with one of our design partners, I believe the exact comment was, maybe I've been at this too long. Maybe I'm a little bit grouchy, maybe I don't believe in a lot of the hype, but thank you for just taking some of the, of the crap out of the day was kind of the analyst feedback.
We were pretty happy with getting that one out into the market. In terms of the data side of this, I mentioned Cisco Security Cloud and some of the ways of getting data into Splunk. This is an area where we've had an initiative in Splunk for a while to do what we refer to as gold standard tas. This simplifies getting data into Splunk and takes some of the headache off of customers. You shouldn't have to map data manually to the common information model. It's just kind of included. There's some basic troubleshooting for the health of those connections and telemetry coming into the product, and it's just a simple point and click and install. We're taking some of that offload off of the customers. We've done this for a lot of different technology partners.
It just made a lot of sense inside of the Cisco portfolio to pull together many of those into a single app to simplify the work and kind of getting data into the platform overall. This is for the following products here. I'm not going to try to read all of them to you, but you get an idea that we're expanding that capability. It makes it pretty straightforward to get the data into Splunk and to get your security teams, as well as broader observability teams rolling. You might have heard once or twice that we are also capitalizing on the better together by having a commercial incentive for Firepower Threat Defense. This was another natural expansion of the portfolio where if you have an active Firepower Threat Defense license, we are giving you the ability to have additional capacity in Splunk because of that.
This is a thing that is going to be rolling out here in August. We have some things that we need to work out on the ordering systems. We have to work through the sales teams. Net net, if you have an active Firepower Threat Defense license, you know, we wanted to make sure that there was a commercial incentive to say, hey, we've got detections, we've got automation and response all built in. Very. Another natural thing for us to do between the combined portfolios with that. One of the things I love, love, love, love about Splunk is that customers get a ton of value out of the product and they're also willing to share what that value is.
To that end, I would love to invite Keith Kaimig up to the stage to talk about that because you want to hear less from me and more from him. Come on up, Keith. Hey, Tony. What's going on? Nothing much. Thanks for having me. Oh, thank you for coming. Keith, maybe just do a little bit of an introduction on your role at Regeneron, what your team's responsibility is, and then would love to talk a little bit about some of the ways you're using some of this technology in your environment. Yeah, I work at Regeneron, Regeneron Pharmaceuticals. I'm the Splunk advocate. I maintain the platform, my team and my team also maintains the threat detections within Splunk. We have a lot of security use cases at Regeneron, but we also use Splunk for our business as well. Awesome. Thank you.
I think we spent a fair bit of time together and it would be helpful. We have the marketing slots of too many alerts. How does the SOC work? Efficiently, you know, what is, what does that mean in the Regeneron environment when you're dealing with that flood of alerts? What is the value Splunk's providing to you there? Yeah. From an enterprise security standpoint, we've been rolling out risk based alerting. We used to have one, one event, create one alert. Right. With risk based alerting, we get things like a timeline of multiple detections and we're, we're grouping things based on assets and identities and that allows us to say, hey, here's the broader view. Makes sense. And just kind of what is some of the impact that's had on the overall alert volume? Like I said, every vendor's got the too many alerts, not enough time.
What is impact implementing risk based alerting done for the Regeneron analysts? Yeah, so I would say we've cut our alert volume in a third by implementing risk based alerting and also integrating our third party tools that tend to alert a lot or duplicate alert between our third party and Splunk. Got it. Awesome. That is very helpful to understand, I guess, as you think about, you know, the explosion of data and all of the data coming into Splunk, you know, how do you think about data management and what are some of the things that Regeneron is doing in that regard? I love this question. And we have, yeah, that's a great picture up there. We have roughly 12 terabytes of data that we're playing with that potentially could be going into our Splunk indexes.
We use Splunk's ingest processor as well as filtering earlier on in our syslog collectors and other in our universal forwarders. We filter the data, we minimize the data, we transform the data by converting JSON to CSV. Right. So now your field name equals field value. We drop in the field name and it's just a column header. That allows us to cut our volume at the start of a JSON feed. It's 40% reductions. Right. We go in and we say this field we do not even need and we pull that out and we are seeing feeds that we reduce, reduce 80-90%. I think that's an important bit where the history of Splunk was throw all the data in there.
It takes a little bit for us to, you know, communicate like, you do not have to put all the data into Splunk. We can shrink these things down. We can get you better outcomes. Since you are pretty far along that curve, what are some of the outcomes you are getting now? Like, if you, if you shrink your overall data footprint, what are some of the fun things you get to do instead? Yeah. First thing I would say to you guys is, do not think, hey, I am going to reduce my license. Not going to happen. Right. The sales guys love that. Right? You are not going to reduce your license. You are going to pull in more data. Right. You are going to cut down your data on your firewalls, those big, heavy feeds, your win event logs. Big, heavy feeds, right.
You're going to cut down the volume on that and be able to take all of these other pieces that you're missing today and bring it into Splunk. Where we're seeing advantages is our search times. Right. Our searches that we're running a minute are now cut in half because we have less raw data to go through. Makes sense. Makes sense. If you're effectively expanding your visibility into other areas, you know, how do you, how do you prioritize where to stick those detections? How do you think about what that means? Just from a, from a mission statement for you and the team. Yeah. I'm going to go back to the last slide for a second. My engineers are talking at Splunk Conf. If you guys aren't going to Splunk Conf, they're going to tell you how to do this and do come watch our.
The presentation there. To get to your question, sorry I missed the plug there. The Killers was a great concert last night from what I heard. Yeah. Yeah, it was. We got up close. How we manage our detections is really different than I've done at any company that I've worked at previously and I've seen at other companies. We're taking Mitre ATT&CK. Everybody has it, right. All the vendors have Mitre ATT&CK. You could see at the top, it's our Splunk dashboard based on Mitre ATT&CK columns. Right. The techniques and tactics. And we'll say whether we have data and the rules.
If we have both, they're both green. If we're missing data, it's gray. Right. If we're missing detections, it's red or yellow. Right. If we only got some of the detections and we know where to put our efforts in, where can we bring this red to yellow or this gray in terms of data ingest into a yellow? The strategy of having both on a single, single view. I've talked to Tony. I'm like, please take it, put it in the product. Right. If Splunk doesn't decide to, we have another Splunk conference talk on this and we're giving it away for free. Steal it from us. There you go. On that front we will be doing a lot more on this. Stay tuned. Dot Conf is in September in Boston.
We're going to be sharing a lot more on some of the detection and detection engineering upgrades. We're sticking into the product portfolio overall. Maybe, maybe just, you know, you're doing a lot on getting value out of the system, scaling it, making sure it's operational. Right. You know, shrinking the data footprints. What are some of the areas you're looking to, you know, expand the envelope in and looking to do different in terms of, I think we spoke about UEBA a little bit as well. Yeah. We're one of the early adopters of the new UEBA product and it's integrating really well with enterprise security. We're using what we used to build in terms of anomaly detections. We're now using UEBA to provide those anomaly detections for us. Anomaly detection is very costly, both in the search and also in the long term storage of it.
Hard to do. Right. Let UEBA go do our anomaly detections for us. Hey, this person's logging into a machine that his group has never logged into before is one example of anomaly detection that UEBA provides. Totally. And this is actually an area where Splunk customers have helped really inform our roadmap overall. There's a number of customers that had built their own insider programs leveraging the raw capabilities of Splunk. And we've worked with Keith and other folks to say, wait a second, this is a natural extra extension where UEBA can really help integrate those workflows into the overall SIEM and security analytics platform as a whole.
It's one of the things that I love about the Splunk community, how passionate and invested they are in making that product better because it helps other folks inside of the community, it helps other incident responders do their day jobs better. That is one area where we are actually running ahead of time, which is maybe the first time that's going to happen this week. Maybe to close it out here a little bit, session evaluations here are available. If there is any bad feedback, send it this way. If there's any great feedback, send it this way. I thank all of you for coming and joining us and we'll be happy to hang around here. Happy to take any questions one on one as folks have. Thank you so much for your time. Yeah, thanks.
Supervisors can assist team members whenever they need support. Enable your local and regional employees to deliver seamless customer assistance. Empowered with AI in the collaboration app they already use, Webex Calling Customer Assist. Cisco and SAP have agreed on ambitious milestones for the migration of our more than 250 offices globally. Right now we are planning on having upwards of 350,000 devices on the network. Including IoT devices, we have more than 100,000 employees. They need best-in-class IT infrastructure to work wherever, whenever, and however they choose with the devices they prefer. With Cisco, we've made our network simpler and more secure. Cisco SDA also enabled us to improve our Zero Trust access policy, ensuring only trusted devices can access corporate resources. With Cisco Spaces we could leverage this foundation and establish a global platform for smart offices.
A first experience comes, confirms our hopes and expectations that we can manage everything in one place together with a professional partner like Cisco. Tribeca has always been curious about technology. I feel that so much of Webex and Cisco specifically have done for us is help us lower the barrier of entry so that all artists can participate and be involved in the festival. We're using Webex Events in order to make the experience of Denirocon much more interesting, immersive, and connected. For our audience, a frictionless experience was key. We don't want our fans and customers worrying about how to find things or where to find things. All we want them to worry about is how do they have the best possible experience.
We want to continue to tap into the broader suite of capabilities that Webex has, including a lot of the AI capabilities that they continue to build out. It's important to work with companies that have shared values because it's a partnership. It helps us amplify our voice and to a certain extent it's a shared mission of Cisco to be able to amplify diverse storytelling, unique, to have global partners if you're a global festival. We are truly honored and proud to be partnered with Cisco. Meet AI Receptionist for Webex Calling, your always-on virtual receptionist that can answer questions, book appointments, and transfer calls so your business can boost customer satisfaction without adding staff. Thanks for calling Florissa Dendel. I'm your AI Receptionist. How can I help you today? Hi, yes, I have a toothache and I'd like to schedule an appointment.
First I want to make sure that you accept U.S. insurance. Yes, we do. U.S. insurance is covered 100%. Would you like to speak to an insurance specialist or are you ready to schedule an appointment? I'd like to go ahead and schedule an appointment. Thank you. Great. I'll transfer you to a nurse who will give you an initial consultation and help to schedule your appointment. Thank you. Deliver outstanding customer service. AI Receptionist available later this year as an add-on to Webex Calling. Take your business into the new era of customer experience with Webex AI Agent, which answers questions and fulfills customer needs, all while delivering personalized conversational experiences. With Webex AI Agent, you can engage your customers through the channels they prefer, using natural language and human inflections. Sounds like a special occasion.
Okay, based on your description and our stock, I would recommend daffodils. Oh, and I want a dozen, please. Understood. A dozen coming right up. Okay, I think we have what you're asking for. I've sent it directly to your phone. Even if your customer changes their mind. Middle conversation. I've got it. Miami. That's the destination. Of course. I found several flights that fit your preference. With reliable guardrails in place, the conversation stays within the AI agent's capabilities. Could you also book me a taxi by chance for when I land? I'm sorry, unfortunately that is outside of my scope. Anything else I can help you with? You can be sure that your customers needs will be met, their problems resolved, and that all important transaction completed. Let's get that payment arranged. I've sent a text message with a confirmation. Have a great day.
Welcome to the new era of customer experience with Webex AI Agent. Thank you for calling. You are caller number 25. Your estimated wait time is 37 minutes. Hi, I'm Blossom, an AI agent. Hey, how are you? For your rush delivery, I can help you find the flower you were looking for. We have. Yeah, it's my grandma's birthday and I need her favorite flower, but I have no idea what it's called. It's bright yellow with like bell-shaped center, very thin stem. Sounds like a special occasion. Okay, based on your description and our stock, I would recommend daffodils. Oh, and I want a dozen, please. Understood. A dozen coming right up. Okay, I think we have what you're asking for. I've sent it directly to your phone. Yes, the daffodils are perfect. Can I order a bottle of champagne as well? That sounds delightful.
Unfortunately, we do not offer alcohol purchases online, only in store. However, we do have artisan vases. Would you like to add one to your order? That sounds good. You got it. If you are ready to complete your purchase, I can text you the payment link. Yes, please. Let us get that payment arranged. I have sent a text message with a confirmation. Have a great day. Hey, Elizabeth. Still there? Okay, change everything. No problem. Where are we going now? The perfect getaway for my sister and I. Skiing, wine tasting, spa. Sure. Sounds like a great time. I will get right on that. I found a number of vacation packages available in your time frame. What day would you like to wait? I have got it. Miami. That is the destination. Of course. I found several flights that fit your preference. Window. Eighth row, left aisle. You are amazing. Thank you.
I try to be my. Actually. What about Greece? Introducing Webex AI Agent. What's my account balance? Is this covered by my employee benefit? I'd like to exchange this for a different size and color and style. Where hold times end and self service begins. Four years ago, we wanted to change two things. We wanted to change the way we worked and we wanted to change the sport of golf. All right, guys. Hey, welcome back though. We've heard a lot about Splunk and certainly that was a deep dive into the importance of security operations. Just going through my notes here. On the security analytics side, the Splunk integration continues to add so much value for everything that Cisco has come out with this week. What I'm let's say, irritated with.
You see me working from notes here quite a bit because there is so much to cover that I feel like how are we going to get some of these details out? I thank you for sticking with us. I can encourage you to take notes and then use those notes to expand on that and maybe tell me where I continue to get things wrong. I very much enjoyed continue to meet new friends from Splunk and in fact, one of our own personal friends from Splunk, Michelle, is out on the show floor. I do not think she is talking about that now, but who knows? I know we are going to talk about that in a little bit. Michelle, are you receiving me there on the show floor? Rob, of course I am receiving you on the show floor.
I will be getting to Splunk a little bit later. For now, I do want to double click into my latest video on AI ready data centers and things that are going on every day here at Cisco Live on the show floor. Really what I want to start with is some key messaging that has really settled with me throughout the past few days and it's visually represented on this floor. Julie, behind me you'll see that there are three large areas here. The first one visually captures the Cisco difference. AI ready data centers over here, there's tons and tons of demos, presentations and solutions for you to check out. Right now, behind me, behind this really cool Cisco logo, is the future proof workplace.
This is another great representation of what your branch and campus can look like partnering with Cisco in the future. Last but not least, my favorite is Digital Resilience. Oh, it's behind. It's behind this. If we take a walk, Lauren and I will be there in a couple minutes to talk all things Splunk and why Digital Resilience really finishes that swing in the Splunk and Cisco portfolio. All right, what I want to talk about now is that G2's message on the day one keynote, he said something that really stood out to me. He said that we are, the world right now is in the largest data center expansion in history. There is a ton of data being processed and someone needs to build it. Cisco is that key to be that partner with you.
To do so, we need to be part prepared for what is next. Cisco, as well as all of our customers, our partners, the world in general. It's agentic AI and it's quantum computing and networking. We're here for you to make that difference. I'm going to leave you with what Chuck said and what has really solidified for me. Cisco, we're the only company in the whole world that combines the power of networking and security seamlessly together. That is what really has stood out to me this week and I hope it has for you. Rob, I'm going to hand it back to you, but come back quick. We've got more to talk about. Oh yeah, no, we're going to come back. I'm looking forward to some Splunk girl power with you coming up. So we'll tease that here in a moment.
You know you're talking about what Chuck said. We're quoting from G2 because of the fact that we're all trying to grapple with how much is changing, how fast it is, what's our place in it, both personally as well as organizationally. There's a quote, I don't know to credit this to G2 or Chuck, but it was this notion of this era of authenticity that we find ourselves in. The reason why I like that is because I think authenticity is going to be one of the most interesting things to try and keep hold of as we move forward. How do we remain human, interconnected, good to each other, and figure out how we're going to navigate this in the most humane way possible.
When we talk about making this work in our business workflows, we need the confidence in a foundation that's going to give us the ability to then play at the edges. As all these changes are happening, you need to know, well, who can I rely on? That is, even though we must acknowledge that we've got a lot to do going forward, I still need to have some partners like a Cisco, ideally, and the people that Cisco continues to work with in this entire ecosystem. What's represented here is the ability to say, be confident. We're figuring this out with you. Meanwhile we're going to give you incredible, incredibly increased visibility. That's one of the things I love about Splunk because they're taking two things that to me go really well together. Increased visibility at a very, very granular level.
You combine that with the ability to actually execute on things that you can confidently say are bad or need to be addressed even in a gray area. As they talked about in that previous Center Stage session. How do we flag those things so we know okay, it's not necessarily bad? I don't want to stop it. That may break something, but it's going to require more investigation. Your system has the ability then to adapt and adjust. This is truly using AI to combat the new problems and the velocity that we're dealing with when it comes to AI. Tons of storylines there. I've got a completely updated to do list of stuff I need to research more before I can speak super confidently on it. I may read you some of my list when we come back.
I do want to check in. There was a video conversation earlier hosted by our very own Aruna Ravishandran and I want to go ahead and throw to that video right now. Aruna, I am so excited to be having this conversation with you. What a week. This has been a week packed with so many exciting announcements. G2 laid it out all on Main Stage. What are you most excited about? Where can I start? Chris? This has been an action packed week of innovations after one another. Let me dive into future proof workspaces, which is the area which I'm responsible for unbelievable innovations. When you think about future proof, let's start with the number one, which is really close and near to my heart on agentic ops, it's all about simplified operations with AI. That was one big announcement.
Second was, you cannot believe the plethora of devices, scalable hardware, purpose built for AI which we launched and brought to market. Should I start with the smart switches or the secure routers to the campus gateway to 19 different form factors for industrial network devices. The list continues to go on. Last but not the least, security fused into the network in everything we do. When you think about the collaboration part, we launched multiple different devices to market and my favorite is the PTZ camera where we have now completed the entire room across the board. There are so many different innovations with respect to the AI assistant now available with our Webex suite as well as with Contact center, which is brought on premise. The list goes on.
I think we can mic drop now, but I was in the, I mean, being in part that keynote stage and watching all this, the, I mean, the energy was palpable. Our customers were loving this. You got a chance to preview all of this at our recent Networking Customer Advisory Board. What was that reaction? What was that moment when they got to see it for the first time before even everyone else? We did the Networking Customer Advisory Board actually about four to five weeks before Cisco Live. This was an opportunity to preview with a small set of customers to basically get their feedback. They played a tremendous role in terms of being able to further help shape our story.
The part which really landed with our customer, in addition to the amazing scalable devices which we have actually brought and launched to market, is all about agentic ops, right? What the customers care about is, you know, how can we actually make their life and their jobs easier? Especially when you think about netops, they spend so much, so many hours troubleshooting, you know, ticket after ticket across the board. It is not just about keeping the lights on, it is about making their life easier. That is what we launched. Being able to simplify operations for all of network operations and how you can do that with their oversight was the number one thing which landed with our customers. It is so awesome to see that it landed so well with all of our 20,000 customers who actually attended Cisco Live.
I mean, let's just say G2 and DJ Sampath did a demonstration and it was live code. That's so exciting. And you know, as you're watching that live demo, those answers, tell me, let's dig a little deeper into agentic ops. Tell me a little bit more about what resonates truly with you. So you know, I think agentic ops is a paradigm shift, right? Everybody in the world, including all of our competition, is talking about AIOps. AIOps is of the past. We are the first, Cisco is the first to market in order to talk about agentic ops in a true way. And what we demonstrated is true agentic ops. So why is it agentic ops? Because it is agent first communication. It's the ability for one agent to talk to another agent.
The other beautiful part with AI Canvas, which we launched and brought to market, is for the very first time you have the ability to bring data across multiple different domains from Catalyst Center, from Meraki Dashboard, from ThousandEyes, everything centralized across multiple different domains, the data coming together. The other beautiful part of that is that AI continues to work for you. That is our message you would have seen here at Cisco Live, which means that let's say there is an incident and a ticket which is logged in through ServiceNow. The AI continues to work in the background even before you launch and it has the ability to correlate the data across multiple different domains. When you launch Agentic Ops, which is AI Canvas, NetOps now has the ability to troubleshoot very, very quickly and pinpoint where the problem is.
Here is the best part. It is built on the foundation of our deep network model, which comes with 40 years of learning expertise with that rich data coming from multiple different domains. Here is the beautiful part: the deep network model, which is one of the industry first and probably one of the most advanced networking LLM models in the industry, is basically built based upon our deep expertise in the networking domain. Seeing all of this come together and how we are actually going to simplify operations for netops is going to be game changing, and nobody has the ability to do that, unlike Cisco. Basically what you're saying is 40 years never looked this good. Definitely did not.
Because if you look, we actually did a comparison with other ChatGPT LLM models, including ChatGPT, and our efficacy was 20% better than every other models out there. I mean up to 20% more precise. That's, I mean that's the kind of precision you need when you're letting AI take the helm. I think that's incredible. The thing is, while it has the precision, you know, it always does it with operational oversight. Right? Yeah. We don't go rogue. The agents basically take instruction from netOps teams as well as secOps team. That is the ultimate goal. They are the experts. What we're doing is to basically help them come up with this whole agent take ops.
The goal is to basically reduce the mean time to repair, mean time to resolution, for it to go from multiple different days to seconds, not to minutes, but execute on that based upon netops oversight. I love that execution with human oversight. I know we only have a few seconds left. Before we go, tell me, I've heard a lot about smart switches. Tell me, I mean, tell me, why are the switches, yeah, yeah. I mean, what makes these smart switches, which we launched and brought to market, have dual processors, one which basically runs the compute workload and the other one which runs the security workload. This is why we say security fused into the network. It is PQC ready, HyperShield ready. It's a game changer for us. That really is a game changer. Thank you for the time, Aruna. Thank you for having me.
All right. That was really fun because here for me, whenever I see Aruna's name pop up, in my experience that means we're going to talk about future of work as she started with there, but then she quickly got into agentic Ops and once again, that's right back to my to do list here where I'm like, okay, I need to understand this better. She's saying AIOps is being eclipsed now by this agentic Ops and she's brought receipts to prove it. Right. This whole week has been the receipts to show what we can do here. When they talk about, and this is not what at least I've heard because my ability to focus on any one thing this week has been challenging, but this deep network model in a, in a very manual and analog way I think of it is.
I'm just going to tell a quick story. When I was looking at joining Cisco because some salesperson had recommended that this would be a great place and I was doing sales in somewhat of a technical capacity, but it was all new to me. I was actually very new to the industry. This is the late 1990s, very late 1999 into 2000, right before the big bubble burst. One of the things I did when I first started realizing I was going to get some traction, but I had multiple interviews to go through, I was like, I should figure out what Cisco does.
One of the things that was great about Cisco in those early days is they were the very first ones to really promote putting more knowledge out and sharing it on the net, because it's hard to imagine, but there was a time when how we share information wasn't as simple and accessible as it is now. What I hear happening is instead of, of course, CVDs and things, Cisco Validated Designs and such, which I've depended on completely, continue to add more and more value so that you can have trust in what you're doing with Cisco and the partners. The things have been tested in a lab. Those kind of things are now being implemented, of course, in the deep network model.
I may talk about that a little bit more here in a little bit because I love where this is going, but I also love learning more about Splunk and two ladies that have been on our team killing it all week. I want to throw it out to our hosts, Michelle and Lauren, for a little Splunk girl power, as I understand it. Guys, you good with that? They're only giving us five minutes to talk about this, but, you know, we could talk for much longer. This is Splunk and we are the girls that are excited to talk about it. That's why the name of the segment. I'm going to start with Digital Resiliency. As a reminder, I'm a marketing manager for the core Splunk platform, and Digital Resiliency is basically what I get to do and preach every day.
The big things that I want to talk about is an example like why does this matter, really? What is Digital Resiliency? If you are a company and you have an outage, something goes down, there could be a number of reasons why that happens. It could take a really long time, hours, even, maybe sometimes days, to get back up. Downtime is detrimental. It could be the network, it could be a breach, it could be the bug. This is where the power of the platform comes in. This is where Splunk core platform comes in. This also rolls up to Cisco. One Cisco story. The one, the one portfolio, pulling all of these products and telemetry together. The platform, what does it do? It really does pull data from anywhere in your environment.
The telemetry comes together, we can correlate it, and the outcome is three big things. One is it distills the volume of that data so you're not sifting through, you know, tons and tons and tons of data. You can unleash AI to help do that and what you're left with is observability everywhere you can. Then with your, you know, the folks that use, use the platform, they can have a much easier time understanding what's wrong. And get your company back up, read and ready. That's the platform. I want to ask Lauren a little bit about security. She spends her days there. Yes. I talk about, I feel like number one. I'm super excited because the way I look at our partnership, and I call it a partnership, because my day job is a security solutions engineering leader at Splunk.
I talk all about the security specific applications that sit on top of Core. Without Core, there's no us, if you think about it. Right. Because we need data. I always talk about how Splunk, like, nobody does data better than Splunk. And it's true. All the data comes in at that core layer, and then we do all the magic on top. When we talk about digital resilience, I love the story that you brought up. If something goes down and if it's a breach, because cyber threats are among the number one risk factor as it relates to digital. Like things that can impact digital resilience, we gotta figure out, okay, it's great to prevent crime, prevent it. We always want to prevent. What happens when we face reality and remember that things will get in, it will come in.
When it does, how do you identify what it was? How do you stop it? How do you stand and keep strong during an attack? That's where the Splunk security applications that sit on Core start to shine. Right? We start looking at this as not just one single tool. It's a multitude of features. You need to have at the foundation our enterprise security SIEM. That's what we're bringing in. I look at Cisco Network as sensors. Think about all the devices in the network bringing in the data that's a sensor, sending it all up to our SIEM. We're bringing in XDR data. All of that coming in into the enterprise security SIEM. Okay, now that we have all these, all this data, too many alerts, alert fatigue. What do I do? We need to automate.
We have a single feature to automate, and that's our SOAR platform, Orchestration Automation. Let's take all that manual work. What do you, what takes you 20 minutes to do? We can do it in one. Literally, let's do this, let's orchestrate. It is not a separate thing that you're going. You're natively accessing that automation within enterprise security, which sits on top of Core and all the security, all the Cisco devices. We have an easy plugin TA. Not a TA per device or TA for the switch, a TA for the routers. There is one single app that takes all of that telemetry like you talked about, Michelle, and brings it in easily for our customers. I get out, you see, I'm geeking out. Let me not even do all of this.
Just know that if you want to be resilient, you have to understand that things will happen. Once we understand that, it's not a matter of just prevention, it's a matter of withstanding through it all. It's not if, it's always the when. Are we equipped? I think that with Cisco and Splunk, we are exactly. One more fun fact about core platform. This week we just announced the newest version of the Splunk core platform. We're, you know, we're swinging the bat on continuing to modernize the platform as well as providing a lot of data management capabilities to our on prem customers, for example, providing a lot more basically like management and cost management to our customers for data coming in, data at the edge.
This is crucial because it allows you as customers to pick what you pay for when data coming in. We don't tell you where your data needs to come from. Tell me a vendor that's doing that. It doesn't matter if you could be in an Amazon security lake, you could be in the Azure cloud. We just love data, give it to us, we federate it, we bring it across wherever it's coming from. I feel like we get too excited. One last point. I think this was really, really impactful for me. Splunk and Splunk and Cisco together, they really do make all of our organizations, our customers, feel like they are the only customer. That's because of what Lauren said about. It's the data, it's wherever you want it to come in from. You get to decide that.
Yes, we do not have that much time and we are here and we got to make sure we bring into the next team. I love being able to work with you like in this capacity, in our day job capacity. Michelle, it is a simple like literally a pleasure. I am sad this is our last day together. Let's send it back over there to Rob, if you do not mind, because I know they are ready to say some other cool things. No, actually, I do not want to let you go just yet. Girls, I want to ask you a couple of questions if I could, because I know both of you are on the show floor with a set of customer eyes, right? Based on your individual experiences and what you go out doing.
Because we all kind of stop down and we live in this very fun artificial environment for a week where we're, of course, very positive. We're talking about all the stuff that's coming out to a person. I'll start with Michelle and then go to Lauren. When you're looking at the show floor and we're finally learning what all has been announced this week, what kind of reaction in terms of where you recognize Splunk integration and how far he's either gone or not gone, and how it's showing up, whether it's recognizable or not. Because I'm assuming that customers may be going through some form of that. Michelle, your thoughts on that? Oh, my gosh. Splunk Integrations is what we've spent the last year on. Ever since they've announced the acquisition of Splunk, they have been heads down focused on Splunk Integrations.
You'll actually see it throughout the entire show floor. Mostly when you're looking at demos within digital resiliency, whether it comes to security or observability outcomes and use cases. You can see it in the dashboards. You can see it in stock of the future. It is visually represented everywhere. Yeah. Okay. Per Lauren, what do you think? I couldn't agree more. I also think it's pretty cool the way we, like, sneak it in there, right? Like, I love that we are not talking, here's Splunk here, Cisco, because it's one platform. A lot of what we're saying is business outcomes. Really good cadence. Point. Case in point is like the Cisco Store. First of all, we have a Meraki stacking there, running all the networks.
You walk in, you get on the wireless network, you're seamlessly doing so, you're looking at inventory like, hey, there's only three teddy bears left. That's Meraki. When you're looking at the dashboards to see, like, why didn't my payment go through? Oh, too many swipes today. That is Splunk Core bringing in that data. I think my favorite part is that I'm glad I don't see, like, Splunk written everywhere, because that's the thing. We're all. It's one platform. Yeah. Where would it stop? Like. I don't want to do that. Right. I love that we're together. Anyway. Lauren, I'm still. We still have a little bit more time, so I'm still gonna use it up.
Lauren, you had a fantastic conversation that I only got to kind of pick up on by watching you from a distance, but you were talking with Jessica Oppenheimer in the SOC. We haven't had a SOC over there before. I had noticed that there were Splunk dashboard additions to the big NOC wall of data. I don't know what they call that, but I love it. I just love the whole feeling of just kind of War Games kind of thing, you know, having your finger on the pulse. It strikes me we can create dashboards. We create little widgets. I've been seeing this in the demonstrations too where it's almost like it's less on finding what you did before and more about, hey, you can create anything. You need to get your finger on the pulse of something.
In talking to Jessica, what did was the Splunk integration and the ability to parse data from a security perspective and look at it at a deeper level. Is that being used? You said in the SOC as well. Absolutely. That is why. I gotta give you some backup here. I knew I met Jessica two days before because I saw the SOC and I'm like, are we using Splunk? And the answer is absolutely. There's all of our security stack. The entire Cisco plus Splunk is being used in the SOC. The dashboards we keep talking about, the visual aspects are definitely. You see Splunk right there front and center. We also have, like, some of our firewalls in the mix there that sending data.
Like, to my point earlier, the Cisco network becomes a huge sensor in the way that I look at it. That network is there. Remember, the NOC is next door. We're basically taking all that good stuff at the NOC and bringing it into that dashboard in the SOCs to help us realize what's happening in real time. I think, Rob, you're the one that brought up the fact that we've been kind of looking at everybody's security best practices as users and seeing, like, are you clear text password-ing things? Are you using POPs and things that we shouldn't even be using? The answer is yes, overwhelmingly yes. There are so many people here doing things that we don't consider safety best practices. We're kind of like sharing their names on the dashboard, I think too.
I do not want to talk about another Splunk solution, but I have to. We even automated the ability to try to tell everybody, like, hey, you should probably practice safer methods with our SOAR tool. That was the way we can get all these users, because there were a lot that were doing these things and we could not get to each of them individually with our SOAR tool. I love that. I like that. The idea is that you are saying, hey, here is a little note from the security operations center on your hygiene. On your security hygiene, here is what you could do just a little bit better. No harm, no foul. We are all family here. You know what? You leave these environs, it may not be someone so nice that notices, you know, these things that you are doing.
In summary, I think what you're both saying is that the Splunk integration, and it's funny because I feel like this time last year at Cisco Live, we were talking about what's coming and oh my gosh, did the teams deliver in terms of what's actually here? This is the time to do it because this is where people are looking for answers. Thank you for letting me drag you two guys out and sharing all your Splunk knowledge between the two of you. Love working with you. We'll check back with you here in another show, but thank you. I'll let you rest for a moment. Guys, we have got another video coming up with an interview with someone you may all recognize. I'm going to not set this up. I want to go ahead and see it with you.
Let's take a look at that now. Hello. I am super excited to be sitting here to have a conversation with the one and only Chuck Robbins. Chuck, how are you doing today? I'm good, Z, how are you? I am doing great. You know, we're back here for Cisco 2025, 40th year anniversary. What's top of mind for you? I mean, first of all, our customers and our partners are all here. The first thing is, let's make sure they have a great week. They're gonna get tons of education, they're gonna get a lot of time with our employees and our executive team. I think it's probably the most innovation we've ever announced at any Cisco Live that I've been going to. We're pretty excited about that as well. That's awesome. We're all excited about it.
You know, AI is the top of all of our customers' minds. I've heard that. Last year, you know, we talked about taking them through their AI journey. What's new for us at Cisco and what can our customers expect, how we're going to take them further in their AI journey in this next year? We're going through that journey with them together and I think we're all learning as we go. I think the big shift is that customers are moving from chat bots and, and queries for information to this notion of agentic applications where agents will actually be performing tasks for us and communicating to each other. When you talk to customers about where they are with AI, there are three things that they're really, that are sort of impediments that we have to get right together.
First is they have to have modernized infrastructure. They gotta have the right networking, the right compute, because these agents are going to be communicating from perhaps the edge of the network back to the data center. The performance has got to be real time. We have to have modernized infrastructure. We have to ensure safety and security, which we're doing a lot of work on in that space as well. The third area that our customers are looking at is I need the skills. We're also doing a lot of education and trying to help as we learn too. We're trying to help educate our customers, employees, and just share the things that we know to help them be successful. Exciting, exciting. You know, the energy in this room is electrifying and they're all expecting something great.
What's the one thing that they can take away from this event? One thing that they can expect to take away from this event? I think they're gonna leave here hopefully believing that we're innovating at a pace like we haven't in decades. Second is we should be a trusted partner for them to build out their AI strategy for what their infrastructure needs to look like. The third one is we hope they leave here with the belief that we can help them secure their AI and leverage AI for their own security in ways that they didn't know they could when they got here. I love it, I love it.
Chuck, thank you so much again and we're super excited for all that we're going to be doing here at Cisco Live 2025 and thanks for all the work you're doing this week to make sure that we have a great experience for everybody. It's my pleasure. It's my pleasure. I'm so, so excited to be here today. We're looking for so much, so much more. It's going to be fun. You know, I kind of, I kind of love seeing those two together. I just love the camera shot. Z kind of just so gorgeous with that look and the camera there and then. Anybody has never had a chance to meet Chuck and many of you have, but he is very much as laid back, legit, straightforward and easy to get behind as we continue to see every time we talk to him.
I think what a lot has been happening here from what I can only chalk up to G2's leadership here as well. The way everything has come together with the number of changes that we've made across the platform. I think that I can't regurgitate the three things that he just covered, but his checklist of things that we needed to say this would indicate that we are successful this week. It really does feel like that is what's happened. The little bit of time I've been able to talk with people from outside my own Cisco bubble seems to confirm that. Now we have to digest this. We gotta make sure customers understand this and we've gotta move forward.
This notion of operational simplicity through unified management, next generation networking devices with purpose built for AI workloads, up until the part where I said purpose built for AI workloads, sounded like everything we've always said and everything we've always done. I honestly feel with the focus around Silicon One, the reduction in the number of management platforms, the renaming of things that we have affectionately called, and we're always going to remember acquisition oriented names because that's how we kind of talk to each other in the behind the scenes. It is functional, it is relatable, and it is something customers can get behind because now we're easier to speak to. I would say that we are definitely innovating at speed and I'm very happy to get a chance to be a part of it.
We've got another center stage presentation coming up focused on really the changes you need to understand around AI infrastructure. In the same way that we're learning how data centers are not the data centers and the workloads of the past, we're dealing with something completely different. Doesn't mean those things are gone. We need to be prepared for what's already coming now, not next. With that, I'd like you to go over to center stage. Thank you for joining us. Thank you. Good to see everybody. Thank you everyone. We are between you and closing Cisco Live. I'm Murali Gandluru and what we're going to talk about is something that you've heard as a theme throughout the show. First and foremost, as you know standing, there's big shifts happening in the data center. You've seen all the announcements.
Obviously it's been predicated by the fact that enterprises are dealing with a lot of complexity in managing their networks. They're dealing with AI infrastructure and managing management of those infrastructure. That's also new. How do we help organizations that are facing those complexities deal with that while managing also security concerns? That's basically the premise of this presentation. If you think about it, you've seen a whole bunch of hype cycles and obviously this time last year we were talking about LLMs, we were talking about how LLMs are going to change the way we do work, the way we play all of those things. This year it's about agentic agents, agentic webs, all of those elements. Going forward we're talking about physical AI.
In all of this, enterprises are just trying to figure out what does it mean for my network, what does it mean for my infrastructure, how do I future proof that investment? That's primarily some of the elements that we'll be talking about. When you think about a network that enterprises have to manage, enterprises look at this as parallel GPU workloads that create unpredictable bursty traffic. They create a need for things that are more than the traditional load balancing techniques, for avoiding congestion, but also for utilizing the fabric to its maximum. One of the things also that that implies is that AI jobs require lossless and predictable packet delivery.
Combining all of these to do that, you will have to also on top of that, you have to also factor the point that not everybody is deploying backend environments which are primarily GPU load based, but also front end which has a mix of CPU and GPU environments. In such an environment, do we have answers out there? Cisco definitely is ready for both of these worlds. The answer is not more tools or more different management paradigms and operating models. It's actually the fact that Cisco has always in every environment focused on providing a single converged option operating model and the management plane for these environments. Front end, back end, storage, management, all kinds of networks. Whether you are managing modernizing your traditional workloads, whether you're building out new backend environments, whether you're moving from InfiniBand to Ethernet environments.
Cisco's approach has always been about unification, scalability, and providing security at scale. For that we have to start from the foundational blocks. One of the things that we've talked about again and again, both at Jeetu's keynote and the various deep dives that we had, is that we start from four key principles, right? Differentiated silicon built systems. On top of that, differentiated silicon optics, our own optics as well, software, and the operating model. These four elements together help us build out this foundation that you see out here. You see the Cisco networking portfolio built out of Cisco Silicon One and Cloud Scale ASICs. The range goes all the way, especially for AI workloads, 400 gig, 800 gig, and onwards. We have liquid cooled switches that we showed at this, at the World of Solutions. It is so exciting.
I have not seen such a crowd for a long time. Those are Nexus switches, 64 port, 800 gig switches out there. Innovations across the board, built in house. Bidi Optics. Bidi Optics is something that's really close to my heart because this was invented at Cisco. When you think about it, we are able to provide a lot of savings to your environments, operational environments, by reusing them as you go from one generation to the next. Here we are talking about 400 gig. In addition to that we have 800 gig QSFP, OSFP, DD optics for AI environments. Of course you marry that with the compute and cloud based delivery that we have with Nexus Hyperfabric that you'll touch upon a bit more as well.
All of this are the key elements of the foundational block for enterprise workloads, for AI. When you think about that, ultimately when you have all those foundational principles and blocks, and then when you look at what does it take to actually power those networks. It includes some amount of innovation in the data plane, it involves some amount of sustainability innovations that I just referred to. You have to be able to monitor those environments and finally you want to be able to take the data that is built out of that network, it's a wide network, multitude of networks, and then stream them out into a data lake, get insights that you can actually do much more richer outcomes with because they also go cross domain.
These are things that have been the theme of Cisco Live in many ways. This session is sort of recapping all of the announcements, at least as far as data center networking is concerned, and bringing them all together and highlighting what they mean for us. Right. I'll just quickly go over a few things, Jake, if you don't mind. One of them is around intelligent packet flow. Right? Intelligent packet flow is a bucket of capabilities that is built out of Cisco Silicon One differentiations, and essentially basic ECMP doesn't work, is not enough anymore, right? You want to be able to do things when you have especially a mix of CPU and GPU workloads.
You want to be able to do things like flowlet load balancing for the traditional workloads, as well as be able to do packet spraying for job latency sensitive GPU workload to workload communication. Can you do that simultaneously? Can your network be flow aware? Can we actually send telemetry data out to highlight the latency and job completion times and other kinds of errors that may occur? Those are things that our silicon brings out. There is all of the elements of optics that we can also bring together with that as well. Intelligent packet flow is a technique or a bucket of capabilities that we are bringing to market with our AI fabrics. Of course, as I was just saying, we have to monitor these things, right? We have to monitor this and where else other than the Unified Nexus Dashboard?
Unified Nexus Dashboard essentially now allows you for AI jobs to take a global site level view. Once you've looked at the site level view of all kinds of networks that you're managing, go to the backend view, backend network that you are specifically zooming into, and then look at the jobs that are there in those backend networks, and within that, see what kind of anomalies exist, what kind of healthy or unhealthy optics exist across a fabric, what kind of cable mismatches exist, things like that.
Once we look at it and then we go and then zoom into a particular job that we want to double click or triple click into, what you get to see immediately is actually you're able to go down to the point where you see, okay, it's this particular GPU that is sending a flow out or probes out across a particular link that we are seeing degradation. You can actually go back further in and see that, okay, that degradation is actually caused by a temperature exceed threshold issue. You're able to go from a macro level, a zoomed out view, all the way down to a particular job, a particular anomaly, get more information about it and then take action. This is unprecedented because it also applies across both front end, back end, all kinds of environments.
This has been built on the foundational principles of the unified Nexus Dashboard. This Nexus Dashboard now allows you to manage not just Nexus OS fabrics, but also ACI fabrics and also the plugin expansion into third party and storage area networking environments as well. It is the complete package and you get to see services like AI, see services like data broker, services like Insights, and then of course streaming this information out into Splunk Splunk Cloud through connectors or through the embedded approach as well. Apply all of the AI elements to this and you get a complete solution. What I wanted to then highlight is that this is one approach, one approach to how people deploy fabrics.
Jake, one of the things that we've talked about also is a cloud delivered model that is much, much simpler and easy to design, build and run. Do you mind taking us through that, Jake? Yeah, absolutely. Thanks, Marley. Hi everyone, my name is Jake Katz. I'm going to step you through Cisco Nexus Hyperfabric AI now. Just to set the playing field, what Marley just talked about, Nexus, Nexus Dashboard fabric, private cloud manage is something that you guys are probably very familiar with. Very flexible, right? Cloud scale. Silicon One operates with all kinds of partners. What I'm going to talk with you about is a different operational model of deploying AI clusters. Hyperfabric AI is specifically a, I'll call it a pre-designed cluster. Obviously you design what you want in the cluster managed through the cloud.
The operational model is cloud managed on prem AI, switching, AI GPUs and storage. It'll take you through starting with design, then build. Design, order, build, it lands, validation, then operation, then adds, deletes, the entire lifecycle of the data center for that AI cluster. Let's get a little bit more into that. It is built off the basis of Cisco Nexus Hyperfabric. Hyperfabric is actually available today. Hyperfabric is just the switching element on prem that is managed via the cloud. It does the same thing that I just mentioned. It starts with design, build, validate, run, total operational lifecycle for that switching and element. What we've done is we've taken that and moved it one step forward. What does that look like?
Here at the top, what you see are the switches, pods of plug and play leaf spine switches. These are very high performance, 400 gig, 800 gig, eventually 1.6t Ethernet. This is all based off of Ethernet. High performance Ethernet, cloud managed. If I stopped right there, that is Hyperfabric that's available today, that's orderable, shipping today. What I've done is I've taken that and brought it down one more level to now the UCS servers. This UCS design here is basically an HGX device. Eight GPUs, H200 class with Nvidia Bluefield 3 NICs for East west and a Bluefield 3 DPU enabled NIC for North South. We also have the option for in cluster storage supported on a UCS device, but it's through Vast storage. We have the entire cluster.
You have networking, you have compute, your GPU-based compute, and you have your storage all in one cluster that is managed via the cloud. It will take you through that entire life cycle. That design, build, validate, operate, lifecycle. I'm going to get to that as well in just a second. I think I have time. Also, I want to point out I'm really talking about hardware stack here, but we also support the entire NVIDIA AI Enterprise stack. So NVIDIA AI for Enterprise along with their NIM containers are fully supported here. Okay, a quick step through. We're logging in. Actually, you could log into Hyperfabric today if you wanted to without the AI portion. If you have a Cisco login, you could go in and log in and play with the tool. I'm going to show you the AI extension.
What's the URL for that? It's hyperfabric.cisco.com I believe it is. You asked me a hard question there because I actually just have it auto populated. What would I do? The first step I do and what I'm showing you on the left are actually fabrics you could build today with Hyperfabric and then you would just go the AI extension. Because that's what I want to build. I go through that. The first thing you would do is decide on the number of GPUs you want. The count of GPUs is going to be the basis for everything we build. It's going to decide the back end network. If you decide on storage, it's going to decide on the storage network. The whole thing is going to be configured from that. We're going to do it based on the count.
It might be hard to see, but we're actually starting from a small to medium to large. These are just delineations for counts of GPUs on how we build out the network fabric. You'll see if I break it down, you'll see. Start at the top, back end switches, management, leaves, storage, leaves, GPU servers, storage servers. You would go in and configure all of these through the management tool. Day one. This is how you would build your design. Okay, for this demonstration we're just going to select medium AI cluster, which is up to 96 GPUs. You do it based on GPU count. It would then explode each of those boxes that we saw there. At the top you see backend leaves. You would go in and configure all of your back end leaves. The tool will actually help you.
It'll go down all the way to the optics count and the type of optics. If you wanted to actually do cable planning, it would actually lay out the cables as well. We're trying to put everything together to make it easy, as easy as possible. I don't want to say the easy button for AI because I don't think there is such a thing for landing AI clusters today. It's just not, it's not reasonable to say that. What we're trying to do is give you access to the quickest time to value because you're landing really valuable assets. Those GPU servers are worth a lot of money. You don't want them sitting for three to four months not being operational. That's the whole key behind Hyperfabric AI. You'd go through, you'd configure each of these. You can double click down, go into optics.
You can actually even assign rail groups if you're familiar with the concept of rail groups. It's basically GPU to GPU affinity. You actually wire that into the leaf switches. You could do that. We allow you to do that. You would go through and do that for starting at the back end leaves. You would configure management leaves, storage leaves, GPU servers and so on. What is this going to do? At the end of the day, it's actually going to spit out a bomb that you could then order, right? You would order it and then of course the nice Cisco packaging, it would show up. It gets rid of all the spreadsheet backend work that is typically operators. If you want to do that, you are welcome to do the spreadsheet. That's actually what we're trying to get away from.
We're trying to actually put that into the tool. Like I said, it's not going to make you count those number of optics, number of cables. It's going to do that for you. It's actually going to map out the cable connectivity, the topology. For instance, if you wanted leaf spine topology, which is pretty familiar to all this, that's pretty simple. It would lay that out. If you wanted a rail based design, the connectivity is different and what it's going to do, it's going to get shipped to you. Cisco Professional Services or Partner Professional Services will help you bring this up. Once these switches plug in, once they have network connectivity, they validate to cloud. They authenticate into the cloud based management device.
Do the UCS servers say they are part of this group and then they will validate versus the BOM that you configured. If you miss wire, the management tool is going to tell you that you've not configured this to the design that you put into the tool. It's a pretty powerful design. I want to spend just a minute on our relationship with NVIDIA, because it spans both of our products. About a year and a half ago, at Cisco Live EMEA in Amsterdam, a year and a half ago we announced our partnership with NVIDIA. It was originally around Hyperfabric AI and it was us to do the kind of things we're doing, putting management agents on the NVIDIA HGX-style servers as well as the BlueField-3 NICs, because we control those through the management controller in the cloud. That has evolved.
At GTC this year we announced a Cisco Enterprise reference architecture that was validated by NVIDIA. We have two of those. We have one based off of Hyperfabric AI and we have one based off of Cisco Nexus. You have your options in how you build this. It follows the very similar small, medium, large configs. Those are based off of NVIDIA reference architectures that we have done and validated ourselves. That is a Cisco reference architecture for enterprise that we have created ourselves. What does that mean from componentry level now? Cisco Networking. Right. You have your choice of Cisco Networking, whether it's Hyperfabric AI, whether it's Cisco Nexus. You have Cisco Compute. You can use the dedicated compute service that I use for Hyperfabric AI or any of the other ones or you could bring them.
Bring UCS and the combos there. Absolutely. You have that partner storage for me and Hyperfabric AI I'm building today solely with Vast. You could bring partner storage into the cluster. It would go through a north-south interface as opposed to an in-cluster storage leaf. We have great partners across the board. Right. Both Vast, VAST Data, DDN, and then NetApp, Pure. We have a number of IBM. Yeah, exactly, absolutely. The NVIDIA software stack runs across all of this. You have the option to run NVIDIA AI tools across all of this or your own tools or if you wanted open source Hugging Face-based tools. You have all of that in flexibility between Nexus Hyperfabric AI, UCS, and our storage partners.
Just in terms of what Murali and I have talked about, two different operational models, one private cloud on prem, one cloud based public cloud, one pretty much bespoke. I'm going to create that cluster for you. You can add, delete, change to it, but I'm really going to dictate back end network. I'm going to dictate the storage in it, I'm going to dictate the UCS server. One much more flexible where you can pick whatever type of server, whether it's GPU based or not, and connect it in through Nexus. We both leverage our own Cisco ASICs, which is pretty nice, and then our own Cisco Optics. Absolutely. That gives you everything from front end, back end storage network. Pretty nice. Okay. Onto something I think really exciting. We have a customer and a partner here with us today. Please welcome to the stage Michael Israel.
Michael is the Chief Information Technology Officer for the Kraft Group and the New England Patriots. We're just going to do a quick Q and A with Michael. Thanks Michael. Michael, how has Cisco Live been? Awesome. Awesome. I'm ready to go home. You, we all are. Michael, I think there's been a lot of announcements around the data centers and particularly around data center networking. From your perspective, how has data center modernization supported the scale and speed of NFL operations? Especially someone, especially during game time, but also post and pre game as well. From your perspective, from infrastructure, from an NFL perspective, anything that's happening on the field is actually controlled by the league itself. Same thing with Major League Soccer. Everything we broadcast in the stadium our network controlled. Our IPF network is running point on that.
We are having to deliver product at higher definition, faster, more efficiently throughout. It's expanded beyond just what you see on scoreboards in suite Experience IPTV throughout the stadium. I have over 3,000 TVs throughout the stadium that we have to distribute content to. In that context, what role is AI playing particularly? I believe that when we think of, when we think of a particular game time experience, we're thinking about the stadium, but you probably have factories and other kinds of environment. What role is AI playing in your complete end to end operation side? Outside of Gillette Stadium, the Patriots and the Revolution, too, are soccer team. The Kraft Group also operates paper based businesses.
I have a recycling plant in Connecticut, I have 10 cardboard box manufacturing plants along the northeast, and a commodities business that moves products all over the world. Actually, my networking team, some of who are represented here in the audience, they're trying to keep pace with what we are doing from a business perspective. Right now we are evaluating where can, where is, not where can, where is is AI impacting us. Where can we take decisions out of the hands of humans and look to basically make our operations more efficient. It could be something as simple as transcription services where I have three people that all they do currently is transcribe press conferences, bringing AI in, we essentially eliminate the need to do that and we're bringing forward processes in which we can automate a lot of these pieces. The ticketing, purchasing.
If you're looking to buy tickets, do you really have to speak to a human to do that? Can you go through a bot to do those pieces and pick which seats you want in the manufacturing side? If I have 400 orders in a queue, what sequencing should we do to make sure that that paper, it's a roll of paper, is used to the best so that we don't have waste. We are really sitting with all of our stakeholders evaluating how they do business across our enterprise. My networking team is having to keep pace with everything that we're doing. It is a rather interesting time right now. Amazing, amazing. Cool. I get to ask a question. Yeah.
Prior to the show, we did a couple of talks with Michael and I found out some very interesting things and I was wondering if you could share some of the things that we had talked about before. Like you were, you were talking about how you do things in security, in waste management and tracking of food orders of objects left behind in the stadium. All these use cases that are very fascinating that you're doing today. Now one of the most interesting things we found is that most organizations have security cameras. We have over 550 cameras in our stadium. And that's dormant data that just expires after a certain period of time and gets deleted. We are intercepting that data with an AI tool to look at video anomaly detection, to look at what's happening in our situation.
If somebody leaves a bag behind, we can immediately detect that, bring it up to our operations control room and detect where that person, what was their journey, what cameras did they pass, lead it back to their car and say, oh, he or she was not alone. He had three other people with him and look, they all have bags. That has never happened, thank God. It can happen. We also do things like monitor a garbage can. If a garbage can is overflowing, it will detect that and create a ticket in our custodial system and dispatch someone to empty that garbage can. After 500 visits to a restroom, we can track how many people are going to a restroom and do the same thing, detect and then create an action based on it. We do the same thing in our manufacturing facilities.
We can detect the type of raw materials coming off of a truck and provide guidance to our forklift operators. Put that in the warehouse, put that on a conveyor belt. We can track how trucks are being loaded. Rolls of paper weigh in excess of 3,000 lbs. If they're loaded incorrectly, a trailer can tip. If we can detect a trailer being loaded improperly, we can create an action into our ERP system and prevent the truck from leaving. All of these pieces are meant to make us work better, make us work efficiently. More important, increase your satisfaction without even realizing what's happening at the stadium and you're doing at scale. Like for one of these events, not just people in the park, but outside the park.
Like, can you share with us like what kind of scale, like what kind of population of people that the cameras and sensors are. We have a large concert will have an excess of 60,000 people in the stadium. We're pinpointing down if there's a fight that breaks out, if a pipe bursts. Any type of. I describe it best as like we all grew up with Sesame Street. What in this picture doesn't belong. That's essentially what these types of systems are doing. You're training the model to say I'm looking for this. If anything else enters that you do. This is all with data that's just dormant right now. We're just reusing straight security camera footage. Doesn't matter if it's a, it doesn't matter what the manufacturer of the camera is. We're just looking at that data coming in.
That's not just in the stadium, that's outside the stadium. Outside the stadium as well. You could have a few 10,000 people even outside the stadium. Yeah, we learned that the hard way last year with a large concert. I was on my radio and someone said, we have 62,000 unique clients on our Wi-Fi. There's only 52,000 people here. How can that be? There was a TikTok challenge going on in the parking lot with 10,000 kids that jumped on our Wi-Fi network. It's amazing. Yeah, that's funny. In terms of operations, teams, perspective, one of the things that we discussed a while back was you're dealing with architectural changes, you're dealing with all of these at the same time.
What would you tell a CIO who has to buy into a modernization project as these evolving architectures constantly create new challenges? What would you tell them about why and when? In what scenario would you go for a modernization project? Especially maybe around the network? If you're thinking about a stadium, we span up when we have 60,000 people or 30,000 people. We're used to those pieces. You're not necessarily used to that in your corporate environment. Our corporate environment sits adjacent to the stadium. If you're bringing AI workloads in, all of a sudden you're getting spikes. All of a sudden you're getting items that are happening that you're not used to. My team is having to look at our corporate network and say, what's happening here? Why are we starting to see changes?
The other big challenge that we have, and we've talked about data governance over the years and all these pieces, if someone has the ability to search for something, they're going to find it. Is that what they're finding going up to the public cloud? We're doing a variety of things right now to make sure, one, we don't constrict what's going on and give people the tools that they're asking for, but make sure that everything that we have is buttoned up so that people aren't getting access to data that's being shared externally that shouldn't be. It's a very big piece of what we're doing right now. It's one thing to move forward with AI, but we're doing it with risk in mind and with a security conscience mind as well. That's amazing.
We are really honored to have New England Patriots as a Nexus and a Cisco customer and across multiple paradigms. You mentioned IPFM, the media fabric, but also two Nexus architectures as well for AI and other environments. Thank you for taking the time and I hope you had a great Cisco Live. Thanks for having me. We certainly had a great time. Thank you everybody in the audience. Really appreciate you. Thank you. Thank you, Michael. Just a reminder. Yep, I'll just leave this up, Michael. Thank you. Hey there, everybody. Welcome back to the Cisco TV studio. We are broadcasting to you live from beautiful San Diego Cisco Live 2025. I'm Steve Molter. We have just come out of another fantastic Center Stage session. We were just talking about how we build secure AI infrastructures.
We heard from Murali Gandluru, VP of Product Management, and from Jake Katz, our VP of Product Management for AI Solutions. As our organizations continue to accelerate AI adoption, let's face it, we have to do it. This is not an if. We all know this, this is a when, this is a how, right? Networking complexity, security concerns, these are not just potential barriers to successful deployments. Our AI workloads have got to have specialized networking architectures that can handle the massive data flows while we also maintain enterprise-grade security and observability. It's a big task, right? What we just heard is how modern AI networking solutions can address these critical challenges. We saw how simplified lifecycle management and intelligent automation can reduce our operational overhead, how it helps to guarantee optimal performance for our AI workloads and for our people.
We heard about some great approaches to provide unified visibility and control across our complex AI networking environments. So cool. Again, another great Center Stage session. In just a moment, we're going to be going out to Rob Boyd and he will do a behind the scenes interview with the man of the hour, with Murali Gandluru. As soon as we see him there, we're going to send you back over to that, but before we get there, a couple of very important things. Number one, we have a big afternoon still ahead of us on the broadcast. We've got one, two, I think, I believe two more Central Stage sessions coming your way. We've got some great interviews, we do have some videos of some great captures earlier here in the week that we want to share with you. Do not go away.
Stay with us here on the live stream and remember, keep reaching out to us on social media. The show still has a while to go here and we want to hear your thoughts, your feedback, we want to hear what is inspiring and exciting you. Keep posting on whatever platform you choose using Cisco Live or at Cisco Live. As promised, we are going to head out to Center Stage. Rob is with the man of the hour. Hey there, pal. Yes, absolutely. Morali, is that correct? Moody. Is it? Yeah, we are live now. Sorry, things asked around here. No, do not be sorry at all. This is high pressure because we have a lot going on very fast, but at the same time it is all family and it is all good.
You just came off the stage and you look amazingly fresh for someone, at least the way I generally come off of a stage. Tell me first, what is your, what's your full name and what is it that you do for Cisco? My name is Murali Gandluru and I'm responsible for the data center networking business at Cisco. It's a really. There is an area that's changing a bit right now. Yeah, absolutely. If you've been here and not living under the rock the past few days, Cisco Live has been all about the data center. The AI ready data center had a host of announcements, very, very exciting times.
I feel like, and this comment's been made several times by Chuck, G2, and many of us in various places about how we're needing to execute and provide a direction for everyone at the same time. Everyone, that includes us to a certain extent, is still figuring things out because we're kind of learning in real time as we're also expected to deploy and act in real time. I'm extremely impressed with the number of things that we've come out with this week in providing foundational answers for everyone. What is your perspective in terms of how customers are ready for the type of solutions that Cisco's provided? You're absolutely right. We're all—the pace at which innovation is happening right now is mind boggling. Essentially, if we look at our solutions, both infrastructure for AI as well as AI for infrastructure. Start with infrastructure for AI.
You've got to think about what your data center first of all is going to look like, what the ratings have to be for wattage and that defines your compute, your network, what kind of network connections you have to build out, what kind of halls, strategies you have to have and then what kind of power and cooling you need. Right. All of that on the AI for infrastructure is how do you manage all those, how do you get the maximum visibility and how do you get insights out of those? This and I guess what even prepins that is the notion of what do you need your AI ready data center to do and some people like kind of the odds options feel so broad in many respects and the ideas are so good.
You want to execute and not lose any time at the same time. You don't want to actually be caught off guard if you start investing in a wrong direction. I feel like one of the first things people need to understand is that our traditional workload-based data center with virtual machines and everything like that is still needed of course. That same data center is not ready to take AI workloads. They're somewhat two different beasts, especially from power, cooling, density, and all the things we're dealing with. Do you feel like that message is understood and we're moving beyond that now? You're absolutely right. You know, that was a very insightful point there because we're talking about front end, back end storage management environments and by the way, front end, well, backend environments are understood to be GPU completely based. Completely.
Front end could be traditional workloads, could be a hybrid of traditional and GPU workloads. Your paradigms of your network, how do you build out that network, scale out, scale up, all kinds of this. You're right, you have to have a strategic thought on what outcomes you're looking for and who the customer base for that enterprise is as well, their internal customers. All of those have to simultaneously be gone through while you're starting to plan out your infrastructure. You're absolutely right. Sounds simple, of course. I mean, I do feel like what we're trying to also portray here, and I feel like we're succeeding at this, is there needs to be a level of confidence that we do have a lot of knowledge and a lot of execution.
That's already happened because last year at this time we were talking about what we generally expect a lot, which is what's coming, here's what we're planning, here's what we're doing. Now all of a sudden it's like we don't just have a few big things to talk about, we have quite a few big things. I can't imagine what's been going on behind the scenes. Busy lately. Very busy. The good thing about this Cisco Live in particular is that these announcements were of things that we're actually shipping now. It's amazing as in addition to stuff innovation that we have also highlighted that's coming. We've got a range of things, you know, like liquid cool switches that are displayed there. We've got smart switches with DPUs built in and security services turned on.
I mean we've got a whole bunch of AI-based offerings for job monitoring, visibility assistance. All kinds of things that help our customers. Cisco has a massive business customer base and to navigate through these changes. This is a complex world. Cisco is here to provide a common operational model, a common operational outcome for them that helps them through this. Do you find yourself doing a combination of education and awareness both because when we move fast, I mean everybody's moving fast, but at the same time we are still receiving kind of at the same speed. Sometimes our input cues are getting hit. I feel it kind of heavy. You must be going through a lot of education. How are you? That's what my weekends are for. I'm just going. Taking Coursera classes every week.
Yeah, I like the Coursera classes though too. Right. There's a lot of places. There's no shortage. Heck, I've been playing with how AI test me on different things going forward because I'm trying to figure out how can I remember some of these details to share with them. It is complicated. The nice thing is that we are able to digest things very quickly through the new tools that we have. Cisco has provided a really fantastic internal infrastructure for its employees actually to be able to use AI tools confidently in a secure manner, in a predictable manner and be able to bring outcomes that they ultimately then learn from as well as deliver to customers and partners and other stakeholders. Absolutely. What would be the most ideal thing, a single thing out of a.
Probably a lot of them that you would want someone to take away from the conversations you've been having, and if they remembered anything, what would that be? What I would say is that we're going through a momentous time here with the data centers, AI ready data centers. Cisco is here as a trusted and reliable partner to navigate this journey. We certainly understand that enterprises have a lot going on and providers alike, actually. We are here for our customers and partners alike. Yeah. I think you guys have done an outstanding job. This is the part where I have to kind of prepare for the fact that we are, we're going to be shutting all this down here shortly.
This moment in time that you and your teams have worked so hard to create, obviously they continue because we have a lot of work to do going forward. It's going to be exciting to see, you know, in a year and kind of look back and compare because it's hard to imagine what else would be there. Thank you so much. Thank you. Thank you for having me. Absolutely. Appreciate your time here. Appreciate you on the stage and bringing in the partners and the customers, customer stories. With that, I'm going to go ahead and throw it back to Steve. Thank you so much, Rob. Our thanks to Burley as well. Rob, do me a favor, hang out close. I want to come back to you. I want to, I want to kind of chat with you in a couple of minutes.
I'll be right back with you. Just don't stray too far away. I love what we just heard from Modality there, because it was really a good summation of what Cisco does and how only Cisco can do what it does in terms of combining that power of the network along with security, observability, collaboration. He just sort of put that together. How do we get people and technology working together on the same page? I think that was a fantastic story from Michelle. Yep, absolutely. Okay, thank you so much again, my friend. I want to now head on out to Michelle, who is at AI Data Center Networking. She's going to take a deep dive into some new AI capabilities while we still have a couple of minutes left here in the afternoon. Michelle, can you hear me, my friend? Actually, I'm not hearing from Michelle.
Okay. All right. We're going to keep it here. In fact, Rob, do you still have ears on out there? I don't know if Rob, you can still hear me at all? I think I could go to him for a moment. No, maybe not. As we just heard from, morally, what's so important is how do we put the physical and the digital worlds together with our infrastructure? So much of what we hear about as we make that press into AI, as we start to push further and further toward it, is how do we revolutionize the infrastructure to be able to support all those different capabilities? I think what Mordly's team is doing out there on the networking side is so incredibly powerful. Hopefully you've been tracking that through all of our different sessions as we go. I can see Michelle up there.
Is she still talking and chatting? Are we still waiting and hanging out here for a moment? Great. I tell you what we're going to do. We're going to change gears just a little bit and go to one of my favorite areas on the show floor. This year, the Cisco Store is always super popular. They sell out of things left and right. When you're here at the show, you can walk through the Cisco Store, you can shop, you can pick up some great gadgets, great goodies, good hoodies. One of them sold out literally three minutes into the conference. It was that popular. What I really love about the Cisco Store, which by the way, this year is right next to registration out in the front, so everybody can easily find it and trip right into it.
It's more about brand activation, it's more about technology demos. Yeah, you can buy stuff. How do we look at the full Cisco partner technology stack, especially as it pertains to the power and the future of retail in a Cisco Store tech lab? How do we put those different pieces together? Let's check out a short video and see what's happening over in Cisco Store. I am here at the Cisco Store. That is not your typical store. I mean, of course you get to shop and get cool things, but it's also powered by Cisco and partner technologies. I think we probably should see what I mean, right? First, we need the shot. Okay, so I'm really sad. It just goes to show, if you snooze, you lose, because there were sweaters and hoodies that went with these sweatpants.
Since I learned my lesson, let me just get the pants. Now I won't have a jacket, but that's okay. I have to get something from the McLaren partnership line that we have. It's absolutely super cute. They have the women's shirts. Oh, they don't have my size. I'm super sad. Think this looks good, but maybe I should try it for real. We have these really great smart fitting rooms that are, like, the best. And they're open. I love it. What is a Cisco store visit without a Splunk T shirt? I'm gonna go ahead and get this one just because it's super coated here. I think I need a hoodie. Oh, I got a ponytail. I can't even wear my hat. It would be nice if there was a bag that I can throw this stuff in. There's a bag and it's already opened.
You see something you like, grab it and buy it, because you cannot guarantee it'll be here later. So that I don't feel that way again, I'm getting this teddy for my baby. She's gonna be really excited. Water bottle. Gotta get it, gotta get it. Socks. I gotta get these now. You know, this is really cute. I know I already put one water bottle in, but I gotta get this one, too. Why? Because. Why not? Oh, look, there's Kaylee giving us store tour. Remember I told you about all that cool tech that's in the store? It's all powered by Cisco. Like I said before, that means Splunk. It's there at the core. We're using a Meraki camera to count people, count items with a partnership with another cool company I just learned about. You really. I think this is a smart store.
Like, would that be safe to say? Because I'm always cold, you see all these hoodies I bought? We have sensors throughout the store. In addition to the Meraki stack that's running the network that tells you how cold it is. You can walk in and see 20 degrees and we walk right back out. It's not that cold. I'm just saying if you needed to, we got a blanket, like I didn't even know. I'm gonna go tell everybody else that there's blankets here. Now if you are looking for a size and it is not in the store, look no more because we have a kiosk here to help you shop. I'm actually gonna see if I can find my baby, my daughter, that shirt I was looking for, let's see if it's on the Cisco store.
You're gonna go type in and oh, they even have more inventory online. I'm gonna go ahead and do this. The beauty of being in the store and doing this is that now you get free shipping to your home. Who doesn't love free shipping? That's the advantage of being here in person. Ordering something that's not in stock. That was so much fun. I am exhausted and out of money. You guys can still come here and shop till you drop before they run out of all this cool stuff, because if I get back in here, I'm going to take the rest of it. Don't forget, there's also pop up booths everywhere for you to still shop for good causes and donate to some charitable opportunities. See you soon. All right. How much do you love Lauren White?
Is she the cutest thing on the planet or what? She has been a brilliant addition back into our Cisco TV host broadcast team. I just adore that woman and I think she spent her entire month's paycheck down there in the Cisco store. Again, Kaylee and Brian, they do an incredible, incredible job with that environment. Now we have double checked and we are ready to head back out to the show floor where Michelle is at AI Data Center Networking. Hello, my friend. It took a moment, but we're together now. Hello, guys. Steve, I can hear you and I agree. Lauren is the cutest person I've ever met. All right, we're down here on the show floor. I'm with my new friend Karishma. She's a product leader in data center networking. How are you feeling today? Feeling great.
Super excited and proud to show you guys the Unified Nexus Dashboard. Incredible. We have a great demo for you. I want to start out with talking about what are the benefits of Cisco Unified Nexus Dashboard. Can you walk us through that? Absolutely. The Unified Nexus Dashboard is a single pane of glass to manage, monitor, multiple architectures, multiple platforms, multiple domains. You can do all of this for your LAN fabrics, your SAN media fabrics. With the Unified Nexus Dashboard you also get the ability to monitor your NXOS fabrics along with the ACI fabrics as well. Just going to walk you through that now. Oh, let's do it. What is the Cisco intelligent packet flow and how does it contribute to better completion time?
All right, so the intelligent packet flow is part of the AI suite that we have in NXOS and Nexus Dashboard. As you can see in this demo, I'm looking at a fabric level where there's a lot of congestion that's seen. Now when you get into the topological view, you're able to see exactly which links are having congestion. The Smart Assist is intended to show us exactly what's going on and what needs my attention. This is the anomaly that can be seen which says that there is severe connection congestion due to ECN and PFC frames. The impact of this is that it's going to victimize servers from the leaf, which will in turn impact my jobs that are running. Now. The recommended solution is to enable dynamic load balancing which can be done at a flowlet or per packet level from Nexus Dashboard.
Now here are some metrics, some really good data where you can see congestion and more importantly you can see the links between your leaves and spines are actually not efficiently utilized, which is where our intelligent packet flow can help. Now I go ahead and enable that using Nexus Dashboard. My congestion is healthy. I go back into my network, the topological view is showing all the links are green, which means my congestion problem has actually been resolved. If I go back to the same stats, I see the congestion score is back to zero. And guess what? My links are now efficiently utilized and that in turn really, really improves the job performance. Karishma, one last question. How does AI enhanced job visibility and troubleshooting improve network visibility? Great. I'm going to walk you through what we're doing with Nexus Dashboard.
Here, this is an integration with Slurm and I have my cluster running. It's an AI ML routed fabric. You can build a routed or a VXN fabric using Nexus Dashboard. This is the one with 2,000 GPUs, 256 servers, four pods, and this is all Rails based design. You can also have PLY based design. Now in my cluster I have four fabrics running out of which as we can see back in network is actually having a problem. Again, Smart Assist will tell me exactly where I need to look into right so two of the jobs are impacted and there are packet drops seen. Now this is a brand new AIML job dashboard which will give me visibility into all the running AIML jobs, the start time, the runtime, and more importantly, if there's something wrong, it'll point me to the anomalies.
There were two jobs impacted. Smart Assist automatically filtered those two jobs and I can get into more details for each of the jobs. We again have very detailed analysis of how my job is doing. A lot of metrics of the switch as well as the NIC. Here we're looking at a lot of drops. Specifically in one interface of the leaf I have a bunch of CRC errors and my module temperature, meaning my optic is actually running hot on that particular interface. Right, so these are some of the issues which are actually in turn degrading my AI/ML job performance. Likewise, Smart Assist will point me to exactly those problems. If you were to double click into any of these problems, you can look at the topology. It, you know, picture speaks a thousand words, right?
We are looking at a very zoomed in version of how my leaves are connected to the servers. I have again four pods. We are looking at this specific server which has 8 GPUs. The problem here is the leaf one to the GPU zero, this link. We also have a problem in the GPU 3. We have a full blown anomaly dashboard which will show me exactly where the problem is at. Right, and what's the impact of these problems? For example, my transceiver temperature is running hot on E11. That's the problem. The impact here is that it's actually causing packet drops and how do I fix it? The nature of all the problems seen in Nexus Dashboard is intended to really tell me what's broken, what triggered the anomaly and what's the, you know, the impact of that particular problem.
Karishma, thank you so much. This is just another great example of the demos that we have on the show floor. There is still time to come down and check this out, but for now back to you, Steve. Thank you, thank you so much. Really appreciate it. Michelle. Michelle and Karishma did such a great job, so bravo to her. We get to this last day, this last afternoon and you would think that people would be, oh, they've been talking nonstop for days and they've told this story so many times and you can tell by watching and by listening to Karishma. This is the dedication, the passion of our Cisco engineers, our designers, our developers. They care so deeply about what it is that they do that we can get to the last afternoon.
They are as on fire as they were at the opening of the show. That is what it means to be here in the space, to get that engagement time with the people who build these products, who care so deeply about them, in order to create a better reality, a better workplace, a better life for all of us. Something about Cisconians that just cannot be matched anywhere else in the world. This brings me to an interesting point here. I want to talk about the things that you just do not get to see when you are here on the streaming live broadcast. It is not just what we bring you from the world of solutions or center stage or the keynote deep dives or the incredible leadership keynotes themselves. It is the things that happen peripherally around this particular event.
For example, last night, we haven't talked about it that much. We had an incredible party right over here at Petco Park with the Killers headlining. Great food, great drink, everybody having such an incredible time. I was up in one of the skyboxes getting an opportunity to look down on this massive group of Cisco people coming together to celebrate, to enjoy. That's what you get when you're here in the space. On Monday, we showed Star Wars out at the Rady Shell out on the bay, on the harbor. Incredible location, beautiful. Heather Henderson talked about that this morning as we were in our chat with one another here in the studio about how exciting it was to add that. I wanted to mention one other cool thing that was happening and that was the IT Leadership Conference, which was happening literally right next door at the Marriott.
People came just for that. If you're new to leadership, great. You want to advance your leadership career, great. What's cool about the IT Leadership program here at Cisco Live is that it's built around educating and around connecting leaders, both in the IT and also in the business world. IT puts those together. There are people who come to that part of Cisco Live alone and they get a much deeper understanding of today's technology evolution. They do it through core objectives of the program. Developing leadership skills, the things that we need to be able to thrive in a digital future. Speaking directly with Cisco executives and with industry thought leaders. You got to be in the room to do that. Get those expert insights. People can attend exclusive sessions with those executives. It's about the networking.
How do we build connections with more industry peers in the IT Leadership Hub again next door at Marriott, so we know how to use the technology to get to those desired outcomes. The leadership program is over there and that includes like TED Talk style segments in their IT Leadership Studios, covers all sorts of hot topics for leadership within IT to look at the future of the industry, including AI and security. Maybe at some point you do not just come to Cisco Live. Maybe you say, hey, I want to be a part of the IT Leadership program as well. It is a great peripheral program that you can check out. Those are the things that you do not always see here on the broadcast. We want to bring those to you as well.
I see up here in my monitor that Rob, I was able to grab you for an extra couple of moments. I can't believe you're actually free to talk to me here. Oh, you too? Yeah. I mean, look at this, Steve. We are, as I understand, I think the show floor closes it in 45. Yeah. Like to like 35 minutes. You see how there's, I mean, yeah, there's less people, the energy level's a little bit different. It was funny. I remember one of the interviews we saw is the, in the, on the center stage earlier, which is just over here. Just give people some, a little bit of a direction. Watch out, we're swinging around wildly, guys.
The notion that, you know, they'd obviously, some of the speakers had obviously enjoyed the Killers last night and they'd maybe, I don't say they imbibed. Maybe they were just up late and they're having trouble with their sleeping patterns. Either way, what is amazing to me is that there are still some very, very in-depth conversations that are going on here. As you look here from the future, this is where we spent most of our time. I tend to think, I don't know how often you feel this, but it feels like we spend so much time on the. Is IT Showcase, Steve. Is that what we call it? Exactly. Showcase. Yeah. Okay. Make sure you got to correct me there on this one.
These conversations are continuing and I feel like this is the entire conference, but it's not, as you mentioned, the IT management side. All this stuff that gets built up, this temporary village. Is that a good way to put it? This village, it's set up, everybody comes together, we have the NOC and the SOC and all that gets built. We have that infrastructure for an event that is now going to all be shut down, will be done with in the last, you know, just in 40 minutes, all these conversations will move back online probably. All the people that could not be here with us will continue communicating with you. A lot of learning now has to be absorbed. I was thinking back, you and I were talking earlier.
Do you mind if we chat earlier a little bit? Steve? We should. You know what? You were going to ask me a question. Let me add something to what you just said. Please do. So cool. I learned something years ago in the events industry and the exhibits as well, and that is that a lot of leaders, a lot of executives, a lot of top brass, C suites, they love the final day of an event. A lot of people think that, you know, kind of the energy drops off. Like you said, less people, maybe a little less dynamic. That's kind of the whole point for a lot of leaders. They can come in at the last day, they can get directly to those demos, they can see exactly what's going on, they get lots more time.
Instead of everybody piling in and it becomes a struggle or a fight to go from a one on one to a one on many demo, all of a sudden they get that person exclusively to themselves. I learned long ago sometimes the final day is the most powerful day of an event for sharing information. I can't remember which city we were in, but there was a point in time, at least maybe a decade ago, we were set up and we were supposed to interview a celebrity that was coming off the closing keynote stage, something like that. The way it was planned was that interview started at the same time the show floor was closing, which wouldn't be a problem here because our studio is not in the show floor. They'd hustled the speaker up. Can we say the name? Yeah. William Shatner. Right.
I was going to talk to William Shatner, which is fantastic. Right? Oh my gosh, what a privilege. We get there and all of a sudden the show floor closed. First there's the PA saying it's all closed. That first interrupts the mic voice of God. Then it becomes all the people that work behind the scenes that are just ready for us to give them the go signal and they drive out. It was like a Tonka truck, you know, extravaganza of some sort. It's all these cherry pickers and big machines and they do not give a hoot about the fact that, hey, we have live mics here. We're trying to do an interview. Do you see who? With me, they do not care. It's all going to come down.
We had to carry it on and it was a nice long form, which was cool because we got to talk about a lot of stuff that was not covered on the keynote stage. I'm curious, what is your most. Have you been. If we have the time, have you been thrown into any unfamiliar situations? More than I can begin to count. I do remember that very well because we deliberately planned and I do not know how it happened. I think it was a change of the show hours or something like that. I think we had like three more hours of broadcast. Literally, I remember, wood is crashing to the ground around us. They are hacking things apart with sledgehammers. All the beep, beep, beep of the cherry pickers. One of my favorites years ago was not on a Cisco event.
I want to go ahead, and I want to say that outright, but I literally was standing next to an exhibit that burned to the ground. They had an electrical fire. What was it that burned? An entire exhibit. Imagine where you are right now. You're in the center, center stage area. You're close by, you're kind of imagine there's a spark underneath the carpeting. The entire thing went up in flames in front of our eyes. They had to clear the show floor. It was on day one of the event. We came back and literally the whole thing was ash because they couldn't really clean it up. They just cordoned it off with yellow tape like there had been a murder. On we went with the entire show because the show must go on.
I'll tell you, when it comes to Cisco Live, we have made things like that happen again and again and again and again. Were things perfect? No. Do they look perfect to the attendee who comes to Cisco Live? You better believe they do. They would never, ever know. Who do we thank for keeping things looking so perfectly despite all the struggle going on underneath the things? You mean who's in the background making it look as beautiful as it looks right now? These are our friends at George B. Johnson who support us. DPJ is in charge of that show floor. Our friends at Sparks Freeman as well. Cisco relies on them very heavily to make an event look so beautiful. We talked this morning with Laura Simmons, Heather Henderson Thomas, Cathy Doyle.
This phenomenal team that takes the executive leadership role that comes down from Chuck, G2, and Carrie and Oliver and the whole rest of the team that come up with the vision. That comes down to the team of Heather and their folks, and they execute what's going on behind you right now, Robin, it looks spectacular. We are not done here. We've still got a little bit more content coming your way. I do not mean to imply anything other than just fun and excitement. I just do want to say while I've got a chance here, I am very, very enthused about where this goes next. I'm excited about all the study and I get to do, but the conversations that continue, you know, as we absorb things throughout the years, we work directly with customers.
All of us have different roles as we do this. And you know what, that's really fun because now we actually get to get to see. I was actually surprised. More stuff is shipping now and available now. It's not a matter of waiting until things are available now. It's more about just getting your hands dirty. But you know, again, as we look out here, I'm just going to say, yeah, get this in your head. This is what it looks like this year. All these people go away, our community dissolves and all the people that we come to know and love and visit every year about this time. We get ready for the next year as we go back to whatever we have to do normally. But Steve, I'm going to go ahead and throw back to you in the studio.
Rob, thank you so much. Nobody better to know that stuff than you. We are going to head back out into center stage once again. We're going to talk about the Cisco data center, built for AI, ready for everything. We're going to throw it out to Jeremy Foster and Dan Wendland. Away we go. The Cisco family. We're going to go through a couple things. First thing is the two big market transitions that are happening inside of the data center today. Then we'll talk about how can you avoid building silos in your environments. Right. You shouldn't have to treat your virtual environment different than your AI environment, different than your container environment, for example. Then we'll talk about how Cisco Unified unifies these platforms and then a new way to control networking at a stack level from within the platforms.
Now, there's a lot of feedback we get from customers, but I think these are three really big common themes. When we are listening at events like this or meeting you at your office, you say, hey, we have way too many tools. We have way too many things that we have to integrate every time we want to take a new challenge on something like AI, for example. It's also really challenging to understand as we take on new IT projects with different types of infrastructure. Then what is the ROI? How do I build this environment? Of course, security is top of mind for everybody these days, for all the reasons that are out there, including AI itself. I said there was two big transitions happening in the data center.
The first, the first one we've already lived through, but it's still out there and we still maintain and live in these environments today. That is, last big one that we've lived through is virtualization. We've gone through the whole cycle of virtualization and now we're in this big AI cycle where LLMs have taken off, we're moving to agentic. Of course, after that we'll move to physical. The point here is that these two things are still in the data center. I mean, there's a lot of folks in the audience. You probably still have some Unix in your data center. You probably still have some mainframes in one or two of your data centers. These virtual machines, these large virtual environments don't just go away because I happened.
It is something to keep in mind because you're being asked to operate both of these environments simultaneously. What we want to do is build a system that allows your data center to be ready to take on both of these challenges simultaneously without creating new operating models. How are we going to do that? The first thing we're going to do is unify the overall architecture itself. If you take a look at UCS and what we do from a computing standpoint, we do not care what the form factor of the server is that you purchase, we're going to treat that form factor the same exact way in terms of how we manage it.
If you're using a big server to cover an SAP type of a workload, or if you're using a small server to cover an edge workload, you can manage it all through Intersight and manage it all with the same templates, policies, and programmatically the same way. We're going to build security into everything we do. Dan's going to talk a lot about that later as well. Make it all scalable. We have customers who can scale hundreds, if not thousands of servers in a matter of minutes or hours. We make this all infrastructure as code, and we do this not just across the compute pieces, not just across the pieces from a security standpoint, but across all these things that you see here. Because what we really have at Cisco, that's a big advantage in the data center.
Is this big broad portfolio where we can help you with optics, compute, networking and all the various things that you've heard about this week and bring it all together for you and do so with really high quality. This is a new study that just came out from Tech Channel where they talked about UCS and just how the quality and the downtime that you're incurring from using UCS versus the competition is much lower. When you look at downtime in the data center, we know that's one of the most expensive things that happens to just a few minutes of downtime can cost hundreds of thousands of dollars. This type of stuff really matters. One of the things I'm most proud that the team we work with for UCS is always focused on is quality.
I think it's been great to see these types of things hitting the wire and hitting the news. I think the other big calling card for what we do from a computing perspective as well as from a Nexus perspective is management. We have to have the industry's best management so we can help you scale, so we can keep things secure and most importantly, keep things in a best practice estate. Deploying infrastructure is not easy, right? And getting to day zero, whether it's a tough process or an easy one, once you get there, that's fine. What are you going to do when you live through the next five, six, seven years of that lifespan of that equipment? That's where you're putting in the work. That's where the automation and the change control and those things really matter.
Innersight is the element manager for compute and we work with Nexus Dashboard, which is the element manager for the networking side. These element managers won't go away. Yes, we'll wrap this into Cisco Cloud Control, but the important thing on this slide is what's across the bottom. These are the things that we're building capabilities to do to make sure we're lowering your total cost of ownership, making things move faster, and overall bring down the amount of downtime you're going to incur in the data center through the operations lifespan over that five to seven years. We don't do it alone because inside the data center partnerships are absolutely critical. I think we focus a lot on working with all the folks that you see here on the screen and then some.
Whether that's our storage partners like Pure, NetApp, Vast, depending on the use case, we can help build you a validated design that includes compute, network, optics, the whole stack leveraging any of these folks that you see up here. What this ultimately means for you all as customers is a lot of choice. The second thing is it means that we also can add a lot of value when we start moving towards things like what are we doing with our secure AI factory? Because we've been doing these types of things for a long time, even before our NVIDIA relationship. We have different flavors inside of the secure factory. If you step back and say what is it?
At its core, this is when we take our partner-based storage, something like Vast, Cisco Compute, Cisco networking, Kubernetes layer which is, which is going to be there and part of the NVIDIA's AI enterprise software which sits on top. We put all this together and wrap it in security. What you get, because this is built on top of a validated design, is that you can pick up the phone and call Cisco to get support. End to end is a great solution and it's the foundation of what we call an AI pod. You can buy that, you can deploy that with a partner and you can start off exactly in best practices state with some new AI info infrastructure.
There's another use case called Hyperfabric where we can actually take that one step further and manage that from the cloud for you from a networking perspective all the way down to the back of the server and the smartnic that sits inside the server. We can manage your fabric, make sure we're looking at things and avoiding challenges like congestion that you might have in a training cluster for AI type of use case. Last year we launched AI Pods here and we were very focused on inferencing use cases for the enterprise. I just wanted to call out, we can take that whole stack and we can now support the whole spectrum of things that you would want to do from an AI perspective.
We've got all the right equipment at the bottom and we've done all the work to be able to support inferencing, fine tuning as well as training type use cases with these AI pods and those types of designs. It's not just about the full stack that we're building from a compute networking layer. What happens above that is really, really important. If you think about where I started at the beginning, you have this virtualization transition, you have this AI transition and we thought it might be great to get Dan's perspective on some things we could do from a security and networking perspective by taking that Kubernetes layer that sits across both of those and it's kind of that common substrate that we're going to work with across both virtualization and AI. Dan, awesome. Thanks, Jeremy.
I'm excited to be part of Cisco and excited to be here talking to you about some of the work we're doing with the UCS team. To understand why we created Isovalent, I guess, what was it, eight or almost nine years ago now, I want to double click a little bit on this Kubernetes layer. All right. The important thing to understand is that effectively Kubernetes is a black box to all of your traditional networking and infrastructure devices. What do I mean by that? For those of us that have been in the networking industry for a long time, we've always known that an IP address is a meaningful form of identity on the network. I can know look for an application, I can know what IP address it is. I can program that into my firewall, I can program that into my load balancer.
I can use that to help troubleshoot connectivity problems, help troubleshoot connectivity problems. That means your traditional firewalls, load balancers, network observability mechanisms do not work for Kubernetes workloads. As your business runs more and more enterprise critical workloads in Kubernetes, this is of course a real problem. This is ultimately why we created Isovalent. Right? We created Isovalent to bring in rich virtual networking, firewalling and micro-segmentation, load balancing, network visibility, encryption and even runtime security to basically extend that intelligence into this Kubernetes layer. We did it with a technology called EBPF and Cilium, and Cilium, because of the power of EBPF, became the de facto standard for Kubernetes network and security. It was adopted as default in Kubernetes offerings from all three major cloud providers and was the only graduated Kubernetes networking project within the Kubernetes ecosystem. Right.
As a business we took that and built the Isovalent Enterprise Platform. Right. And delivered it to enterprise customers. What does the value of this platform mean for your business? Right. We'll dig into two things. First, we'll talk about and dig into a bit more for each team in your organization. What does it mean to have Isovalent extend that intelligence into the platform layer? The second thing is about enabling workload portability. Workload portability is a topic I hear more and more in customer conversations as they become aware that, listen, I need to have the flexibility to run these workloads where my business requires them. I don't want to be particularly beholden to any one place where I'm running these workloads.
We'll have two really cool demos, one about on the virtualization side and one on the AI workload side that dig into that in detail. First off, I want to talk a little bit about how Isovalent's ability to kind of break that black box open, what that means for different members of your organization. First off, for the security and operations team, they really struggle with Kubernetes environments because let's imagine they see a suspicious connection leaving a Kubernetes environment, the first thing they want to know is what application did that. Right. Isovalent gives them the ability to map that connection back to an individual application identity and workload. That information can be sent to the SecOps team. SIEM like Splunk Enterprise Security, they can use that for threat detection, incident investigation, et cetera.
The network security team now has a fully Kubernetes Identity Aware firewall solution, right? That can plug into things like HyperShield and Security Cloud Control, so that you have a single pane of glass for programming firewalls, both traditional device based firewalls, as well as, you know, the Isovalent Kubernetes Identity Aware firewall. On the developer side, Kubernetes environments can be really tricky. You have these complex microservices apps that are making API calls to each other. Remember, Kubernetes is spraying these containers over a whole set of workloads. If the app is slow or not working, it's really hard for your app teams to say they can't just SSH to a VM and run a TCP dump, right? How do they get that information about the health of their connectivity? Again, in an Identity Aware way?
Isovalent delivers that both in terms of application dependency maps, connectivity flow logs, and even Prometheus compatible metrics. We have also been working with the Splunk Observability team to make sure that if you're a Splunk Observability user, all of that pops as kind of value out of the box. With Splunk, we've also had two really exciting announcements in terms of what Isovalent means for the network operations team. First, we announced fully approved reference architectures for how Isovalent and Nexus and ACI can create a seamless interconnect for traffic that's coming in and out of your Kubernetes cluster. Kubernetes likes to kind of think of the physical network as being very dumb and simple and just all traffic leads to one segment in the network. As many of you all know, the real world is a lot more complicated than that.
This allows us to take particular applications and direct them this way or that way. It allows us to preserve identity in a way that can integrate with external firewalls. It really helps that kind of virtual and physical network play better together. The second thing from a network operation side is something that was just announced yesterday, which is the Isovalent load balancer. Typically in an on-prem Kubernetes environment, you need to run a third party load balancer in front of this environment and it's actually quite inefficient because as we said before, those third party load balancers don't really understand Kubernetes identity. They have to send the workload to some random Kubernetes node and the Kubernetes node actually sends it to the actual workload. We can do so much better with Isovalent and that's what we just announced today.
We have a very efficient layer 3, layer 4, layer 7 scale out software driven load balancer that can be accelerated by something called XDP, which allows us to use the NICs to provide even faster and more efficient load balancing. All of this value comes with the simplicity of Intersight management to install and lifecycle and troubleshoot and the confidence you get from Cisco validated designs. Each one of these has a really cool demo associated with it. We do not have time to show you, but there are a couple Isovalent demo booths right over there that I encourage you to check out later. The area where I will dig in a little more and give two quick demos is on this kind of vision toward workload portability. I think this. I guess I have 15 minutes to finish. All right.
I don't think every speaker gets an announcement like that, right? No. Anyway, the high level vision is obviously no matter where you're running your workloads today or where you want to run them tomorrow, Isovalent can provide a consistent network fabric to connect and secure those workloads and it can make it easier for you to move those workloads from one location to another. As we'll see in the demo. That's true for on prem infrastructure, Kubernetes and virtualization platforms. Right. It's also true in the cloud. Right. Because everything we're talking about here operates at the Linux kernel layer. That means you have consistent controls even when you're running on a cloud hypervisor.
To make it even more special and cool, right, all of these AI clouds like CoreWeave or Grok, right, many of them are actually using Cilium already, and they're all built on top of Linux. That means we can again connect and secure these AI workloads in a consistent manner and let you run those workloads wherever it makes sense for your business today or tomorrow. Let's double click into two demos, first on the virtualization side. I'll walk through the demo scenario first and then we'll see the demo. This is a VM workload portability demo where we'll show seamlessly moving 20 web servers from a VMware environment to a Red Hat OpenShift virtualization environment.
Now, what's typically complicated about this type of migration from a network perspective is you might have to take these workloads down, change their IP address, reconfigure firewalls, reconfigure load balancers. In many cases that will end up meaning you have to take the actual workload and application down, which obviously limits your flexibility. What we will show is that we were able to do a bulk VM migration, all while keeping that application workload up. At the end we'll see we still have the load balancer and the database on the VMware side, but we've moved all of our web servers over to OpenShift virtualization. Let's get that started. Here we see the VMware environment with both the web server actor VMs as well as the database. Is this playing? I don't think it is. Let's see. Oh, there it is. Okay. Right.
We have all of the web servers and the database and VMware to start. We'll then see the Red Hat OpenShift virtualization doesn't have any VMs in it yet. It's important to say Red Hat OpenShift virtualization. We're not containerizing these workloads. This is actually Kubernetes running as a VM. And we'll kind of be able to monitor this application that's hitting all of these different actor VMs and show that the connectivity remains all the way through this bulk VM migration. This side you're opposite. On this side we'll see the application. On this side we'll see the migration beginning to start. We'll see the VMs in VMware go down, we'll see the VMs in OpenShift go up. But all while it's happening, the website remains 100% healthy. We're seeing over time, more and more of those requests are being handled on the Red Hat side.
Now we can see that all of those workloads have been moved over to Red Hat OpenShift virtualization. They still have the exact same IPs and they're still using that exact same load balancer as before. Meanwhile, on the VMware side, we've now powered down and those VMs are no longer running there. One of the really cool things too as a bonus is that now that you've moved these workloads over to a platform running on Isovalent, you get rich network visibility into the communication between your workloads. This is something called our Hubble UI. You can see the load balancer talking, all the workloads, the database. You also get really powerful micro-segmentation capabilities. Here, imagine a scenario sometime in the future. One of those web server VMs gets compromised and the attacker tries to SSH to the database server.
You will have not just full visibility into this, but actually the ability to block that attack thanks to the micro-segmentation capabilities via the Isovalent platform. You will see that we fully log all of that. With our visibility, this could be reported to your SIEM and your security team could do an incident investigation and go remediate the attack. Hopefully that gave you kind of an exciting peek into one of the two scenarios. I'm going to get back to the other cool thing, which is that what we showed was a Kubernetes-based service solution there. Right. We're actually now working on extending this to any Linux-based hypervisor. Obviously Cisco has a great partnership with Nutanix. We're also targeting OpenStack-based hypervisors. We're working with Platform9 to do that. I think they've got a cool demo of this working at their booth. Right.
Ultimately our vision is again connect and secure consistently and eliminate friction from you moving workloads from one silo to another. That was the VM based workload portability example. I'm now going to show one of the more modern applications. These are workloads running in Kubernetes as containers. In particular, we'll do an AI cloud bursting scenario using Kubernetes both on prem and in the cloud. To start with, we have a UCS plus OpenShift plus Isovalent cluster running a bunch of AI workloads. It's almost at capacity. You now have a new team that comes in that wants to run a text summarizer. A text summarizer is an AI inference app that basically takes a big blob of text and returns a summarization of that text.
Right now of course, you've got a little bit of capacity left in your on prem, so if that team just wants to play with it, they can go deploy a couple pods, you know, using Ray, which is a common AI framework for deploying work AI workloads onto Kubernetes. Of course Isovalent will connect and secure these workloads. What if this team now has a big experiment they want to run? They want to try summarizing all of Wikipedia. That's going to take a really long time. With two tech Summarizer pods, this is where we bring in another startup called Alotl, who's done a cool open source project called Supernova that allows Ray to spray workloads across multiple Kubernetes clusters. In this case, we're spinning up a cluster in Microsoft Azure using their Azure Kubernetes service and we're running Isovalent in there.
Of course, again, Isovalent will connect and secure those workloads as well. One thing's missing here in Kubernetes: each cluster's network is essentially an island, right? These two sets of Ray pods can't talk to each other and can't act as one unit to go solve this problem. This is where Isovalent cluster mesh comes in. Isovalent cluster mesh basically creates a seamless span of connectivity, load balancing, security, and observability, effectively making multiple Kubernetes clusters look like a single Kubernetes cluster from a network perspective for any application running on top. In this case, that application is Ray deploying the text Summarizer pods. All right, let's see this in practice. Here is the text Summarizer app running. You can see you're putting in a bigger chunk of text. You get a short summarized text back.
For this kind of basic ad hoc usage, those two pods are sufficient. If we peek under the hood, we see this is an OpenShift cluster. We can go into the Ray namespace and we can see that there's a two pod text summarizer cluster there. This is the on prem deployment in OpenShift, right? You can take a same look at that through Headlamp, which is a commonly used UI for managing AI workloads. On top of Kubernetes, we see those two pods. Now we have our big idea, right? Let's summarize all of Wikipedia and let's scale up to a larger number of text summarizer pods. Here we now see Supernova configured with both an OpenShift cluster and an AKS cluster. We're going to use Cilium and enable cluster mesh. These things operate as a single span of connectivity between those two environments.
We're going to scale up using Supernova to deploy up to 16 replicas. Right. Now this means that we have a whole lot of text summarization capacity. Right. We can go back to Headlamp and see Headlamp is kind of giving the simplicity of there being a single, a single 16 node cluster, even though in reality we know there's two different clusters underneath and that 14 of those 16 are actually running in the cloud. Now this gives us all of the power we need to churn through all of those Wikipedia articles and complete our experiment. Cool. Hopefully those were two kind of fun and exciting examples of how workload portability can really help you make changes inside of your business to deliver more value to your application teams. With that, I think I'll welcome Jeremy right back up.
All right, go ahead. Oh, that was great, Dan. Really appreciate it. I thought the flexibility that you're bringing in choice for customers to run their applications wherever they want to is. Yeah, it's a super exciting world we're living in. You talked about security, we talked about eliminating silos. You just showed it to folks. Very cool. The migration without disruption, really important. I think it's going to be great for our overall ecosystem as we continue to build out solutions that span the hardware, the software and Isovalent as well. Eliminate complex with visibility. Right. That's no matter what you're looking at. Your console, our consoles, we want to give visibility across the entire clusters that we're building.
You talked about it from a hardware perspective with things like our storage platforms and choice of storage partners as well as software layer, which I love the fact that Isovalent brings us into that step above the hypervisor type of world that we can start bringing our solutions into and being able to provide that control there. I also want to thank you all for being here and hanging in until the very, very end of not only the last session here, but sounds like we only have like 10 minutes left before they're going to ask us to get out of the room. Security guards come in. Right. If you want to learn more, there's a ton of information. Cisco University, AI Pods and you can take a snapshot of this here. Real quick. I also encourage you to check out isovalent.com labs.
There's a bunch of great hands-on labs that you can just spin up and play with this stuff. Whether you're new to Kubernetes or you're a Kubernetes expert, there's kind of content at all levels there for you to learn more about what Isovalent can bring from a networking and security perspective. There's a really cool download on an EBPF book. Oh, that's right. In fact, the author was right over there. I read it on a long plane ride. All right, with that, don't forget to fill out your session evaluations. Thank you again and enjoy the rest. What's left of it. Let's just go live. All right, thanks folks. Hey there, everybody. Thank you so much for being with us here on the live broadcast. We want to welcome you all back to the Cisco TV studio.
We just wrapped a fantastic session around Cisco data center built for AI, ready for everything. We got some fantastic storytelling from Dan Wendland and from Jeremy Foster about the kinds of pressures that IT leaders are under today. You know all about this, right? Rising costs, fragmented tools, that race to be AI ready. It is all forcing us to fundamentally rethink our infrastructure strategy. That is why Cisco is out there redefining the data center, right? We are taking this unified approach that you just heard about. It simplifies complexity, it accelerates innovation, it puts you in control. Others are out there building boxes. Here we are at Cisco building that foundation for the future. As Dan and Jeremy just walked us through, it is about intelligent cloud managed operations that let you deploy and monitor and optimize your infrastructure from anywhere.
You can do it at any scale. Another great session. We have so much great content still headed your way. We want you to stay with us here on the live stream and again, make sure you keep reaching out to us on social media, any platform you like, whatever your favorite is. Share your posts, your thoughts, your ideas, your inspirations, what you loved. Maybe something great that you heard from a speaker or from one of our leaders. Just make sure that you include ciscolive and ciscolive so our incredible social media team sees every one of those posts. Right now, Rob is out at the AI ready data center in the world of solutions. Hey there, Rob. Hey man. How is it going over there in your part of the world?
We're having such a great time up here and I'm telling you, the energy is still dynamic, as I just said to everybody viewing so much more still to come this afternoon. I'm glad you're still our eyes and ears back down there in the showcase. I'm going to introduce, I just want to preface this real quick. This is Chris O'Brien. I'm going to talk to him in just a moment. I just want to say the show floor is shutting down here in a couple of minutes. I think we're going to get the very thing that we talked about with regards to how maybe not as fun as it was 10 years ago or however many years it was, but I've already started to see how they're going to be maybe moving a lot of stuff around us.
There's probably going to be some public address announcements. To our audience, deal with it. We're all going to be okay. We're going to try and deal with it here as we're doing it. Chris, you get the challenging final interview of the day here from the show floor as they're literally shutting it down. The best right now, AI pods. What is new this year? Can you kind of explain how you guys are demonstrating the power of these things here? Yeah, sure.
You know, I guess about six months or so ago we launched AI pods and it was really born of our Cisco Validated design program, which means we took, you know, Nexus Networking, Cisco Compute, and then we worked with our ecosystem partner channel partners to really develop a complete AI solution for inferencing and we launched the AI Pod concept. What we've done this year at Cisco Live is actually extend that. We now have AI pods for training, AI pods fit for fine tuning or optimizing models. I think the key point is, Rob, is that what we're actually trying to do is allow our customers to build an AI infrastructure structure that's fit to the purpose they need, right? They can flex it and size it properly.
What we're doing here on the show floor is we've got our Splunk Observability Cloud, right? I mean, nice dashboard views. You know, you can see the whole AI Pod. You know, you can see the NIMs, you know, time to first token, the database. It shows you at an all-in-one view. You get a great view. I think the other thing it does is it actually lets you see all the elements that are really involved in building a true gen AI solution. You know, from the LLM to the networking to, you know, the applications itself, I think, you know, it's all shown here on the AI Pod. This is live. This is based out, you know, we're in San Diego. Yeah, yeah, I told you that would happen. Yeah, I did. That would happen.
You know, we're here in San Diego. This pod is actually sitting in North Carolina and the players are building cars, you know, racing against each other, you know, and AI is not only doing computer vision, watching them build, but commenting on how it's coming along and then ultimately judges them on their work. Yeah, tell me how that works in terms of. Because you're talking about you're building on a pre built model that has to be fine tuned for a specific use case. But there's some difference between what the package is that we kind of provide from a turnkey situation to the application actually being put together for the customer or by the customer. You've kind of done that here.
Can you explain what's going on behind us here with it looks like they're building something with the Legos, maybe break down the steps and how those become data input sources for the AI? Yeah, sure. You know, the way this is built is we have three different models in play here. We're using Riva, NVIDIA Riva text to speech services. Sorry, long week. That's all right. You know, a diffusion model as well as a Gemini judge, a model to judge their work. As they play, every two seconds it's taking a picture from the camera, sending it to our model, model processes it and then provides input on how they're moving along. Ultimately at the end of the game, we push a button to judge and we send it to a different model that actually looks at the end result.
That model is actually trained with a number of different Legos to say this is what a car should look like. This is what a plane this way. It is just like any enterprise IT, right. They might get a foundational model, but you really need to tune it or you need to augment it with your own data to really make it relevant to your business and try to achieve those outcomes. In our case, we are trying to let people have fun. You know what I love, so Pat, if you swing down here at the end, this is Amy. I know Amy and she is highly competitive. If you notice, even though I am talking about her, she is not breaking stride. I have seen her digging through here. She is trying to get them out. Darn.
Don't care about audio here. We're gonna find the right piece. Yeah, we're gonna get it out. We're using obviously the Vision application and it's feeding in. We have to process this. That is exactly what has to happen for our customers. I feel like. Do you feel like we're actually able to set a foundation that says is let's simplify as much as we can for you on the base so that you can then customize on the edge, so to speak? Yeah, no, I think, actually I think it's not just Cisco that's doing that too. If you look at what like NVIDIA does with their blueprints, we work closely with them on our Cisco validated design and on AI pods. Yeah. You know, ultimately on the Secure AI Factory. It's really about simplifying the journey at every layer. Right.
At the infrastructure. You know, in your AI deployments. Right. Have things that are, you know, fully vetted and supported by large vendors such as ourselves. You know, allows the customer to worry about, well, what am I going to do? And maybe that's the biggest issue. Right. What is the use case you want to address versus, you know, we're trying to remove the how and, you know, and the complexity. I was just talking with someone earlier about that is the fact. I think the, the hard part is, is that we're all trying to keep up as best we can. And then there is this element of I'll have you come over here if you will a little bit so we, we can face camera. No, that's okay. This is relaxed. This is the last bit of the show. But you know, it's.
How can we help customers maybe decide on what that low hanging fruit is for? Things that will add value to the business. I've worked with several partners that are assisting with customers on this type of thing because it's usually about, you know, is it demonstrable? Can we measure it? Can we execute quickly, you know, without maybe invoking a bunch of challenges? Challenges with, you know, convincing different silos of the business. That is still maybe a challenge to overcome, you know, that may require. This is all kind of driving new shifts and even how we run our businesses. I think. Yeah, it is. I mean it's certainly given new opportunities. I think, you know, part of the AI Pod story is actually standardizing how you approach AI. Meaning you shouldn't create silos within your data center. Right.
You shouldn't create silos, hopefully not creating because every time you customize something costs you more and more over time in terms of people and training because you got to maintain that thing. If we can allow them to standardize, if they adopt this approach, if they adopt Cisco, you know, you manage your network through Nexus Dashboard, you manage your compute through Cisco Intersight. If you're an NVIDIA, you're using the NVIDIA, you know, NVIE software. We put them all together. We don't introduce silos, we just allow them to introduce solutions. Actually my impression this week is that we've been doing a good job of starting to connect our own silos because the way we're rolling up to single management platforms.
Yeah, no, the G2 effect has been in full force this whole week and I didn't realize, I didn't, you know, and I bet I've talked to people to verify this because I don't have any connections. I'm not, you know, privy to any information anyone else doesn't have. I'm not sure about that but I get a few opinions here and there and the overall feeling I've got is that wow, this is real. I'm very impressed with. Yeah, I can't imagine what the stress is behind the scenes with everybody trying to get ready for these shows. It's already been stressful historically because it wouldn't it be. Then on top of the volume that, that we are now churning stuff out as we all work at AI scale, so to speak.
Yeah, I mean from the compute business unit, if you look at our portfolio from a GPU perspective, the dense, you know, now we have a dense GPU platform. Now we have a, you know, a PCIe enabled eight GPU platforms. We are now able to handle inferencing and training. You know, just our portfolio, portfolio has grown but now you're asking more, you know, in terms of management and software and there really is a general motion, I think across Cisco to have cross architecture across solutions because that's the reality of the situation. On our team, you know, we work with our partners, the NetApps, the Pures, the Red Hats of the world, NVIDIA, of course, to really bring, you know, true solutions, full stack solutions. That's what it's going to take. I'd love the examples you guys have done.
Chris, thank you so much. I appreciate your time. We're going to go ahead and depart with now though. Let's just go back to the studio with Steve. I appreciate that, Rob, as always, and thank you for pushing through here with the end of the, not the broadcast day quite yet, but certainly the end of the show day down there. Chris, fantastic job. Thank you for holding it all together, keeping the energy high and the passion deep as we wrap things up out there on the show floor. Fantastic information. We're so glad to have you with us here in the studio. I'm Steve Molter and joining me right now, now for this next segment, I've got Bill Gardner, Senior VP and GM of Optical Systems and Optics here at Cisco. Thank you for coming in. Spend a couple of minutes with us.
Thank you for having me. Happy to be here. Yeah. You've had a great show. Has it been wonderful for you? Some awesome show. Have you stopped running even for a minute, or is this it? Is this relaxing thing you've done? This is the first time I've sat down in a while. Yes, we're glad to provide that opportunity here for you. All right, so Bill, here's what I want to do. I want to kind of start at a foundation, right? Explain to us the role of high quality optics in enabling AI workloads. Because I think a lot of our Cisconians all over the world, they don't necessarily understand, they don't know exactly how optics are contributing to AI. Let's start there. Okay, that's a good place to start. Let me just actually show what we're talking about. This is a pluggable optic.
This one happens to operate at 400 gig, but we have a full portfolio of optics. Optics, everything from 1 gig all the way to 800 gig for AI. What we know is that especially for training applications, there are thousands of these optics that are deployed in the training application. Even if we consider inference applications, which are generally going to be smaller AI infrastructures, there is going to be at least hundreds of these optics deployed in these infrastructures. What that suggests is that optics becomes a very relevant part of the AI infrastructure and we really need to pay attention to this. As we start offering these optics to our customers and as customers start consuming our AI infrastructure, optics becomes a big part of that. Very cool.
A lot of that is behind the scenes, meaning even though it is creating benefit, a lot of people do not know that it is out there. I think, I feel like we have so many of those layers within the infrastructure. As we continue to make that push toward AI, whether it is on the agentic side or whether it is in the data center, there are all of these pieces that are vital to creating the infrastructure that people may not know that we have to work with in order to create that bandwidth, that power that we are going to need to run all these new applications. Very, very true, very true. I often consider optics in some cases like an accessory that people are not really thinking about. It becomes a critical part of the infrastructure for AI.
When we look at AI and we look at optics in general, more specifically for AI, there are really three things that we have to think about. One is latency, one is power, and the other is reliability. Reliability is probably the most important thing that we really have to think about. I love that. I love that. Let's continue more on the reliability. Right. What kind of challenges when it comes to reliability do people look at when they deploy their AI workloads? What are they up against and how do we solve for that? We have, I think, gotten some really good and unique insights from our hyperscaler customers about the need to have very high reliability in the optics and the networking infrastructure. That is part of an AI infrastructure.
The reason is that they may deploy thousands of GPUs, which are connected with thousands of optics. These GPUs operate in parallel. They're processing data in parallel. That's the power of a GPU. If an optics has a burst of errors or a link flap, the AI workload will have to stop, back up and restart. That is a huge, huge problem for our customers because it's waste. They've paid for these very, very expensive GPUs, and now these GPUs are either idle while it is backing up to a safe point, or they've wasted hours of workload time in trying to process a workload and then having to stop it and restart it. The reliability issue for an AI infrastructure is much more critical than a typical classical networking application.
In that application, you can think of Gmail or video or a voice call that's taking place over an Internet application. If there's an error in that, TCPIP tends to bridge that for us and hide that. As a user, you don't really see that networking in the classical sense is pretty forgiving in the face of errors. In the AI world, it's a very intolerant atmosphere for any types of errors that we might see due to optics. That really drives a need for very high reliability in the optic. Yeah, blame the Network takes on a whole new, a whole new meaning in that case, doesn't it? If it hides the problem and uptime is everything. Talk to us a little bit about Cisco's approach to optics, but specifically how does it align with the growing demand for AI?
The workloads we were talking about a moment ago, especially next gen data center requirements, because it's been such a core talk, especially from the keynote stage and then all across the conference this week. We, I think, are very cognizant of this need to have very high reliability in the optics that we deliver to our customers. There are really three things that we do to address that. One is that we use a technology called silicon photonics. Silicon photonics is basically silicon technology like CMOS that we use in our switches and routers, but it's switching photons instead of electrons. What happens then is we can take discrete optics components and replace them with a photonic integrated circuit. That helps us to drive much higher reliability, almost silicon class reliability, very high reliability. That's one thing.
The second thing is we design for reliability. That means that we know different components have different tolerances. We design for all of the worst case tolerances to be combined. We make sure that at design time we're designing for all of the worst case conditions. That includes things like temperature variation that might occur in a data center. It includes things like voltage variation that we might see in a host, a router or a switch. It includes things like different timing that we might see on that host interface. All of these things can contribute to, in a classical network world, an error that might occur. We make sure that we've designed for all of those to be worst case. Finally, we take the optic through a qualification process.
It is the gold standard of the industry in terms of the testing that we do to make sure that the optic meets our expectations in terms of quality. For the optical interfaces, the electrical interfaces, are they compliant with industry standards? We test it in an environment that is a worst case environment. We vary things like temperature and humidity and voltage and timing so that we know that in the event that we find this optic in the worst, worst case situation, it's going to work for our customers. That's very cool. Nobody else in the industry does that. I love that story too. When we can talk about the unique Cisco story, let's put a button on all this. 10 seconds.
What do you want people to remember and what do you want them to think, when they think about optics, I want them to recognize that whether it's an AI application or a classical networking application, Cisco is going to be delivering the highest quality and reliability for optics in the industry. So when you think optics, think Cisco. Amen. I love that. Bill Gardner, thank you so much. Thank you for taking the time to join us here in the studio. We truly appreciate it. We're going to leave with that shot. I love it right there. Thanks very much, Steve. Thanks. We have got one more great Center Stage session on tap for you. Connecting and securing critical infrastructure with industrial networking in an AI native world. We've got Samuel Pasquier with us, our VP of Industrial IoT on deck.
He's going to lead a fantastic panel and we've brought in some great partners and customers. The Walt Disney Company, Planet Farms has come in on this one. We're working and we're living in this era where AI and automation are redefining industry landscapes, operational landscapes. They're right at the heart of that transformation. Critical infrastructures, manufacturing, utilities, transportation. We're about to hear all of it. We're going to send it out to Samuel at Center Stage right now. Enjoy. We'll see you here on the flip side. Thank you, Emmy. Thank you all for coming. It's very nice to see all of you and we are very excited to share a lot of very good news with all of you today. It's all going to be around AI.
You've seen the team of Cisco Live this year, and we have quite a few things to share with you. When we talk about AI, you have seen, I'm sure, this curve of adoption of AI. The thing that is very fascinating is that now it's really the time around physical AI and what does it mean to be physical AI and how can we, Cisco, help you to do that? What we have done is we have been into industrial network. You know, Cisco's it, but we have been in industrial network for the last 20 years. We've done a lot of things. Recently we decided to ask you, our customer, what is top of mind for you? We did a survey, we asked thousands of our customers, what are the big trend?
One of the big trends was obviously that AI will have the greatest impact on your infrastructure, your industrial infrastructure. We really took that to heart and see, okay, what can we do to help you to deploy those AI use cases and what are those? With that, let's start a little bit with more the vision of Cisco. Now, where do we think things are going to go? We obviously think the assets endpoint are going to get smarter and smarter, but we also believe with the amount of resources that those endpoints will need, you're not going to be able to run all of that on each endpoint and run that on the limited resource capacity that you have.
We think slowly the software will migrate from running on the plant floor or on the field will run in a location that is more natural for software, which is the data center. Where you have more elasticity, more CPU, more memory, more GPU to be able to do all your learning and your AI. If you look at those two, what do you have in the middle? In the middle you will have the network. If you take an analogy, we want to believe that if you look into the future, what will happen is the asset into your plant floor will be the arm that you have on your body. The brain will be where you run the software. That will be the data center and the network will be the nervous system. That is where we believe things are going to be.
Obviously we, Cisco, are investing to make sure you have the best nervous system, you have the best critical infrastructure, industrial networking, to be able to do that. Let's look into what are we launching this year at Cisco Live. When we talk about AI, there's really two things. There is what we call AI tooling, which is how we, Cisco, are using AI technology to make your life easier and provide more value to you. That is one aspect. The second aspect is how we can provide you the tooling and the infrastructure for you to run AI use case. Let's start with the first one, AI tooling. What do we do with our technology that make your life easier? The first things that I'm going to cover around that is around security as your things, your infrastructure is getting smarter.
You need to connect more and more assets. The key thing that what we've heard from you is you want to do security. To do security you need one of the key technology to put in place is to be able to do segmentation, to be able to separate who is talking to who so you can limit and reduce your attack surface. That's something that a lot of you have been telling us you want to do. A few years back we invested and we created a tool called Cyber Vision. What we do with Cyber Vision is we run directly on the network. We can inspect all the different assets that you have and we can give you an inventory of those assets. The good news is it works very well and we can find everything.
The bad news is that you get something like that, you get something for example, that's a real example of one of our customers that show all the assets that they have on their plant. This is complicated. There's tons of thousands of assets, millions of flow. That's what we can produce. Obviously if you have to take this insight to be able to put segmentation to create your segmentation policy, this is very complicated. That is where comes AI. With AI, we use algorithm, we use clustering algorithm that we can look at all those assets that we can discover. We can understand who is talking to who, can understand who they are. Cyber Vision now can recommend you group that you can use for your segmentation. That is where we leverage AI clustering technology to give you recommendation on how your network should be segmented.
This is a brand new functionality that is available now in Cyber Vision. I hope you like it. That is one example of how we at Cisco leverage AI technology to make your life easier. Let's go to the second bucket, which is what do we do to make you able to deploy AI use case. First, let's start to understand when we talk about critical infrastructure, what are the AI use cases that we have seen? We have seen quite a lot. The first one I would say is machine vision. We have seen our customers using more and more camera to improve automation in their plant. Camera are used to do visual inspection for quality, they are used for traceability of things. All those cameras are connected, they are connected now to the network.
The challenge and what we see is we have seen an increased demand in POE, power over Ethernet, in industrial infrastructure. We've seen an increased demand in bandwidth on those switches. That is something that we want to address. The second thing that we have seen is an increase of AGVs and AMR in your infrastructure. I even talked to a customer earlier today. They have a data center where even in the data center they want to have AGVs that can run around to do things instead of humans. We see a need for more customers to connect those smart assets through a wireless technology. Those wireless networks need to be highly resilient. This is also something we need to address and what you will need to address if you want to deploy this kind of technology.
The third thing that we have seen is a new concept of software defined automation for the manufacturing manufacturing wall. What it means is instead of having a big PLC to a small PLC to run your plant. What we've seen our customer wanting to do is have bigger PLC, less PLC that have more compute power to run the automation. We believe if we look into the future, the PLC are slowly going to start to be virtualized and running in a big bigger form factor, either regularized PC or directly into a computer room. What it means on the network is we start to see the need in your industrial environment to have fabric to be able to connect different assets. We have seen the advance in AI robotics.
The robots today come with a big controller that run in a box next to the router, next to the robots. We see our customer wanting to disaggregate the software that controls the robot from the robots and that something is going to be moving out of the plant floor to run on a server. Once again you will need a network that has a requirement to have low latency to be able to enable motion timing into your infrastructure. Last but not the least, it's all about data and collecting data. We have done a few that are collecting tons of data on the assets they are building. Last year I was on stage with one of our customers, they were collecting 10,000 data points per battery cell they are building. That gives you an idea.
The requirement on the network is how to connect this plant floor all the way to the cloud infrastructure. To summarize, what do you need? You need an initial network that is performance, that is resilient, that has low latency, that has built-in observability and is able to enable those new use cases where we see happening in the industrial world. Today I am very happy to be the spokesperson representing an entire team at Cisco, the industrial IoT team that has been launching two new big innovations. The first one is around three switches, industrial switches. We are going to go a little bit in the detail. The second one is all about industrial wireless. Let's start with switching. We are launching today at Cisco Live 19 new industrial switches to help you connect your industrial environment.
We have DIN rail form factor, we have table mount form factor, IP67. We have this new small one. I set 100 ruggedized to connect robots. We have small DIN rail, we have rackmount, 19 new products that meet those requirements. Flexibility of power net, more bandwidth, more capacity. Fabric built in, observability built in with ThousandEyes. The products that we are launching, we have now the biggest portfolio ever in industrial switching. We are expanding the portfolio to cover all those new use cases. We are very proud of that and we invite you to go and see it on the World of Solutions where you can see live demo of all those products. That is on the industrial switching, but obviously we also have news on industrial wireless.
What we are announcing today at Cisco Live is a unification of Wi-Fi technology with your Cisco ultra reliable wireless backhaul technology. What does it mean? It means that today what you have is you can have a Wi-Fi network in your plant and maybe an industrial network for your AGVs and AMR. Starting today we are announcing that with your Cisco wireless controller you can now run Wi-Fi and URWB together on the same single AP at the same time. You can do Wi-Fi to your endpoint and you can do URWB to your AGV and your assets that are on the move. This is very exciting for us. By the way we have that live in demo in the world of solutions. Very excited to see that. To summarize what we are launching, we have three things.
New functionality into Cyber Vision to simplify your life and help you to implement segmentation into your initial environment. Nineteen new industrial switches built to help you enable your AI use case. More POE option for machine vision. More low latency, more performance fabric enabled. ThousandEyes built in a lot of new functionality. Finally, Wi-Fi and URWB coming together. We are super excited about it and you know I can keep talking about it, but instead of me talking about the technology, we thought it would be much better to bring customer to talk about it. I'm very happy to bring three person on stage. The first one is Dan. Welcome Dan. Dan come from Disney. You may not know, but Disney is an amazing technology company. Thank you Dan. Second person, Massimo.
Massimo is coming from Planet Farms and thank you for joining us, Massimo. Finally, Keith, my coworker from marketing. It's okay, Chris. We will try to show you how from Disney to Planet Farms, our technology can enable their business. Let's go, Keith. Perfect. Perfect. We look forward to getting started to thank you all for joining us. You know, first of all, I'd like to say to Dan and. Where's Wendy? Where's Wendy? One year anniversary yesterday. This is the kind of customers that are willing to come to our event. I think it says a lot. We appreciate it, appreciate you. Thank you very much. Dan, this first question is for you.
A lot of people are familiar with all the Disneyland and Disney World, but there's so much that goes into the operational complexity and safety of those parks. We'd love to just get a little bit of color on how you tackle it. You know, some of it's about theming. When you take an attraction and people always ask me, like, why do you call it an attraction? It's because it's really a sum of some parts. You see one of our ride vehicles here in testing and then you see it once it's themed. We take the ride components, the show components, make an attraction. I think theming is very difficult. The technology behind the scenes, we really run one of everything in our attractions. Lighting, audio, special effects, animated figures, just to start right there.
Getting those all to work in time, at the same time, all the time is a challenge. Fantastic. Thank you. I think the next question is for Massimo. I think we're going to start with a video. What you're going to see is, I'll tee up, this video is Planet Farms has some amazingly innovative agricultural technology that they'd like to share, starting with the video. Three, two, one. Sam, terrific. I think the question that, you know, probably most people have on their mind is, you know, we're all a little bit familiar with agriculture, but what are you guys doing that's so different and innovative? We can say that Planet Farms is really changing the way that we think about agriculture in general. It is also setting new standards for the vertical farming. It starts with a radical change of perspective.
Attention Cisco Live exhibitors and attendees. The world of solutions will close in 15 minutes. We're still open. We can still from horizontal scaling, where basically in order to grow more, you add more land, to vertical scaling where production goes upwards in layers and in climate controlled environments. Another thing that Planet Farms is doing is a shift from a supply driven agriculture where basically you can grow whatever you can and wherever you can, is basically limited by the climate side cycles, by the weather conditions, by the geography, to a demand driven agriculture which is basically free from all these limitations. One thing that we do is that to grow everything in a clean room, which is not sterile per se, but has a very low level of airborne particles, and that basically allows us not to use any pesticides.
No consumption of soil, no consumption of water, 95% less of water compared to regular and traditional agriculture. This is the first plant in large scale that we built just right outside of Milan. It is industrial scale and it is fully automated. This one is just right outside of Comb Lake and is twice as big. Two hectares of production in vertical fashion. It represents the state of the art positioning, planning farms as one of the largest players worldwide. What comes out of these production plants is something fairly unique. The client is the first one to open the bag and touch the product. Everything else is completely automated, from seeding all the way to primary and secondary packaging. The product is not washed, ready to eat. This is something that is so new and so novel, even the regulation needed to follow.
We had the two work on derogation law for quite a while. The Italian law and the European law needed to be changed. Perfect example of how technology can drive good policy making. Massimo, you're telling us that now the salad is growing, growing in a factory with robots taking care of it to feed humans? Yeah. The other thing that I'm saying is that it is incredibly tasty and incredibly good. Yeah, I'm sure. I'm sure no one is touching it. Fantastic. Fantastic. That's terrific. Dan, this next question is back to you. We hear a lot about IT OT collaboration and the impact of that on modernizing these critical infrastructures. Love to hear some of your best practices that you are working on with your team. Sure, sure. Bringing the technologies together that are OT and IT require a breadth of knowledge.
We have many teams that work on these attractions together, some from a heavy IT background and some from a heavy OT background. To get all the technologies to work together is easily obtainable. How you monitor those systems required some thought. How I look at something from a network engineering perspective might not be how someone wants to look at the network who is actually operating the ride. Some of that came in us collaborating and being very transparent between the groups. I think probably the best thing that we did was we got everybody together, all the engineers, and we all went and took training together. We picked very specific training. Some was a little bit more IT, some was a little more OT and made them all sit in a room and ask questions and learn at the same time.
You really got to hear where people were coming from as they're learning together. It's more like a partnership than convergence. Is that true? The first time you do it, it's not going to seem like a partnership. It gets there. It gets there. Yeah, definitely, definitely gets there. That's what the most important train together. You know, Dan mentioned there were a couple of dozen Disney folks here at the show. Who in the audience is here from Disney? They're lying. No one in the back. I see one over here. They're all at the bar. No, they are. Thanks for coming, Brad. There we go. Perfect. Perfect. Massimo, this next one for you.
I'm happy we got this far into this session without saying the letters AI, but everybody really wants to know from both of you, I think a little bit about automation and artificial intelligence and how they're being implemented today to improve operations. We use AI, machine learning and automation throughout the entire process at production and growth level. The way that we manage the level of lights, the nutrient flow, the irrigation, we decide with machine learning when it is the best time to harvest or if there are any, let's say, problems during production. We take this concept further. We use this kind of techniques also at the entire corporate level. We take AI and machine learning also where typically they are difficult to filter through.
The legal department, the production department, we have a system that basically integrates a collaboration side of things with the automation side of things. Robotic process automation, intelligent process automation. We do also a large use of vision machine learning. All the production site is governed by 3D cameras that collect terabytes of data daily. We use it to feed the machine learning models so that they can make predictions to help better decision making at the later stages. We have literally hundreds of these robots. We have built them over the last five years. You can see this as an early embodiment of what we now call agentic AI. We have been sort of pioneers in this area as well.
Basically what we can say is that this is what Planet Farms at the end is all about, is agronomic knowledge that is driven by a digital platform in order to make food which is tasty, good, healthy, sustainable and also reliable and as local as you can get. You are really saying like, you know, when people eat salad, they think it's farming, but here it's really a technology producing food. Right? Helping you to as technological as you can get, you can. We have four different tech departments in the company. Without compromising on the quality, that's pretty amazing. Absolutely. The quality is the first thing that we look for.
One of the next questions, and Dan, we'll start with you, but this was really then for both of you. One of the announcements of the show, I think Samuel shared, was the integration of Wi-Fi with ultra-reliable wireless backhaul into the same access point, same management system, something we've been super excited about in terms of industrial applications. Dan, we'll start with you about, you know, how has this helped your operations, and then we'll ask the same question for you. There's a lot of requirements on the ride vehicles themselves. I know you're going to show a slide of hostile environments, but you don't have the vacuum of space, so there's that. Leveraging both wireless and wired on a lot of our ride vehicles at the same time. This is the Millennium Falcon Smugglers Run, and it's a full motion simulator.
It shakes around quite a bit. Anything with an RJ45 connector doesn't stand a chance. We used a hardened version of the switch with the M12 connectors here. You know, for a product person, it's always amazing to see the picture, maybe in nature. I have a picture of this in my wallet. Oh, man. Actually, no, not the. In your backpack. This is test track here. We just worked with the team last year. You announced it, right? I can talk about it just like just today you did. It's the first time. You can tell I was listening. Yeah. The product is already at Disney. That's fantastic. Yeah. We have one of these. Brad has the other one. Yeah. This is under the hood of the test track ride vehicle.
Makes me tear up how happy I am to see that there. The IE3100. So successfully going around the track in Orlando. Of course, to your point, Keith, one of our most technically complex rides is Star Rise of the Resistance. Leveraging Herb on the ride vehicle and on the wayside. If you have not ridden this, make it a point. That sounds fantastic. I think, Massimo, same question to you. How is this technology used in your operations? We want to see the hostile environment. Yeah. We use the URWB technology throughout the entire production site for the AGVs in mobile mode. Here is the famous all style. Really pull. Peaceful. It is not so peaceful for the waves that must go through all this vegetation. Consider that the AGV with the cameras that must go through all that.
It is a very dense grid of steel that those machines must travel through. Let me say that those are quite big, 30 meters long. We have 16 of those in our largest plant. There is also water then. No lasers, no, no lasers. Eventually, at the cutting stage, full of humidity and potentially low temperatures. Obviously, we have done this before. We use the URWB and we had the usual typical problem of latency fluctuation disconnections. We can say that now with the Cisco solution, we are giving the IT operations teams some peace of mind. For that we thank you. I'm sure the team is very happy to see again the product in their real environment. It's always amazing. It was quite.
We designed the product peaceful and also we can never think about all the space it's going to go. That's the meaning. Thank you so much. Perfect. The last question we have, Dan, is for you. At Cisco, you know, we always get questions about our roadmap and I bet the audience here would love to know about the road map of Disney and what we can expect to see next. Sure. I got a video. It is time. Hello Disney family. Hello, D23. This is a magical day. The ultimate Disney fan. Eventually. Welcome to the new Disney Experience Showcase. I've got a lot to share. V23, a new nighttime parade. The first Coco ride to a Disney park. There's a famous house you may all recognize. A major new Lion King attraction. A whole new location on Pandora.
Excited to bring an Indiana Jones attraction to Disney's Animal Kingdom. Will there be any snakes? Two new cars? Attractions. Let's get her done. For the first time, an audio animatronic figure of Walt. It's Billy Crystal. The first Monsters Inc. Land. First suspended coaster ever in a Disney park. This isn't Blue Sky. We are gonna do all of this. We are doubling the size of the bench. We're adding four more cruise ships to our fleet. We are building a Villains land at the Magic Kingdom. Do you guys mind singing something with us? Plain White T's, everyone. Jabuzzi. Susan Egan, Rita Ora. Are there any gamers in the house tonight? Coming to Fortnite is Dr. Doom. This is just the beginning. There's that. You might have heard about this as well. Building a new park in Dubai.
When it comes to the networks, what's new for what's next there? There's a tremendous amount of data to be had with all of the ways we monitor, correlating that data, making it useful to not just the networking team, but to the business as well and predicting failures. I think that's going to be something we focus on this year. That's fantastic. I will let Samuel express my appreciation as well for you both participating and for all of you in the audience if you have a chance to visit. Maybe not now because we're time but the industrial section that has a robotics demo based on AI and quality inspection. Other great things here of the show. I'll pass it to Samuel to thank our guests. I want to say thank you to Dan and Massimo.
You know, sharing your story and how we use our technology. It's amazing. Like I mentioned, you know, as product manager, as engineering team, we think about product, we design product, but it's hard sometimes to even visualize where it's going to go. So it's amazing. I hope everyone has a chance to try the food products by robots. Right. Using technology. Definitely. We want to see that. Thank you, Dan. I think everyone, you know, I'm glad we're not in Orlando, else we'll lose all the room. Tomorrow they will be at Disneyland, they won't be at Cisco Live. It's very, very nice to see how you use technology. For all of you, like Keith was saying, please visit the world of solutions. If you're online, go online.
We'll have plenty of video and we will have a lot of announcement coming for the rest of the week. Please read those, go in the detail, ask us questions and have fun. Thank you very much. Thank you. Hello my friends, Cisconians all around the world. I can't believe it. This is the final time that I get to welcome you back to the Cisco TV broadcast studio for Cisco Live 2025. We are headed into our final segment here from Cisco TV. We have had an amazing time with all of you this week. We've just come out of another fantastic center stage session. We've got two final value driven pieces that we want to share with you here today before we wrap up our show day. First we're going to go to an interview with Samuel Pasquier who just led out that last center stage session.
He sat down with Jorge Ramirez from GM. They had a chance to talk about the industrial IoT space and the great innovations and product advances Cisco has been able to give to GM to help them reach their better outcomes and their goals. We also have a terrific conversation between our very own Lauren White and the fabulous Jeetu Patel, star of both of our keynotes here this week. One of the most inspirational leaders we have. Stick around for that conversation right now. Let's start out by sending it out to Samuel and Jorge. Hey Jorge, very nice to have you with me today. You have been a prominent leader in creating and pushing for IT/OT partnership at GM.
Can you share with me a little bit some of the challenges you have seen and the strategies that you use to go over those challenges? Yeah. First of all, thanks for having me here. Yeah, I mean, challenges for us in the manufacturing space, a lot of it has to do with legacy. Right. We are trying to blend legacy and new equipment. Right. Also, part of the strategy is how do you bring in new technology that not only addresses the new equipment, but helps me address, you know, the old legacy equipment that is sticking around? Yeah, that makes sense.
You know, I know you have been using some of the Cisco technology, so how did that help you to solve those IT partnership challenges that you have to face and how you're trying to bring this technology on your floor? Yeah. One of the things we're doing is we are leveraging a lot of your applications to give us visibility into the OT space. Right. Prior to this, we really did not know. Right. We knew we had problems, we would go reboot a switch, but we really never solved the issue. Now we leverage Cisco Technologies to not only investigate the switch, but actually see what went wrong so that we can fix it permanently. More visibility. More visibility. Absolutely. Awesome. That's fantastic. Obviously, the security landscape is evolving. It's never ending.
What are the major initiatives that you have to protect your factory and your plant floor? Yeah, I think the hardest part for us right now is our challenge. I should say, is just being able. Our plants are so big. Right. And trying to protect that entire landscape can become challenging. For me, it becomes an issue of, you know, how do I. How do I recover these sites quicker if and when, you know, an event should take place? It is very important that, you know, we work in partnership to try to understand the switching strategy, the technology, the visibility, so that we are able to deploy technology that will ultimately help me also, you know, maybe segment my plants a little further so that when it does happen. Right. I'm able to recover quicker versus trying to recover 6-10 million sq ft of equipment.
Maybe I can do it in 2 million sq ft. That makes a lot of sense. I know our team has been working together around Cyber Vision deployment. Can you explain how did Cyber Vision help you to do that? You talk about segmentation. What role does it play to help you in implementing your security posture? Yeah, for us, really. It's just giving my engineers the tools that they need to be able to do their job and do it well. Right. That's where I see stuff going. Right. How do I take the technologies, not only, you know, not only the hardware, but now the software applications, and give those tools to the right engineers with the right upskilling so that they're able to be more resilient. Right. As we're looking at the environment. Right.
Cyber Vision is one of those tools. Tools that now we have visibility into the plant floor. Now we have visibility into the switches. It's just very simple. It's a very simple equation for me. How do I arm, you know, my network engineers with the right equipment to enable them to be able to do their job more effectively. George, I have to ask you. You know, we are at school I today, a lot of people are talking about AI and AI and machine learning. You know, what's the impact into your space? You know, when you look into the factories, the industrial floor, how does it change the game for you? Yeah, I mean, I was really pleased to see some of the technology that you guys introduced yesterday because it instantly started. We need to get this.
We need to figure out how to do that. It's work that, once again, I'll reach out to the Cisco team to say, how do we enable this in the OT space? Right. In the OT environment? Because once again, AI is something that's coming. It's not going to go away. It's here and it's only going to evolve. I figure as AI deploys, it gets more mature, gets more standard, gets more defined. It's only going to enable us to be more efficient and more effective. Right. And quicker. Right. Some of the stuff that we do, the mundane stuff that we do, I think AI is just going to instantly wipe that out. To me, it's further than that. Right. How do you take then, that technology to make you more efficient? Right.
Whether it's programming a robot or whether it's trying to defend that very robot that you just programmed. Right. There's a lot of work within the company. Right. Within General Motors that we're doing to try to leverage the technology to the best of our ability. Right. It's always great to see companies like yourself that are actually now embedding the technology right into your applications. That's fantastic. One last question. What's your wish? You know, I ask you a lot of questions. Do you have a wish for Cisco? For Cisco, absolutely. First of all, I think one wish is that we continue our partnership. Right. I think it's very important that between both enterprises, you know, we work to do to come up with solutions that is only going to help us.
Then the other wish is that every Cisco employee buys a General Motors product. Thank you. Thank you, George. Thank you so much. You know what? Let's go and configure this GMC Sierra Ultimate. I think it's the time to do that. Right. All right, we have time. Thank you, George. Thank you, Samuel. I am here with G2, President and Chief Product Officer here at Cisco. G2, it is an honor to be here with you today. It's good to see you. How are you? I am doing great, G2. We have heard a lot of great, and Cisco has made a lot of big moves over the past year in AI and security like open sourcing your foundation model and leaning into agent based systems. How have these decisions really shaped us? What would you say?
I would say that 18 months ago, even 12 months ago, no one thought about us as an AI first company. I don't think there's much of a debate right now in the industry on whether or not we are relevant in the AI revolution. Like everyone thinks of us as a critical infrastructure provider for the build outs of data centers, for making sure that they can actually have the move to agent. Ok. From that perspective, really proud of the team and the body of work they've done. I think we had yesterday probably the biggest payload announcement that we've had in the history of Cisco in a single event. The beauty about that is we're just getting warmed up, like the teams are ready to go and you know, you'll see much, much more exciting stuff continue to keep getting announced. Exciting.
I love it. Speaking on that, throughout this week we have heard for the first time that security is an AI accelerator that is, you know, it's one of the most counterintuitive things because historically over the past 30 years, security has always been something that you trade off with productivity. You know, you either have security or you have productivity. Right now what's happening is if people don't trust AI systems, they're not going to use them. Security becomes a prerequisite for AI and it becomes an accelerant for adoption. It's one of those things that every company is kind of wanting to make sure that the adoption of AI goes in the right way because there's so much capital expenditure that's spent on it.
We become one of the kind of core elements that powers the adoption of AI in the world. Yep. No, that makes a lot of sense and it's exciting because we need to, we're at the forefront. I love being part of a company. That's the reason that that's the case is these models that these applications are built on, they are by definition what they call non deterministic which means they're unpredictable. The applications you're building have to be very predictable. How you build predictable applications on an unpredictable foundation is you create mechanisms to make sure that you can de-risk what could happen that you don't expect to have happen. This notion of us being able to have full visibility but also validate a model to say, is it working the way that you want it to work?
When it isn't, have the runtime enforcement guardrails in place, extremely important. That's basically what we've been able to pioneer in the industry. I'm super excited to see the early feedback we're starting to get from customers on this as well. Amazing. As we wrap Cisco Live, what's next? What should our customers keep an eye on? For Cisco, as we continue to innovate in AI security and enterprise technology, I think we have to, you know, infrastructure is still too complicated in the world and I think managing this infrastructure is hard. The IT administrators and the security professionals, they have to kind of operate as a multiplayer team to keep their environment up and running. These tools that are provided are single player tools.
What we want to do is make sure that we get into this agentic era also not just for the end user, but also for the IT administrator. What we announced yesterday was a very seminal kind of announcement around AI Canvas, which is this ability for us being able to generate user interfaces and dashboards that can really help an IT administrator collaborate with other people and say, how do I keep my network up and running? How do I keep my security breaches addressed? How do I be proactive in my environment rather than just reactive all the time? Can I make sure that this gets dramatically simplified in the way that I manage?
I think we, yesterday was a big step and we will continue to keep innovating because these products, the beauty about them is they get better the more they get used because they learn and then they actually adjust by themselves. That is the beauty about AI is, you know, the more you use them, it's like, you know, it's like wine. It ages over time in a really good way. Gets better with time. I love it. I love it. We have to hear from you in terms of. It's been an incredible few days. We're all excited. What would you say out of all these amazing moments would be the moment that stood out most to you?
For me, it's always the enthusiasm that customers have on how much the hard work that the teams are doing is changing their lives and how much they've actually been excited about using these technologies. That's probably the most gratifying of it all. If you were to say, what are the technologies? Look, we had a full refresh on our routers and Wi-Fi network devices as well as our switches. We had a canvas that was launched. We had a data center kind of set of partnerships that we've done. We've had partnerships with NVIDIA. We've actually had a great announcement, set of announcements with Splunk.
Hard to pick one thing, but I think that the composite thing that I would pick is we are now definitively a platform platform that interoperates and works in harmony with one another rather than just a bunch of point solutions. That transition took us a couple of years, but we are now on the other end of the transition and that's going to create value for customers in very different ways. When that platform is powered by AI, the momentum you'll see is very, very different. I am super excited. Thank you so much for your time, Jeetu. It's been an absolute pleasure. Thank you for having me. Thank you. What a great way to go out on this fantastic broadcast. Jeetu Patel, who kicked everything off for us Tuesday morning in the opening keynote along with Chuck Robbins, Kerry Palin again on Wednesday.
It is this access to leadership. Lauren, I mean, we are here with everybody. I should just figure out that all five of us for maybe the, what, is this the second time or is this the first time we have all been together? I cannot remember Tuesday. First, second, second time, whatever. All right, here you go. Lauren, you had that chance to sit down and talk with G2. What is it like when you get those one on one opportunities with our leadership? Everybody puts them on this pedestal, right? Oh, they are untouchable. We cannot be around G2. And there you are hanging out with G2. What is it like? You know what, for me, it was like, it was just another reminder that we have an incredible leadership team and it starts from the top.
Like it's not like a surprise that G2 is that way because all of our leaders and G2 just leads by example. Just like Chuck, just like all of the amazing leaders we have. It's really cool to be able to speak to him and see that, that warmth and feel like, you know what we're really like. They're, they really care about their people. It's not just for show. This is real. Cisco was really leading the way. Couldn't agree more. There's that old expression, right, that the fish swims or stinks from the head down. Man, does this fish swim when we're here at the show. Right? I'm so glad that I actually get to see all your beautiful faces here.
Before we make the official sign off to Cisco Live 2025, first of all, I just want to say the four of you are a joy, a delight. You are brilliant. You are so good at what you do, and I've had a phenomenal time with each of you, and I'm just grateful that we get this last moment here together. Let's do what we always do. Let's talk about that, like, one moment, like 10 seconds per person where we just talk about what we love the most. Michelle, since you're furthest away, we're going to start with you. It was my first Cisco Live. This was incredible for start to finish. I just want to say the one Cisco story, the single portfolio really showed through for me.
All right, I just really enjoyed getting to know Michelle and Z as new friends, adding to our family here. So glad to have Lauren back with us as well. Just keeping the energy high, keeping us all honest. I don't know. There's too much to talk about, I think that's enough. I think, of course, I'm gonna say security. Security, security, security. It was everywhere. It's everywhere. Infused in the network. That was my favorite part. I love that. I love that. All right, Z, you know, the one quote that I want to take home with me is one that Carrie had on yesterday. It said, AI was about IQ, but we rap. AI doesn't have EQ and it's about people. And I have three words for us. We are family. If I was able to sing that, I would sing that for us today.
You, by the way, were right there from the start with that. Thank you, Z. On behalf of myself, the entire team, this spectacular crew. I'm not going to name everybody by name, but unbelievable. The producers, camera operators, audio. I mean, folks, literally. Master control and lights and everything that they do to make us look and sound so great. We're grateful to all of you. We are grateful to all of you for being a part of the broadcast from all over the world. Thank you. We will see you at the next Cisco Live from APJC. Bye bye, everybody. Bye.