Good morning, everybody. Welcome. My name is Tony Kerrison. I'm the Head of Group Technology Infrastructure at Deutsche Bank. I also run technology in the Americas. I'm very pleased and grateful to be joined today by Jeetu Patel, President and Chief Product Officer for Cisco Systems, and Mark Patterson, EVP and Chief Financial Officer. Welcome, both of you.
Thanks.
Let's kick off then. Maybe have an introduction from each of you. Talk about your roles. I know, Mark, you've only been in the role a short time, but you can talk us through what you're doing at Cisco Systems.
Sure. Yeah, I've only been in this role 90 days, but I'm an old-timer at Cisco Systems. 25 years, a bunch of different roles and responsibilities. I think I've worked with every geography, every customer market that we have at Cisco Systems. Stepped into this role about 90 days ago, kind of went back to my roots, if you will. Started in finance, was CFO of a startup that Cisco Systems bought back in 2000. Just really, really excited. Great team. I think we'll get into it today, but certainly in the time that I've been at Cisco Systems, we've got probably more opportunity ahead of us and excited about that.
I'm Jeetu Patel. I took over the role as Chief Product Officer about a year ago, and I've been at Cisco for five years. It's probably worth noting that, for the longest time, we ran Cisco in multiple business units, and one of the byproducts of that is you sometimes felt like a holding company, because there were multiple different products that hadn't gotten integrated well. One of the things that we decided a few years ago was, let's make sure that we pull these things together. A year ago, we pulled that together so that we could really focus on creating an integrated platform rather than just individual products and piece parts.
That's actually gone remarkably well from the standpoint that the top priority of the company is to make sure that we integrate the technologies together rather than just have each one of them kind of operate independently.
Jeetu will be very humble, as you'd expect, but the pace of innovation at Cisco Systems has picked way up since he's been in this role. The whole focus on really making it a platform and an advantage in terms of our scale and the way the products work together, it's been really good.
That's great. Thank you. Cisco Systems has been a standard setter in enterprise networking for many, many years. As you look to the future, what's Cisco's vision for its role in the next era of innovation?
Actually, if you take a step back and say what is happening in the industry right now and why are we in a particularly good position, the traffic patterns because of AI for the network are going to be very different than what they've been in the past. Largely, most of that is going to be incremental traffic that's going to be added on. It's because of this movement to agentic AI, where you'll actually have these agents that autonomously conduct tasks and jobs on behalf of humans, and they're going to be working 7 by 24. Currently, the duration of autonomous execution is about, I don't know, 20 minutes. You ask deep research to do something, it takes 20 minutes for it to come back.
I think over time, as you have two hours and two days and two months and two quarters, you'll actually start to see very, very different levels of business outcomes. What that also does is create a very different level of sustained inference volume that's needed. The first big area is that this market's going to be infrastructure constrained: power, compute, and network. The second big thing is that it's also very sensitive to latency. You have to make sure that it's very fast. Agents are going to have a very different level of security exposure and risk than what we've had before. Rather than having security as a separate appliance from the network, which then adds latency, we feel like if you actually bake security into the fabric of the network, you'll actually lower the latency.
When you have a packet that gets forwarded, that packet should also be inspected on the same device. We've now got this capability called a smart switch, which has not just an NPU for network packet forwarding, but also a DPU for data processing that can do the packet inspection, all within one device. Those two dimensions, one, that there are going to be massive traffic surges because of agentic AI, and two, that security will need to be done at line rate, will fundamentally demand architecture shifts that we're pretty well positioned to address, because none of our networking competitors have a security stack, and none of our security competitors have a networking stack. We happen to have both of those, and that gives us a huge advantage.
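The latency argument behind the smart switch, inspecting and forwarding on the same device rather than detouring through a bolt-on appliance, can be sketched as a toy model. This is not Cisco code; the function names and the microsecond figures are invented purely for illustration.

```python
# A toy latency model of the smart-switch idea: inspect and forward on
# one device instead of hopping to a separate security appliance.
# Not Cisco code; all numbers below are assumed for illustration.

APPLIANCE_HOP_US = 50   # assumed round trip to a bolt-on appliance (microseconds)
INSPECT_US = 5          # assumed DPU inspection cost per packet
FORWARD_US = 2          # assumed NPU forwarding cost per packet

def bolt_on_path(packet: bytes) -> int:
    """Traditional path: forward to an external appliance, inspect, forward on."""
    return FORWARD_US + APPLIANCE_HOP_US + INSPECT_US + FORWARD_US

def smart_switch_path(packet: bytes) -> int:
    """Smart-switch path: DPU inspects and NPU forwards inside one device."""
    return INSPECT_US + FORWARD_US

pkt = b"\x00" * 64
print(f"bolt-on appliance: {bolt_on_path(pkt)} us")
print(f"smart switch:      {smart_switch_path(pkt)} us")
```

The point the sketch makes is structural, not numerical: removing the extra hop takes the appliance round trip out of the per-packet path entirely, whatever the real figures are.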
Very good. Mark, again, congratulations on the new role. How does your approach differ from your predecessors in managing the finance function?
I always like to joke that it'll be completely different. In reality, you know, Scott did a great job. He was a great partner. I worked with him; Jeetu and I both worked with him. I think he was with us five years, and he really played a big role in our move to software and subscription. As I look to Cisco Systems' future and my focus, you're going to get strong financial discipline, transparency, the things you'd expect, how we return value to our shareholders. You're also going to get the perspective of someone who was previously Chief Strategy Officer.
I think really leaning into this tight coupling with finance that we were talking about this morning, how close the team is with Jeetu's team, to really make sure that we're funding the things that matter and the big opportunities that we have ahead. Also, I think we've got to drive productivity. There's no end to the list of things right now that we can invest in, and the opportunities. We're going to lean in on AI and driving productivity to be able to really free up some resources for the things that we really need to invest in going forward.
Yeah, I was going to ask you about what you've seen in AI and running the company, and your perspective of that.
Yeah. Sami, who runs our investor relations, has a team that's built something pretty cool: an AI engine that essentially pulls in all the competitors' and peers' reports and earnings, as well as all our financial data, previous quarters, scripts, and all the analysts' expectations, comments, et cetera, and it listens along with the call and suggests answers. Pretty soon you won't need me at all. He probably doesn't think he needs me now, but it's pretty cool. I think, you know, some of the things Liz is doing in customer support, two-thirds of our calls now are handled through AI. Jeetu's doing a number of things, obviously, to really lean into AI in terms of coding and some of the efficiencies there. I think all the functions are leaning in.
We actually gave a discrete, and Jeetu knows this painfully, but we gave a discrete savings number for every function that we said, look, you should be able to save X amount this year on AI, and it should ramp by the time we exit the year and really look to do more of that next year. It's not necessarily to lower headcount, lower OpEx, but it's to be able to actually give him money for silicon and some of these opportunities that we need.
Right. Jeetu, the network's essential for AI infrastructure as you were just talking about there. What's your vision for the ideal enterprise AI network over the next five years?
I think what you'll see is, firstly, we do a pretty meaningful amount of work with four key constituencies of customers, and that really helps us out. For example, we serve some of the largest hyperscalers. As you folks might have tuned into our earnings call, you saw that we actually exceeded our target number for the year, and it was over a couple billion dollars in orders. That's the first constituency. The second one is neoclouds. The third one is service providers. The fact that we serve those three really helps us take all of the innovation that happens over there to the fourth, the enterprise. You can start to see a pretty advantageous position, where the learnings that we get from the hyperscalers, who are the most sophisticated of the sophisticated, can then be taken to the enterprise as well.
The way that we see it is the network in the future is going to be a secure network. I think that's probably one of the biggest areas in AI, and we talked about this a little bit: if you can bake security into the network, and make sure that it's actually inspecting every bit of traffic from the point in time that it originates on an endpoint, goes through an encrypted line, and terminates on the host, we have full visibility end-to-end. We're able to make sure that that's embedded into every part of the network in a hyper-distributed mode, rather than just the old-school perimeter firewalls that used to be there. What we've done is completely re-architect the environment.
That is probably the thing that sets us apart compared to any of the networking providers or the security providers. Anything to add?
No, I think you've said it perfectly.
Just to build on the point about the hyperscalers, do you see that customer mix changing over time? I mean, clearly you've got, you're selling a lot to the hyperscalers today. Do you see that evolving, changing at all?
Right. Today, if you think about the volumes, there's a lot of data center build-out happening with hyperscalers, right? What we anticipate is that there's going to be a re-acceleration in the enterprise as well, because at some point in time, when your volumes get large, you want to make sure that you're building out your own data centers. Also, for sovereignty reasons, you're going to start seeing data centers getting built out all around the world. We are working closely with different governments for sovereign data center build-outs, whether it be in the UAE, in Saudi, in Indonesia, so on and so forth. I do feel like there is going to be a point in time where you will have a certain percentage of workloads that will get re-accelerated in the private cloud.
Today, the majority of AI workloads happen to be in the public cloud with hyperscalers, while enterprise classical workloads are in the private cloud. Now, what's happening is every company we talk to is starting to think about modernizing their data center footprint so that they can get ready, because they're going to need to re-architect everything from the power requirements to the compute requirements to the network requirements within the data centers. They need to modernize their workplaces, whether it be a campus, branch, factory floor, store, or home office. All of that will have to be done with the level of uptime resilience that's needed, what we call digital resilience.
The acquisition that we made of Splunk, frankly, truly completes the portfolio, because it allows us to take network telemetry and security telemetry and correlate that data together, so that we can be way better at compressing the time for investigation during an outage and spend most of the time on response and remediation, rather than spending the first four hours trying to determine whether there was a network outage or a security breach, or whether it was an API overage or a bug in the application. We're able to go out and correlate that data pretty effectively. The way that we've messaged this to the market is there are three big problems we solve: re-architect and rethink data centers so that they can become AI-ready, future-proof your workplaces, and deliver digital resilience.
In each one of those areas, we have massively upgraded the portfolio, including in campus and branch, which is our largest business. We've got a full lineup of the portfolio that's refreshed at this point.
I completely agree with your point, by the way, because we've gone through exactly that right now. We've got a relationship with Google for the public cloud side. Now we're going to look internally, because the density and capacity that we've got is not going to scale to the level we need.
At some point in time, even a small fraction of margin, when you get to enough volume, adds up to a lot of millions of dollars.
Yeah, yeah, exactly. Mark, you were going to say something?
I was just going to add, we just came from our global sales meeting the last couple of days that we've had. It's interesting, being here 25 years, it's interesting to see how things have evolved over time. We used to have a lot of discussions about routing, a lot of discussions about switching, discussions about security, et cetera. These areas that Jeetu's pointing out in terms of the AI data centers and the digital resilience and the campus and future-proofing your workplace, those conversations now, they run across everything. You're seeing security being talked about in everything, the networking pieces being talked about in everything, the role that observability plays, Splunk, et cetera. The portfolio is really coming together, I think, in a much different way than it has in the past.
Yeah. If I could just add, there's a piece that's really important to understand. If you think about what the constraints are going to be for the future, I think there are going to be three areas of constraint. We're going to be massively infrastructure constrained just trying to satisfy the bandwidth and volume requirements of AI. The second big constraint is going to be a trust deficit. People have to trust these systems, and security is a bigger and bigger problem that most CIOs are facing right now, and most CEOs are facing. The third one is a data gap.
We happen to have a front row seat in every single one of those areas, because we are the core critical component for infrastructure that's low latency, high performance, and power efficient for the network. We actually have a full security stack that not only provides AI for cyber defense, but also secures AI models themselves. Thirdly, Splunk gives us a machine-data fabric that's very, very unique compared to what anyone else in the market has. Those three constraints tend to have a direct benefit to Cisco, and that's what gets us really excited.
Interesting. Why is Silicon One so important in your value proposition to customers?
Right now, if you look at the market, there's essentially not that many players that create their own network ASICs. There's Broadcom, and then there's us, and then there's a little bit of NVIDIA on the scale-up side. As you start thinking about the hyperscaler business, for example, we would not have a hyperscaler business if we didn't have our own silicon because what that does is provide an offset for the market, and it provides choice so that they don't get beholden to just one vendor because that actually gives them more pricing power. We become a very essential component.
The other thing that you should keep in mind, and this is where we get pretty excited, is that if you think about a switch, it has three components: the silicon, the system, which is the physical box, and the software. The margins are largely in the silicon and the software, not in the system, right? The fact that we make our own silicon, while some of our competitors are just resellers of other technology, allows us over time to have extremely advantageous positions in the market. I think there's a choice that the market desires that we bring to the table. It also allows us to have custom, purpose-built ASICs all on the same platform that get better performance over time.
Yeah. Yeah, I think from a financial standpoint, clearly, the margin stacking that happens with other players is certainly a benefit to us in terms of being accretive to our margins over time. We're going to build Silicon One into all of our products. Also, just having more control over the supply chain, the innovation cycle that goes into that, et cetera, is a big thing for us. You know, this may surprise Jeetu, but I actually like to carry one of the chips. I just happen to have the G200 chip with me right here. I just did that for you, because it's so important to me. I remembered to get it from under my pillow this morning.
You know, one of the things I have to say is Mark is a really funny guy. From the time he took over as CFO, he had become very serious. It's really nice to see this side of him.
He said he was really serious the last couple of weeks. I said, shouldn't the CFO be pretty serious? He said, you're right.
Fantastic. Switching gears a little bit, can you briefly describe your partnership with NVIDIA? How does that help in your go-to-market strategy and your products, your thinking?
I think it's a very strategic partnership. You should know that we're very plugged into the AI ecosystem at large, because we've also been strategic investors in a lot of players in the market. We are investors in Groq and Anthropic and Cohere and Mistral and Scale AI; that really helps us. We just invested in Thinking Machines, Mira Murati's company, so on and so forth. We've been very tightly integrated into the AI ecosystem. NVIDIA is, of course, one of the most strategic partners that one could hope to have. The thing that we've seen as a buying pattern with NVIDIA is that a lot of customers pay attention to the NVIDIA reference architecture. They publish a reference architecture, and that's how the buying decisions get made, based on the recommendations in the reference architecture.
We happen to be the only non-NVIDIA silicon provider that's in the NVIDIA-approved reference architecture program, right? That really helps, because our switches work with their NICs in their reference architecture. That's how the build-outs in the enterprise happen. We've got a pretty strong partnership from that perspective. They have this notion of an AI factory, and we have built something called a secure AI factory, where our security capabilities get added to every layer of that stack. If you think about a product like AI Defense, which is our product for model security, that's actually part of the secure AI factory, and we integrate with the NIM framework.
What that allows us to do is give customers the assurance that if they're using NVIDIA technology, they can use our network for intra-cluster communication, as well as for what they now call scale-across, which is across data centers. You'll start to see more and more patterns of usage where there's going to be more and more emphasis on going across data centers; you might actually have a training run span them. As that happens, we have our optics technology that can go ultra long haul, even for connecting data centers. All of those technologies that we have, being part of NVIDIA's reference architecture, really help validate the criticality of it. We've had a great partnership with Jensen, and we meet with them regularly. It's been fantastic so far.
Great. Good. You talked about agentic AI and the traffic and the increase in chatter and everything like that. What do you think companies need to do in terms of getting ready for that in campus and branch networks? How does Cisco Systems help get through that?
Yeah, so first, it's important to understand the traffic patterns, right? If you look at a chatbot, I ask a question, I get an answer back. Those typically have very spiky traffic patterns. The utilization spikes up, but then it comes right back down. When you think about agentic, and it's a 24/7 kind of operation, the traffic patterns start to get much more sustained and persistent over time. Our current infrastructure is simply not built to accommodate that level of traffic. You add to that physical AI, where you're going to need more edge-based computing and edge-based networking, and that's only going to compound the requirements. If you think about AI in three phases, you start from a chatbot, you go to agents, and then you go to physical AI, as the three most logical phases right now.
In phases two and three, your infrastructure requirements go up quite precipitously, both in campus and branch as well as in data centers. What we need to do is make sure that, and by the way, it starts from power. If you have large systems, you need to run very different kinds of power in the data center as well. It starts from power constraints, then compute, and then the network. What you're starting to see is re-architecting of data centers to accommodate this new additional volume of usage. You're also starting to see re-architecting of campus and branch networks, because everything from Wi-Fi to routing to switching needs to get rethought. One of the big areas where customers are really excited about the work we're doing right now is management simplicity.
Look, we used to have a very broad portfolio. One of the things that we were criticized about in the past is the complexity it sometimes took to manage these architectures. What we've done is converge these architectures together. If you think about Meraki and Catalyst, they're now one physical hardware box, one license model, and one management plane in the cloud with enterprise-class capabilities. We've also created a product called Cisco AI Canvas, which is going to be out in the October, November timeframe, and what that's going to allow us to do is deliver an agentic ops framework. You will basically have an agent that can generate a UI and correlate data across multiple different domains.
If I have an outage to troubleshoot, the agent will proactively detect it, saying, hey, there's a pattern of usage that's not right. There's a ServiceNow ticket. You can just paste the ServiceNow ticket into a chatbot, and before you know it, the agent has given you all kinds of dashboards, pinpointed where the issue is, told you whether it might be a security breach or a network outage, and dynamically created a dashboard into which you can bring other personas. If you're a networking person, you can bring a security person into a collaborative workspace and coordinate with the agent. By the end of it, you have the troubleshooting done in a fraction of the time it would have taken otherwise.
That kind of notion of management simplicity, troubleshooting simplicity with AI, and a fully refreshed portfolio across data center and campus branch is what actually creates the unlock in our minds.
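The correlation step at the heart of this troubleshooting story, pulling signals from the network, security, and application domains together to pick a likely cause, can be sketched in miniature. This is a hypothetical illustration, not Cisco's implementation; the field names, thresholds, and verdict strings are all invented.

```python
# A minimal sketch of cross-domain incident correlation: classify a
# likely cause so humans start at remediation, not investigation.
# All names and thresholds are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class Telemetry:
    """One correlated snapshot across network, security, and app domains."""
    packet_loss_pct: float
    failed_logins_per_min: int
    api_calls_per_min: int
    api_quota_per_min: int

def classify_incident(t: Telemetry) -> str:
    """Correlate the signals into a single likely-cause verdict."""
    if t.failed_logins_per_min > 100:
        return "possible security breach"
    if t.api_calls_per_min > t.api_quota_per_min:
        return "API overage"
    if t.packet_loss_pct > 5.0:
        return "network outage"
    return "application bug (escalate to the app team)"

# Example snapshot: heavy packet loss, normal auth and API traffic.
snapshot = Telemetry(packet_loss_pct=12.0, failed_logins_per_min=3,
                     api_calls_per_min=800, api_quota_per_min=1000)
print(classify_incident(snapshot))  # network outage
```

A real agentic system would, of course, learn these patterns rather than hard-code thresholds; the sketch only shows why having the telemetry in one place collapses the "is it network or security?" investigation into a single query.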
Yeah, that's a big change for my role in managing these environments. It's a phenomenal difference.
The only thing you didn't mention that you might want to talk about is zero trust and identity, and the importance of that as you move to agents.
That's right. By the way, for all of these products that we've built, we're starting to build our own purpose-built models. We have a foundation model we built for security, and we've built something similar called a deep network model for networking. These give you expert-caliber responses back. Our deep network model actually performs better than a trillion-parameter general-purpose model. To Mark's point, a lot of companies are focused on this notion of zero trust implementations, which is least-privileged access: how do I make sure that the individual connecting to the application only gets permissions to the degree that they need, and no more? Right now there's a huge issue around over-provisioning permissions.
As you move into the agent world, this problem is only going to get exacerbated because now the agent is going to need to have an identity that is different from the human identity, even though they might both be using the same device. If I'm using an agent today, the agent might use my user ID and password to be on my laptop, but you don't want that. If I've given my agent permission to say, go ahead and use my email, it's okay if the agent sends an email to my team. I don't want to have the agent send an email to my board of directors. You're going to need to have some kind of fine-grained control that needs to be in place as well.
That entire picture of zero trust network access will need to expand to universal zero trust network access that encapsulates not just humans connecting to applications, but agents connecting to agents and IoT devices connecting to IoT devices. We have a universal ZTNA framework that we've introduced in the market at this point, and it's starting to drive some really good competitive takeouts against our competitors as well. This was a product, by the way, that we built from the ground up over the past two, two and a half years. This was not something that we retrofitted from something we had. We built a product from the ground up, our secure access product, and that became the core foundation of it. That's now fully integrated with our firewall capabilities and our segmentation capabilities.
All of those things pulled together are managed under one management plane. This notion of platformization where people want to have fewer products because security is a highly fractured market and tied into the network is something that we are able to do better than any other player in the market at this point.
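The fine-grained agent-permission idea described here, an agent identity with its own narrow grants instead of inheriting the human's credentials, can be sketched in a few lines. The identity names, scopes, and policy format below are all assumptions made for illustration; this is not a Cisco API.

```python
# Sketch of least-privilege agent identity: the agent is a distinct
# principal with explicitly scoped grants, so it can email the team
# but not the board. Names and the policy format are hypothetical.

AGENT_GRANTS = {
    "agent:jeetu-assistant": {
        ("email.send", "team"),     # may email the owner's team
        ("calendar.read", "self"),  # may read the owner's calendar
    }
}

def is_allowed(identity: str, action: str, audience: str) -> bool:
    """Least privilege: permit only an explicitly granted (action, audience)."""
    return (action, audience) in AGENT_GRANTS.get(identity, set())

# The agent may email the team, but not the board of directors.
assert is_allowed("agent:jeetu-assistant", "email.send", "team")
assert not is_allowed("agent:jeetu-assistant", "email.send", "board")
print("policy checks passed")
```

The design point is that the agent never authenticates as the human: revoking or narrowing the agent's grants leaves the human's own access untouched, which is what makes the agent-to-agent world auditable.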
We have to trust our agents then.
Only selectively. Trust but verify.
Okay. Good. Mark, switching to you. Cisco Systems had a solid quarter despite some challenging conditions. What does this tell you about the operational discipline and your business model resiliency within the company?
Yeah, first off, it was great to have my first quarter be a beat-and-raise and have a good quarter to come into. We're seeing really good balance across geographies, across the technologies, et cetera, and seeing a lot of momentum. I think you're seeing very good financial discipline. We showed earnings growing faster than the top line. We also guided for the same thing in Q1, for earnings to grow faster than the top line, and we guided the same thing for FY 2026 overall. Right now, we have a lot of tailwinds in the business, and most of them are right in our wheelhouse: a lot of tailwinds relative to web scale and what's happening with the AI infrastructure build-outs, as well as the enterprise space, this campus refresh, and the way that security and networking are coming together. Overall, I think very, very good.
Any thoughts on tariffs and what that's going to do?
Yeah, so I think, you know, obviously what we wanted to do and have always done is just be very clear about the assumptions and really lay that out in terms of what underpins the guide for us. It was really a minor impact in terms of Q3 and Q4. It definitely hits us. You know, we've assumed that the tariffs that are in place today will remain in place. We've also assumed that, you know, the USMCA exemptions and the semiconductor and electronic components exemptions will stay in place as well. If those things change, then certainly we'll be able to react. We've got a very global supply chain. This is one of those areas where I think our scale is actually an advantage in the way that we can adapt and move.
If you look at just the first set of tariffs that Trump put in place on China in his first term, we've mitigated over 80% of those tariffs by just being able to make the moves that we can with our scale.
Does it help having your own silicon?
Tariff-wise, I'm not sure that it really plays there, but given everything that we mentioned earlier, it's certainly a big thing. Our cost basis, again, will be different from those that are procuring it from other players too. That could potentially help us.
Great. That's the end of the questions that I have today. We've got a few minutes left if we can go to the audience and see if there are any questions. One over there. Get a mic, please.
Thanks for your time this morning. It's really fascinating to hear the journey and the story that you guys are telling. I was wondering if you could elaborate a little bit more on the culture change that it's taken to move from what you said, kind of like a holding company to one that's a little bit more incisive and rowing in the same direction.
Yeah, it's really interesting, because one of the big lessons we learned on this is that Conway's law is true, and you tend to ship your org chart. Org charts matter. If you have three or four leaders and you have to have a committee to make a decision, the decisions get slowed down. Mark was at one of those meetings, when he was Chief Strategy Officer and I had just taken over the job, where we did an offsite with the top 50 leaders. There was one project on AI that had been taking, like, nine months, and we weren't able to get it done. Literally within a matter of two weeks, we were able to ship a product, because the objectives got aligned, people had very clear marching orders, and we were able to just go out and execute.
One of the things we convey to our team internally is: operate like the world's largest startup, and operate with speed at scale. We've got the benefit of scale, but if you start operating with speed, everything changes. The reality is, most engineers want to work on meaningful projects that get to scaled adoption. That's the most rewarding thing for engineers. The challenge is that as companies get large, the systems get too complicated to move with that level of agility. One of the challenges we had is that our mental model was very much an acquisitive one, where someone would always come and ask me, "Hey Jeetu, it seems like this is a really important area.
What are we going to buy?" The thing that we've changed now is we've said, "If this is a really important area, what are we going to build?" By the way, we won't be shy about deploying the balance sheet if we find something that can accelerate our vision, but the strategy should not be built around acquisition. The strategy should be built around building products that people love, that they can talk to their friends and family about, and making sure that there's a level of asymmetry in the market rather than just playing catch-up. That change in culture has truly galvanized our company, in a way that even I had not imagined, within a very compressed amount of time. We were at the sales kickoff; this was the highest-rated sales kickoff we have had in, like, 25 years.
There is a spring in the step in the employee base, because they're seeing us winning and they're seeing us doing great, building great products. We are at the front of a trend that is being defined, and the architectures are getting rethought; we are defining those architectures rather than trying to chase and follow them. When a company with the size and scale of Cisco Systems starts doing that, essentially, you get your swagger back. That's the thing that's the most exciting in my mind.
Got one minute left, so if there's a very quick question. Thank you very much, Jeetu. Thank you, Mark.
Thanks, Tony.
It was a pleasure having you on stage with us today. Thank you.
Thank you, sir.