Good afternoon, everyone. For those in the room, if we can go ahead and get seated, we're going to get started here. Thank you all for attending and joining us for the A10 Networks Investor Day. My name is David Schroeder. I'm the Vice President of Corporate Development here at A10. This call is being recorded and webcast live, and may be accessed for at least 30 days via the A10 Networks website. Before we begin, please refer to the safe harbor statement on Slide 2. Today's discussion includes forward-looking statements subject to risks and uncertainties that could cause actual results to differ materially. Additional information can be found in our most recent Form 10-K and Form 10-Q. Unless otherwise noted, financial measures discussed today other than revenue are non-GAAP. Reconciliations to GAAP are available on our website, a10networks.com. We last conducted an Investor Day 3 years ago.
Our goal today is not just to provide an update and new goals, but to establish a clear map of the next phase of our evolution. We'll discuss how today's A10 is strategically aligned with durable, secular catalysts, how we continue to measure our business, where we compete and why we win, and how disciplined execution continues to drive consistent performance. Let me quickly walk you through the agenda so you know what to expect. We will begin with a strategic and operational overview led by our President and CEO, Dhrupad Trivedi, where he'll walk you through how A10 has evolved over time, how our strategy has translated into measurable execution, and how we are operating today at scale to best serve our customers.
Following that, we're excited to host a fireside chat that focuses on one of the most important structural shifts impacting infrastructure today: AI and the future of the data center. Joining us for this discussion are Madhav Aggarwal, A10's Staff Machine Learning Engineer, Dr. Rene Meyer, CTO at AMAX, and Sean Pike, our Head of Information Security here at A10. Together, they'll discuss how AI workloads are changing traffic patterns, infrastructure requirements, and security considerations in and around the data centers, and what that means for the ecosystem as a whole. Next, Michelle Curran, our Chief Financial Officer, will walk through how our strategy and execution translate into sustained financial performance, including our operating model, financial discipline, and long-term framework we use to guide capital allocation and value creation.
We'll conclude the day with an open Q&A session, where we welcome your questions and discussion until 3:00 P.M. Pacific. For those of you joining the webcast, you can use the webcast portal to submit your questions ahead of time. With that, let's go ahead and get started. It's my pleasure to welcome to the stage our President and CEO, Dhrupad Trivedi.
Thank you, David. Good afternoon to all that are attending here in person and online. I'm pleased to be here today to go through the next section of our Investor Day. For those that may not be so familiar with A10, let me start with a snapshot of who we are as a company. A10 is listed on the New York Stock Exchange, and we deliver secure, high-performance networking for critical infrastructure needs. We recently reported our full year results, and for full year 2025, we delivered revenue of $290.6 million, an Adjusted EBITDA margin of 29.6%, and non-GAAP EPS of $0.90. We are based in the Bay Area.
The company was founded in 2004 by leading engineers in the segment, went public in 2014, and today we have about 7,000 customers with a very strong global footprint and more than 200 patents that are at the core of our innovation engine and technology-driven business. Before we jump to what is ahead, I want to go back to what we said three years ago at our Investor Day, because one of the important parts of our culture is holding ourselves accountable and demonstrating results. So three years ago, at our Investor Day, we laid out a framework to create a consistent strategy and a business operational engine that would deliver against these goals. At that time, the company was quite removed from these goals. First, we set forth the idea of having a Rule of Forty framework.
The reason for that was simple: our business is technology-driven and we need growth, but we need to have sustained profitability to actually invest in innovation at a level that our customers expect and the market requires. So we set forth a goal that said, on a combined basis, our growth plus EBITDA percent would reach 40, and you will see the progress we have made. Second, we set forth a target of 10%-12% revenue growth. To put it in context, this was just post-COVID, and so this was an ambitious number, if you will, and quite a bit ahead of most of our peers at the time. And more importantly, we set a goal that our product revenue would grow faster than our total revenue. And the reason that is important is our product revenue is a lead indicator of future support and service revenue....
But secondly, product revenue is a great indicator of whether we are developing the right technology that creates customer value. Third, we saw that the issue of the time for many customers was security and the volume and scale of cyberattacks. And even though we had started on this path, we set forth an ambitious goal to have security-led revenue reach 65% of our total revenue from where we were at that point. And lastly, to get the right balance and a sustainable business, a goal of non-GAAP Adjusted EBITDA margin of 26%-28%. Goals are interesting, results are more interesting. So let's see how we did. For fiscal year 2025, we achieved a Rule of Forty score of 40.6, adding our 29.6% EBITDA margin and roughly 11% revenue growth.
Second, in that period, we also achieved 11% revenue growth overall and 20% product growth, which we see as a lead indicator, and which continued to improve our confidence in the business for 2026 when we discussed it on our call. Third, security-led revenue reached about 72% of the total. Our goal is to maintain it at least at 65%, because ultimately, we are selling a portfolio of solutions that includes multiple products. We believe this creates strong relevance with our customers and helps us stay relevant to their future roadmap as they plan initiatives such as AI. And last, we achieved a non-GAAP Adjusted EBITDA margin of 29.6% for the year. The progress that we made obviously didn't happen in a vacuum.
It required continuous disciplined execution, agility in our strategy, selecting the right markets, and then daily disciplined execution on what we chose to do. So we want to talk about what we are going to do next. But before that, I wanted to zoom out to give everyone a perspective on what we did in the last three to five years that has allowed us to continue to progress our business. To that end, first, we are going to play a short video, and then I'll continue from there. Okay? Thank you.
The data center evolves, the challenges remain. In the early days, infrastructure was centralized, on-prem, and built for scale. Applications lived in one place. Success was measured by performance, scale, and uninterrupted availability. When everything lived in one place, performance and uptime were table stakes. Then the data center changed. Applications spread across cloud, hybrid, and distributed environments. Dependencies multiplied. The attack surface expanded. Performance, security, and availability could no longer be treated separately. Infrastructure had to perform everywhere, protect surfaces, and integrate across complex environments. Today marks a true inflection point. AI fundamentally changes how apps behave and how traffic moves. Inference at scale drives exponential east-west traffic, real-time decisioning, and latency sensitivity measured in milliseconds. Security pressure moves deeper into APIs, models, and application logic. Performance and security converge into the same problem at unprecedented scale.
Across every era, the core challenges remain: reliable performance and security that scales. A10 Networks has been there at every step, enabling a secure and available digital world.
What you just saw captures the evolution of the data center across multiple eras, from centralized infrastructure to distributed architectures, and now to intelligent, AI-driven environments. To understand where we are going, it is important to start where we began. For much of the first two decades of this evolution, scale was physical, performance was engineered, and availability was existential. That was the environment in which A10 built its foundation and earned the trust of thousands of customers. From our founding in 2004, our focus was simple: help customers run infrastructure that could not fail. In service provider networks, that meant enabling the internet itself. We addressed IP address exhaustion and traffic growth for millions of users through products such as carrier-grade NAT. In the enterprise environment, it meant delivering highly reliable application delivery, ensuring performance, uptime, and efficiency for mission-critical networks....
Across our customer base, the value proposition was consistent: scale without disruption, optimize infrastructure investment, and protect availability under any load conditions. That focus earned trust. By executing in some of the most demanding environments, A10 grew to more than 3,000 customers globally and became embedded in the network expansion plans of many of the largest operators and enterprises. As traffic volumes increased and architectures matured, risks evolved. Availability alone was no longer enough. So toward the latter part of that era, we expanded into DDoS protection, extending our role from performance and scale to protection and resilience. That combination of infrastructure depth, operational scale, and customer trust positioned A10 for the next phase of the data center. As data centers moved away from centralized environments, architectures became more distributed, more software-defined, and increasingly hybrid. That evolution changed what customers expected from infrastructure providers.
Success was no longer defined by a single deployment model. It was defined by the ability to meet customers wherever their applications live, and for us to be where they were going. Through this period, A10 continued to drive a deliberate transformation. Through COVID, supply chain shocks, and macro uncertainty, we rebuilt the company financially into the disciplined, scalable model you see today, while expanding our insertion points beyond the traditional data center and into edge, interconnection, and application-layer environments. We virtualized our portfolio to support hybrid and cloud environments, giving enterprises flexibility without forcing them to make architectural trade-offs. We moved from selling individual products to delivering integrated solutions, where performance, scale, reliability, and security became inseparable.
As this continued to develop, security became even more inseparable from delivery, and our acquisition of ThreatX strengthened our ability to protect modern applications at a time when that exposure was growing. Through this evolution, our customer base expanded to more than 7,000 customers globally as they adopted more and more distributed or hybrid architectures. This was A10's transition from a best-in-class, hardware-centric infrastructure company to a sustainable technology platform. We were able to deliver our solutions in the consumption model that worked best for our customers, in any mix of on-prem, virtual, or container-based solutions. That transformation matters because it positioned us for what comes next. We are now in the intelligent era, an era defined by AI-driven workloads, machine-to-machine communication, real-time inference, and exponentially increasing east-west traffic.
The concept of a core will continue to blur with the edge over the next several years as AI redefines network architecture and protection points. Sovereign AI is in fact emerging as a meaningful and structural driver of infrastructure investment. Governments and regulated industries are building AI capabilities within national borders, outside of hyperscale public cloud. These environments require secure, high-performance networking and favor purpose-built architectures. That dynamic aligns well with A10's global footprint and 20 years of experience. Infrastructure is no longer just transporting applications. It is supporting autonomous systems, decision engines, and AI models that operate at machine speed. Looking at A10's evolution across every era we have walked through today, the core challenges have remained remarkably consistent. What has changed is the intensity and velocity. Performance requirements are tighter. Security risks are bigger and more dynamic. Architectures are more distributed.
The demands are higher, but the fundamentals remain the same. While architectures now span on-prem, cloud, hybrid, and edge environments, customers continue to demand the same fundamental outcomes: low latency at scale, high throughput with always-on reliability and uptime, security embedded in the traffic path, and world-class customer support and close technical partnership. These have, in fact, defined A10 from our founding. What has changed is not the nature of the challenges, but the scale and intensity at which they must be solved. Our relevance today is actually greater than ever before. We built our reputation at the core of the network, where performance and availability are mission-critical. As that becomes more distributed, our participation has also expanded beyond the traditional core into much broader portions of the network. That increases both our strategic importance and our growth opportunity.
Let me walk you through what A10 delivers today, how our portfolio aligns to these requirements, and how the rise of AI is further accelerating our strategy.... A good way to visualize what we do today is to see that A10 serves customers across three core solution areas. Each one reflects a different stage of infrastructure maturity, but all are unified by the same performance, scale, and security foundation. Our legacy networking solutions support some of the largest and most demanding networks in the world. This is where A10 made its name by consistently delivering best-in-class solutions that directly drove customer success. This includes foundational infrastructure that powers service providers, where scale, throughput, and reliability are not negotiable. These solutions also support many large enterprises that generate significant traffic and require continuous innovation to operate efficiently. In fact, most of you are probably using these networks today.
Looking ahead, we are extending this legacy with our AI predictive performance and analytics solution. A10 has deep expertise and a long history of operating at scale and understanding network traffic in tremendous depth globally. This solution leverages that know-how by examining traffic patterns, system behavior, and network signals to help customers anticipate performance issues before they can affect availability, SLA performance, or downtime, and at the same time design their network capacity more efficiently, all of which have direct benefits on uptime, CapEx, and OpEx. Next-generation networking reflects how customers are evolving: hybrid architectures, virtualized deployments, and cloud-enabled environments. Here, customers want flexibility: hardware where performance demands it, and software or virtual appliances where agility matters. This is where solution selling becomes essential, delivering consistent outcomes across very different architectures. Importantly, this segment increasingly supports AI infrastructure today.
As AI workloads generate higher volumes of east-west traffic, API-driven communications, and real-time inference, next-generation networking becomes essential to preserve low latency, scalability, and consistent performance. As AI environments scale, load balancing regains strategic importance. This is no longer simply about distributing web sessions. It is about optimizing high-value compute resources. In AI deployments, precision traffic management directly affects latency, GPU utilization, and ultimately, power efficiency. These are areas where A10 brings decades of expertise with an architecture that is purpose-built to deliver deterministic performance and scale in demanding environments. Looking ahead, we are extending these capabilities with our AI prompt and traffic routing solution, designed to support the next phase of AI-driven infrastructure. This solution is intended to help customers manage performance, enforce policy, and maintain reliability as AI workloads scale across multi-tenant, hybrid, and cloud-enabled environments.
It builds on our next-generation networking strengths across increasingly complex architectures. Finally, while security now runs across everything we do, we also offer dedicated security solutions designed to operate natively in the data path. These capabilities extend beyond point protection, enabling always-on defense. As applications become more distributed and API-driven, protecting availability and application integrity becomes inseparable from traffic delivery itself. Security is embedded directly into the data path to provide always-on protection and intelligent threat mitigation without introducing additional latency or friction. This is increasingly critical in an AI-driven world, where machine-to-machine traffic, APIs, and real-time inference dramatically expand the attack surface. We are extending our security portfolio to address the risks driven by AI applications. This capability reinforces our approach of embedding security directly in the data path, supporting safe and scalable deployment of AI workloads across modern infrastructure.
All three solution areas are built on a unified architecture with a shared operating system that has been organically developed and perfected over a long period of time, and a natively integrated security platform, all managed through a unified control plane. That consistency allows customers to operate at scale while managing performance, availability, and security as a single system. Together, these solution areas reflect how customers actually operate today in production. As we look ahead, our next-generation networking and network security portfolios represent our primary growth markets, already participating in the global AI infrastructure build-out. So let's take a step back. Why do customers choose A10? First, we help customers deliver fast response times and low latency for applications that simply cannot tolerate delays, whether they are running on-prem or in the cloud. We meet them where they are and solve their problems.... Second, we protect infrastructure investments.
Our solutions are designed to let customers continue to run their assets without constantly ripping and replacing them, while providing a next-generation roadmap and technologies that deliver continuous breakthroughs in the economics of that performance. Third, we support seamless migration to the cloud and between cloud and non-cloud environments in a neutral way, so that our customers can focus on their business outcomes rather than continuously changing their IT infrastructure. Fourth, we help customers secure and deploy new AI applications. In terms of cyberattacks, we continue to help defend against new, complex, and highly voluminous attacks now enabled by AI. And last, we continue to simplify IT ops with more automation through the new platforms we have released recently, and continue to work on customers' workflow optimization to further improve how they manage these networks.
Finally, and importantly, these are not theoretical use cases; these are customers using these products at hyperscale today across multiple environments in the ways I described. The trust from these customers is what has allowed us to continue innovating in the right areas that create direct value for them. If you look at the next page, you can see that trust reflected in the breadth of organizations we work with, from global enterprises to service providers, to some of the most recognizable brands in the world. These are customers with zero tolerance for downtime. They choose A10 because we help them meet performance and security requirements that grow more demanding every year. That experience becomes especially important as AI workloads are layered into their existing environments.
We talked a little bit about what we do, and over the next two slides, I'm gonna talk a little bit about how we fit in the bigger picture and what happens with AI. When customers build new infrastructure, whether in a centralized data center, distributed cloud, or the edge, a few foundational layers come together. At the bottom, it starts with compute, storage, and power. On top of that, switching and routing provide connectivity, and above that sit applications and workloads. What ties all this together, and where complexity really begins, is traffic delivery and security. This is where performance, availability, and protection have to work together in line and at scale. This is where A10 operates. We sit directly in the data path across core and edge environments, enabling traffic and application delivery while embedding security capabilities without adding latency or friction.
Importantly, this also explains when A10 becomes relevant. We are often specified early during design, when customers are modeling capacity, traffic flows, and resiliency. But our solutions become critical during the activation and scale stages, when applications go live, traffic ramps up, and availability and security move from design assumptions to operational requirements. That relevance only increases as we support AI workloads. Now, let's expand on what actually changes when that happens. In an AI infrastructure, compute becomes GPU-dense. Switching and routing carry dramatically more east-west traffic. Applications become API-driven and machine-to-machine, and security shifts from perimeter to continuous inspection. And in the middle of that, traffic requirements multiply: higher concurrency, lower latency tolerance, and real-time inference sensitivity. But AI also introduces something new. Above applications and workloads sits what some are calling layer eight, or the inference layer.
This is where prompts are routed, models are accessed, and responses are governed. It introduces new requirements around prompt security, usage control, and intelligent routing. So what are we doing? First, we are deepening our role in the traffic layer with our AI prompt and traffic routing solution. This will extend upwards, impacting all of the layers above. Second, we will expand into the application layer with our AI predictive performance and analytics solution, applying our traffic intelligence to proactively optimize AI workload before performance degrades. And third, we will help to extend security upward with an AI inference security offering. As requirements multiply, A10's role expands, and we believe our decades of traffic intelligence and machine learning expertise positions us to best serve our customers in their AI infrastructure journey. Now, let's step back and take a very quick look at what's driving that change at a market level.
When AI infrastructure is discussed, most of the attention goes to compute, GPUs, and training clusters. From our vantage point, what we are seeing in real production environments is that AI becomes a traffic problem first. AI workloads drive nonlinear traffic growth. As the use of AI moves from the current infrastructure build-out toward enterprises using it for business gain, requirements evolve based on the needs of data management, privacy regulation, and network security. In fact, as I mentioned earlier, sovereign AI will likely be a major secular driver of growth in the next few years, and we are squarely positioned to support it with solutions across public cloud, private cloud, and on-prem environments. A single user interaction with an AI-enabled application doesn't result in just one request. It can trigger dozens, sometimes hundreds, of internal calls. Those requests are persistent, primarily east-west, and they live inside the data center.
At the same time, APIs become the dominant interaction model. This results in a fundamental shift in traffic patterns. Industry research suggests that by the end of last year, AI workloads represented roughly 30% of all data center traffic, significantly higher than just a few years before. It means infrastructure is being stressed in new ways, because before you ever hit a compute constraint, you hit a traffic constraint. Which brings us to the second reality that AI introduces: the economics of latency. Once traffic volumes increase the way we just described, the next constraint shows up very quickly, and that constraint is latency. AI inference is fundamentally real-time. Whether it's a chatbot, a recommendation engine, fraud detection, or an AI-enabled workflow, users expect immediate responses. That changes the economics. In inference-driven applications, milliseconds matter. Latency doesn't exist in isolation.
It compounds across service chains: an AI request might touch an application, call multiple APIs, invoke one or more models, and return results back up the stack. Every hop adds latency, and even small delays across them degrade performance quickly. That's why throughput has to scale without introducing bottlenecks. You cannot solve this problem by pushing traffic off to side systems or to out-of-band inspection. Performance has to be delivered in line, directly in the traffic path. And what this means in practice is that infrastructure decisions now directly impact application responsiveness, user experience, and ultimately business outcomes. Our own customer research shows that the vast majority of organizations view low latency as critical to delivering real-time AI experiences, and they are actively investing to reduce it. So as those workloads scale, latency stops being a technical metric and starts becoming an economic one.
When that happens and traffic increases, something else changes as well. AI expands the attack surface. As AI becomes more embedded into applications, the attack surface expands dramatically. These new intersection points create new types of threats and new ways to be breached, targeting application flows, APIs, and business logic directly. Industry data reflects this shift. As you can see in the numbers here, Gartner estimates 90% of web-enabled applications now expose more attack surface through APIs, and the impact is real: more than half of companies experienced such attacks in the last year or two. We are also seeing this play out in DDoS attacks, which have surged about 350% and now exceed 30 Tbps, which was unimaginable even three years ago.
Which brings us to the broader point: AI is changing the fundamental requirements for infrastructure, and those changes are what we want to explore next from an industry-wide perspective. To explore what this means beyond A10, I would like to turn it back over to David, who will lead us into a fireside conversation with industry experts. Thank you. David?
Thank you, Dhrupad. What we've just outlined are structural changes driven by AI, not point solutions or short-term trends. To dig deeper into how these shifts are playing out across the data center ecosystem, we wanted perspectives from people who study, design, and operate AI infrastructure every day. So for the next segment, we'll move into a fireside conversation focused on how AI workloads are changing traffic patterns, performance requirements, and security considerations in and around the data center. So I'm pleased today to introduce the panelists. First, we're joined by one of our own, Madhav Aggarwal, who leads AI-focused technical strategy here at A10 Networks. Madhav works closely with customers and partners on how AI workloads are changing traffic patterns, performance requirements, and security architectures across the data center, and is key in the development of our AI portfolio. Joining him is Dr.
Rene Meyer, CTO at AMAX Engineering, a leader in AI-optimized data center and infrastructure solutions. Rene works closely with customers deploying AI at scale, offering a first-hand view into how power, performance, and traffic requirements are shaping modern data centers. And finally, Sean Pike, A10's Chief Information Security Officer. Sean works closely with customers to protect high-volume, mission-critical environments, and he applies the same rigor internally at A10. He brings a practitioner's perspective to today's conversation on how performance and security come together in AI-driven architectures. Together, this group brings perspectives from product development, real-world deployment, security operations, and AI-driven infrastructure. So with that, let's get started. Thank you guys for joining us. The goal of this conversation is to zoom out.
We wanna talk about what's actually changing in real environments, what trade-offs are being made today, and what capabilities will matter most as AI moves from experimentation into production at scale. So I'd like to begin with the macro view. I'll start with you, Madhav. When you zoom out, what do you think the market is most misunderstanding about AI infrastructure today?
So I guess there are a couple of misconceptions, but the top one for me is that we're still associating AI growth with growth in compute. It's no longer a linear function of the number of GPUs a company is buying. There's been so much talk about H100 buying and NVIDIA's Blackwell schedule, but honestly, with the amount of growth we've seen in model inference, with models becoming sparser, with the launch of the DeepSeek-R1 model and the MiniMax 2.5 models, inference on these large models is now extremely sparse. That means it's no longer a GPU bottleneck but an inference bottleneck from an infrastructure standpoint. You need to optimize the entire infrastructure around your model, so the inference layer needs to come up holistically. And this sort of correlates with...
I've seen some market sentiment that compute is gonna grow 10x every year. I don't think that's true. My best estimate is that it's gonna grow 50%-60% in peak periods. We're definitely gonna see a lot of growth in compute, but not at a 10x multiple every year.
Thank you. And Rene, I think you've got a unique perspective. One of the challenges that we highlighted earlier and that's emerging is this power problem, right? Are there any metrics you can reference that help illustrate that infrastructure challenge from a power perspective?
Yeah, power is an interesting one. So everybody thinks, well, we need to save power because power translates into cost. I think the reality is that data centers typically are not so cost-concerned about power; what we see as the challenge is the power scarcity of the grid itself, right? So what happens is, you want to deploy AI servers. AI servers typically consume a factor of 10 more power than general servers, and now you cannot afford additional cooling costs anymore, right? So that power scarcity translates into a change in the data center and how you operate it, right? These servers consume more power. You want to save power. You cannot waste power on the cooling infrastructure. So that leads to a transition from air-cooled infrastructure to liquid-cooled infrastructure.
So power from a deployment perspective is a definite challenge in the data center, simply because the power is not available to the rack. But the bigger problem is that power in the grid becomes more and more constrained, and people transition to liquid-cooled data centers to save cooling power so they can deploy more IT workload.
Got it.
Yeah, and I'd agree with that. So that actually informs part of A10's infrastructure strategy today. We've run into the exact same problems. When you think about the on-premise data centers of old, folks are moving away from those and obviously moving quickly into data centers, to a large degree for that exact reason. We just can't get the power at the street that we need, so we need it fed in a different way. We need to be able to power at scale in a way that we haven't before.
Makes sense. Another one of those infrastructure challenges that Dhrupad highlighted is really around the security challenges. Wondering, Sean, from your perspective, what are the biggest security challenges emerging for customers that are hosting their own data centers?
So I like to think about this in three different ways: one is people, one is complexity, and the other is scale, and if you give me just a moment, I'll go through each of those. Thinking about complexity, I know Dhrupad highlighted some of the AI-related attacks that we see, things like polymorphic malware, for instance, which is AI-driven and changes itself constantly.
Even setting those aside, what we're seeing more and more of as it relates to complexity is just scaling complexity because of the way we're using AI internally, whether that's some sort of an agent, like a ClaudeBot, for instance, or OpenClaw, or things like building out additional infrastructure to support our AI initiatives.
Mm-hmm.
So then you take all the scale that comes along with that, and you're now 10x-ing the efficiency of everyone who works in your organization. So instead of developing, you know, five new products a year, maybe we're working on eight or 12, or maybe it's 40 in some organizations.
Sure.
Now we take the people that are responsible for monitoring and controlling all of that infrastructure and security, and you think about, you know, at A10, what we demand from our infrastructure and security folks is they have to understand the organization better than anyone else.
Mm-hmm
So you've got a certain scale of that type of a person, and now you're asking them to understand and control an infrastructure that's 10x or 100x what it used to be. It becomes a massive challenge for us.
Moving quickly. Yeah.
And adding to Sean's point, it's moving from static rules and pattern matching to semantically interpreting the traffic that's coming through your network, right? A semantic, AI-driven understanding of the traffic going through your network.
Yeah, absolutely. We're seeing a move away from that traditional SIEM solution, looking at those patterns, and into more AI-driven tool sets.
As these enterprises are developing their own AI applications or AI-as-a-service is deployed, Madhav, how should these enterprises think about latency in the data center? Maybe you can touch on it from a consumer standpoint, and then maybe from the AI vendor standpoint, both of those-
Right.
Rene, feel free to add to your perspective as well.
Right. From a consumer standpoint, latency directly correlates with quality of service. So, there are a couple of metrics we measure for AI latency, and one of the most crucial is time to first token. That's essentially how long you're waiting before you actually see the first piece of text from an AI model, and that governs your attention span, whether you engage with the model, and in what form you engage with it. For this, we're roughly looking at less than 500 milliseconds. The other one is the time between tokens, the inter-token latency. And finally, it's about how many tokens you can process in a batch so that you can serve millions of users concurrently.
So these three metrics sort of govern latency in the data center from a consumer standpoint and from an inference standpoint.
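As a rough illustration of the three metrics described here, the following is a minimal Python sketch, not any vendor's actual tooling; the function name and the token timings are illustrative assumptions. It derives time to first token, inter-token latency, and throughput from token arrival times:

```python
def stream_latency_metrics(token_timestamps, request_start):
    """Derive consumer-facing latency metrics from a token stream.

    token_timestamps: arrival time in seconds of each generated token.
    request_start:    the moment the user's request was sent.
    """
    # Time to first token: the wait before any text appears.
    ttft = token_timestamps[0] - request_start
    # Inter-token latency: mean gap between consecutive tokens.
    gaps = [b - a for a, b in zip(token_timestamps, token_timestamps[1:])]
    itl = sum(gaps) / len(gaps) if gaps else 0.0
    # Overall throughput for this one request.
    tps = len(token_timestamps) / (token_timestamps[-1] - request_start)
    return {"ttft_s": ttft, "inter_token_s": itl, "throughput_tok_s": tps}

# Illustrative stream: first token 0.4 s after the request, then one every 30 ms.
timestamps = [0.4 + 0.03 * i for i in range(11)]
metrics = stream_latency_metrics(timestamps, request_start=0.0)
assert metrics["ttft_s"] < 0.5  # within the ~500 ms target mentioned above
```

In practice these numbers would come from a streaming API client rather than a hand-built list, but the arithmetic is the same.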
Yeah. Okay, maybe I chime in here. So we see latency in two directions. If you look at a GPU cluster and how a GPU cluster is built, the intercommunication between GPUs is ultra, ultra fast, right? If you compare that with an HPC cluster, each node in an HPC cluster has a high-speed network link; now, each card has a high-speed link. So the density and the performance of that network inside a cluster, especially if you do training or fine-tuning of foundational models, is extremely high. The other direction is how you consume the service, right? If you have an AI application, I think what is most important there is the quality of service.
We all know this, you know, we use ChatGPT, and we cannot really wait. We want an instant response. And what we see there is, well, the more requests you have per instance, the slower the performance. So bringing up performance, quality of service, and response time leads into smart load balancing of those applications.
And I think there's an interesting challenge with AI agents as well, right? Latency is becoming even more critical because an agent is probably making... Let's suppose it's making 50 requests-
Mm-hmm.
-right? And each request is probably querying 12 other data lakes or databases. So all these additional requests add latency on the order of seconds, so it becomes even more critical, and I think the networking, routing, and load balancing piece becomes even more fundamental with agents.
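To make the compounding concrete, here is a back-of-the-envelope Python model, purely illustrative: the 100 ms per-query latency is an assumed figure, while the 50 steps and 12 queries echo the numbers in the discussion. It shows why parallelizing each step's fan-out matters:

```python
def agent_latency_s(n_steps, fanout, per_call_s, parallel_fanout):
    """Back-of-the-envelope end-to-end latency for an agent pipeline.

    n_steps:         agent steps that must run one after another
    fanout:          data-source queries issued by each step
    per_call_s:      latency of a single query (assumed figure)
    parallel_fanout: whether each step's queries run concurrently
    """
    # Sequential fan-out pays for every query; concurrent fan-out pays
    # (roughly) for the slowest one.
    per_step = per_call_s if parallel_fanout else fanout * per_call_s
    return n_steps * per_step

# Numbers echoing the discussion: 50 steps, ~12 queries each, and an
# assumed 100 ms per query.
sequential = agent_latency_s(50, 12, 0.1, parallel_fanout=False)  # ≈ 60 s
parallel = agent_latency_s(50, 12, 0.1, parallel_fanout=True)     # ≈ 5 s
```

Even this crude model shows an order-of-magnitude gap, which is why routing and concurrency inside agent pipelines get so much attention.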
Yeah, and we touched on this a little bit earlier, along similar lines, but compared to traditional cloud applications, how do these AI workloads change the traffic patterns inside the data center? I know we talk about east-west, I know we talk about north-south. Maybe you can simplify this a little for those not as familiar with those flows and talk about what's changed there.
It's primarily gonna be east-west traffic, especially with agents; that traffic is blowing up exponentially. Further, with larger models being orchestrated across nodes and across GPUs, there's a lot of GPU communication and synchronization. So you're almost seeing traffic in terabits, and load balancing becomes even more fundamental with all these challenges.
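For readers who want a concrete picture of the load-balancing piece, here is a toy Python sketch of one common policy, least connections. The backend names are made up, and real inference load balancers weigh far more signals (queue depth, KV-cache locality, and so on); this is only the core idea:

```python
import heapq

class LeastConnectionsBalancer:
    """Toy least-connections balancer for a pool of inference backends.

    A min-heap of (in-flight count, backend) sends each new request to
    the least-loaded backend; ties fall back to name order.
    """

    def __init__(self, backends):
        self._heap = [(0, b) for b in backends]
        heapq.heapify(self._heap)

    def acquire(self):
        # Pick the least-loaded backend and record one more in-flight request.
        load, backend = heapq.heappop(self._heap)
        heapq.heappush(self._heap, (load + 1, backend))
        return backend

lb = LeastConnectionsBalancer(["gpu-node-a", "gpu-node-b", "gpu-node-c"])
picks = [lb.acquire() for _ in range(6)]
# Six requests spread evenly: each backend ends up handling two.
```

A production balancer would also release connections when requests finish; this sketch only shows the assignment side.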
Yeah, makes sense.
Yeah. I think what we also see, you know, we had the first version of ChatGPT: we ask it something, and it comes back with an answer. Now we have thinking models, right? Under the hood, thinking models are essentially distributed agents that run in parallel but also sequentially. Each segment of latency you introduce by calling an agent sequentially adds up, so you want to bring the response time down to get good performance with these more advanced AI models.
We're impatient, and we're very critical of the facts. We want it accurate, in a very timely fashion. You know, Sean, along those lines, security inspection introduces overhead when you want that kind of accurate picture but also want to make sure security protocols are being followed. So how do these teams, and your team specifically, keep security always on without making performance unpredictable?
Yeah, so this goes right back to what I was saying before. When you think about the amount of volume that is increasing, you're also looking at different types of traffic. We talked a little bit about changes in traffic patterns, right? More east-west traffic, maybe more distributed traffic. But it's not just that, it's also the type of traffic. We talked a little bit about how the old-fashioned SIEM is changing, how we're moving away from simple pattern matching. That's all part of this. We're now digesting much more data than we did before, and we're also looking at types of behaviors that we haven't seen before, and some of that behavior isn't consistent.
So it's a little bit difficult to get a good read on some of that. So at the end of the day, you've got a couple of choices here. One of those is to massively scale your team to try to do their best. But the real answer here is going to be, we're gonna end up fighting more fire with fire, right? So you're seeing more development in this area, specifically in AI, and I think we'll just continue to see that grow over the next few years. Security's always been a bit of a laggard in this. Not a laggard in the technology space, but it's always kind of bolted on after, right, in general. We actually talked about this earlier.
We said, "Oh, well, you know, when cars first came out, they didn't have seat belts," right?
Mm-hmm.
And then somebody said, "Well, the, maybe it's a good idea to add a seat belt." Kind of the same thing here, right?
Mm-hmm.
We're saying we're now developing stuff at scale, and it's time to secure at scale as well.
I think you have to understand it before you start securing it, so the-
That's right.
There's certainly a learning curve there. I-
Maybe one thing, and I think you cannot overemphasize this enough. AI essentially changes how company information becomes available, right? Typically, you have information silos, and access is tightly controlled. Now, with custom AI systems, large language models acquire key information of the company, like processes, trade secrets, everything, and at the same time, the AI system makes that available to everyone if you let people query it, right? So security is coming into a completely new realm, where company core information is at stake. If you cannot protect it, everybody, every employee, can access this information.
And at least from what I see, it's overlooked, but it's such an opportunity, and also such a challenge, to control the information flow and guarantee information integrity and, yeah, containment.
So I'll say one other quick thing here because I wanna tie it back. I mentioned OpenClaw a moment ago. That's exactly the issue with those kinds of systems. Security researchers talk about it as having the trifecta, right? It's scalable, it's almost always overprovisioned, meaning it has too much access, and it's automated. So it's the perfect tool for attackers to go after.
Mm-hmm.
And with agents, like, if you have hundreds of agents that are working in parallel, then security becomes even more fundamental because the blast radius is very, very large.
Yeah, that's right.
Let's suppose you have 50 steps. You fail at Step 37, and you're pretty much compromising a chunk of what you previously computed, and probably a bunch of the infrastructure as well.
Right.
So stepping back from security, there's this perception that AI infrastructure is largely being driven by a small group of companies, these large U.S. hyperscalers. From your vantage point, is that an accurate picture of where the market's heading? Are we gonna see a more distributed global build-out across enterprises, sovereign environments with all of the regulatory concerns, service providers, co-location? I guess, using a baseball term, what inning are we in?
So, we're definitely not in the late innings of the build-out, as some people suggest. I would say we're somewhere late in spring training, to stick with the baseball analogy. And the reason I say that is because people are buying GPUs and building their data centers off all these compute components, but slowly they're realizing that there's a security component to the posture, and a bunch of other components. It's evolving towards more efficient inference, which means you're optimizing for compute per dollar.
Mm-hmm.
You're also optimizing for the cost per million tokens. So it's slowly evolving towards that trend.
I think most AI workloads are actually hosted in the cloud because it's so easy to consume. Essentially, you have access APIs, and you have very robust generative AI systems underneath, right? And there are several drivers, as you mentioned, like the token cost driver, but also data security concerns and regulatory requirements. So there is definitely a trend to move these AI workloads on-prem. Now, the challenge is that on-prem you don't have access to the same type of models, right? So either you build your own, you look into the NVIDIA infrastructure stack, or you use open-source models, and those models need to build up to the same capability as what you have in the cloud. So I think the industry-
Mm-hmm
... is in the process of figuring this out, how you can essentially deploy models with equivalent performance on-prem, and I think once that problem is resolved, I think,
Or-
... the industry will transition.
Are we in a different inning in the U.S. than we are internationally? Talk about outside of the U.S.: where are we on that journey, the infrastructure build-out?
So, inside the U.S., it makes sense, right? The U.S. has sort of a power advantage, and we are seeing these massive amounts of spending. But it would be wrong to attribute all this growth to just the U.S., because there are significant amounts of investment being pumped into other countries as well. Specifically, if you look at Japan, they have a $10 billion AI spending program.
Mm-hmm.
You're seeing a bunch of growth in the Middle East. The UAE has their G42 AI Alliance program, and Saudi Arabia also has a lot of spending and investment in AI. So you'll see these trends emerge elsewhere in the world as well, and I think it's only gonna accelerate with time.
Yeah, and Sean-
I, uh-
You know, I think Dhrupad touched on sovereign AI, you know.
Yeah
... and maybe you can touch on the security concerns. How does that differ globally, and how does it impact what inning we're in there as well?
Yeah, yeah, sure. So one thing I did wanna say: Madhav mentioned a number of countries that are sort of ahead in AI, but I think that's pretty much the list, right? So globally, a massive part of the spend, obviously, is in the U.S. and the other territories that Madhav mentioned. The rest of the world still has a pretty good long way to go. I think what you're driving at is sort of regulation around-
Mm-hmm
... all of this.
Yeah.
We're definitely seeing an uptick, just in questions we're being asked from customers, things that we're, you know, sort of hearing, coming out of U.S. federal and other governments as well, in terms of just: How are we using AI? What is it gonna look like in the future for us? Where's our AI gonna live? Who's gonna train it? Et cetera, et cetera. All of those are, I think, taking shape today.
Mm-hmm.
Certainly, some countries are a little bit further ahead than others. And it's not just about where the data is moving to, right? I mean, obviously, there's maturity around data residency and data privacy, but we're just starting to see maturity around, what does the model actually look like?
Mm-hmm.
How is the model actually using the data? So more to come on that, but I'd say today we're in nascent stages there as well.
Yeah, and adding to that, when I'm speaking to customers, at least, what I'm hearing from them is that they're running GPUs at about 30% utilization, which means that compared to someone who's running more efficiently, you're about a third as efficient as they are, right?
Mm-hmm.
About 90%-100% is where you want to be. That is why the components around the inference layer will slowly start developing a lot more, and once that 30% at least hits 70%, that's when I would say we're at a sustained production workload.
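To illustrate why utilization matters so much economically, here is a simple Python sketch tying utilization to cost per million tokens. The $2/hour GPU cost and 1,000 tokens/second peak throughput are assumed, hypothetical figures, not numbers from the discussion:

```python
def cost_per_million_tokens(gpu_cost_per_hour, peak_tokens_per_sec, utilization):
    """Effective serving cost for one GPU at a given utilization."""
    effective_tps = peak_tokens_per_sec * utilization  # tokens actually served
    tokens_per_hour = effective_tps * 3600
    return gpu_cost_per_hour / tokens_per_hour * 1_000_000

# Same hypothetical hardware ($2/hour, 1,000 tokens/s at full load),
# different utilization:
low_util = cost_per_million_tokens(2.0, 1000, 0.30)   # ≈ $1.85 per 1M tokens
high_util = cost_per_million_tokens(2.0, 1000, 0.70)  # ≈ $0.79 per 1M tokens
```

Moving from 30% to 70% utilization cuts the effective cost per token by more than half on the same hardware, which is the economic force behind the inference-layer investment described above.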
Yeah, sure.
Yeah. Maybe last but not least: current geopolitical shifts, right? They essentially accelerate sovereign AI.
Mm-hmm. Yeah.
That's right.
Yeah, and there's a supply side to this as well, right? I mean, to get an AI workload up and running, you need the components-
Mm-hmm
... and time, too. So the companies that are getting access to those components are able to move a little further along. Last question, and I imagine each of you has a slightly different perspective on this, but this is an investor day, so I had to ask: If investors had to watch just one market indicator to track real, meaningful AI adoption, not the "this model can draw a more realistic cat than that one" kind of thing, what should that indicator be?
For me, personally, I think investors should be looking at whether AI companies are actually disclosing what utilization they're running their GPUs at.
Mm-hmm.
If it's closer to the 70% mark, that's when you know you're in the late innings now.
Mm-hmm.
As I said before. The other one is probably attribution: how much revenue can you actually attribute to AI? I would say right now it's about 8%-15%. You want it to be closer to 30%, and that's when you can say it's actually taken off, both in terms of productivity and in terms of revenue gain for companies.
Rene?
Yeah, well, it's sort of the $100 million question, right? I think it's really difficult to answer, and it's also about what "meaningful AI" is, right? What I came up with is maybe the installed or increasing power consumption for on-prem solutions. If enterprises really decide they're betting on AI, and they turn this on, I think then we really see adoption.
For me, it's just productivity. I think it's a direct tie to adoption. We were kicking this around earlier, and I was asked, "Well, how do you measure the productivity?" I think you measure productivity the exact same way you did before, but now the question is: Were you sort of green before in your productivity, but now you're 2 or 3x-ing that so that you're very green with what you're trying to do as a company? That, to me, marks real adoption.
Sure. No major controversy up here, so I really appreciate the conversation. I think we're out of time, so thank you very much. It's been a great discussion. Can we get a round of applause for our panel? Thank you, guys. You know, what's clear is that AI is amplifying the core demands of infrastructure. Traffic is growing faster, latency tolerance is shrinking, and security is becoming inseparable from performance. So the next question is how these shifts translate into business outcomes. To walk us through that, I'll hand it over to our Chief Financial Officer, Michelle Curran. Michelle?
Thank you, David. What you've heard so far today is how A10 has evolved strategically, and how our portfolio aligns with how customers operate today, including where the infrastructure market is headed with AI. My role this afternoon is to translate that strategy into financial results, and more importantly, into a model that's durable, disciplined, and designed to deliver long-term shareholder value. Let me start by taking a brief look at our most recent performance, which underscores how our strategy is translating into financial results. We finished the year at just under $291 million in revenue, up 11% year-over-year, delivering an Adjusted EBITDA margin of 29.6%, which translated into EPS of $0.90 per share, reflecting both growth and continued operating discipline.
Importantly, when you combine our profitability with our full-year revenue growth, we've achieved the Rule of Forty, which reinforces the strength and sustainability of our financial model. I'm proud to have joined a company that's both growing and profitable, and doing so in such a disciplined way. We'll continue to take that focus and discipline into 2026 and beyond. This slide puts our recent performance into a longer-term context. Our revenues have grown steadily over time. In 2019, revenue was just $213 million, and today we're just below $291 million. We've grown steadily over that time, despite periods of macro and market volatility. In fact, over the last two years, our growth has accelerated, reinforcing the strength of both our portfolio and our execution.
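For reference, the Rule of Forty mentioned here is simple arithmetic: revenue growth rate plus profit margin should total at least 40. A quick sketch using the figures just cited:

```python
def rule_of_40(revenue_growth_pct, ebitda_margin_pct):
    """Rule of 40: growth rate plus profit margin should total at least 40."""
    score = revenue_growth_pct + ebitda_margin_pct
    return score, score >= 40

# Figures cited above: 11% revenue growth and a 29.6% Adjusted EBITDA margin.
score, passes = rule_of_40(11.0, 29.6)  # score ≈ 40.6, passes == True
```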
At the same time, we've expanded Adjusted EBITDA margins from mid-single digits to nearly 30%. This margin expansion reflects operating leverage, disciplined cost management, and improved mix, particularly as next-gen networking and security become a larger part of our business. You can also see this translate directly into earnings. Non-GAAP EPS has increased meaningfully over this period, to $0.90 in 2025, demonstrating that growth and profitability are scaling together. We left the year with a very strong balance sheet, with $378 million in cash and marketable securities. This level of liquidity provides resilience and flexibility. It allows us to continue investing in our business through innovation and go-to-market initiatives, while maintaining discipline and financial strength. At the same time, it positions us to act strategically when opportunities arise, without compromising our long-term margin profile.
With that foundation in place, our focus turns to how we allocate capital thoughtfully to drive sustainable growth and shareholder value. As the company continues to evolve, we are sharpening our financial priorities around three core areas: driving sustainable revenue growth, operating a disciplined business model, and allocating capital to maximize long-term shareholder value. At A10, our approach is to create value grounded in these three areas. First is our revenue growth. Our focus is on driving sustainable, profitable growth as our portfolio continues to shift towards next-generation networking and security, where we see stronger long-term demand and relevance, including participation in AI infrastructure environments. Second is our business model.
We operate with a disciplined operating framework that emphasizes margin expansion, operating leverage, and strong cash generation, even as we continue to invest in innovation and go-to-market execution. Third is our capital allocation. We are deliberate in how we deploy capital, balancing reinvestment in the business with returning capital to our shareholders and maintaining strategic flexibility over time. Together, these three priorities guide our decision-making and ensure that growth, profitability, and capital discipline reinforce one another. I'll now walk through each of these areas, starting with our growth profile. This slide highlights how our portfolio mix has evolved over the past several years, and more importantly, how that evolution positions the company financially going forward. As you can see, we've been deliberately shifting our revenue mix towards higher growth solutions, specifically next-gen networking and network security. In 2022, these categories represented just about half of our revenue.
Today, they account for more than 70%. As Dhrupad mentioned earlier, legacy networking continues to play an important role in our business. It supports large, mission-critical environments that generate meaningful cash flow. However, we expect growth in this solution area to be flat to declining mid-single digits over time, reflecting broader market maturity. At the same time, next-gen networking is growing faster, and we expect growth of approximately 16%-18%. This area benefits from hybrid architectures, software and virtual deployments, and increasing complexity inside modern data centers. Network security represents another growth opportunity for us, and we see growth in this area between 14%-16%. This reflects continued demand for our embedded security, API protection, and always-on availability, particularly as customers scale AI-driven and highly distributed applications.
The result is a portfolio increasingly weighted towards higher growth markets, with stronger, durable characteristics and attractive profit margins. This mix shift supports more stable revenue growth, continued operating leverage, and strong free cash flow generation over time. Across legacy networking, next-gen networking, and network security, revenue is increasingly anchored in a security-first value proposition, reflecting how we've embedded protection directly into our core platforms. We expect security-led revenue across all three solution areas to continue to represent over 65% of our revenue on a go-forward basis. As we look ahead, this portfolio evolution underpins our financial priorities. It supports disciplined growth, reinforces the strength of our business model, and enables continued capital returns while investing in the next phase of our company. This slide highlights how disciplined allocation has translated into meaningful operating leverage over time.
On the left, you can see Adjusted EBITDA absolute dollars growing steadily as revenue has scaled. At the same time, operating expenses as a percentage of revenue has declined meaningfully, reflecting increased productivity and a more efficient cost structure. On the right, that operating leverage shows up clearly in our margin profile. Adjusted EBITDA margins have expanded from low single digits in 2018 to just under 30% in 2025. Importantly, this margin expansion has come from continuous improvements in the business and ongoing productivity, while we dynamically allocate resources to the right opportunities. We've continued to invest in innovation, go-to-market initiatives, and capital support while maintaining cost discipline and focus. With that operating leverage as a context, I'll now turn to how we think about allocating capital to support growth while continuing to deliver strong returns.
The next question is how we deploy the cash that the model generates. Over the last 3 years, our business has produced consistent free cash flow, and we've returned much of that to our shareholders through a combination of dividends and share repurchases. As the chart shows, those returns have increased over time, primarily by share repurchases, while maintaining a steady and predictable dividend. Importantly, this has been achieved without compromising investment in the business. We have continued to protect product development, go-to-market initiatives, and innovation aligned with higher growth solution areas such as next-gen networking, security, and AI-related capabilities. This reflects a balanced approach to capital deployment. We prioritize returning excess cash to our shareholders while preserving flexibility to invest organically and pursue disciplined, strategic opportunities. Let's turn now to allocating capital going forward.
Capital allocation is a critical component of how we create long-term shareholder value, and we approach it with a clear set of priorities. First, organic growth. Our top priority is investing in the business, in innovation and go-to-market initiatives that drive durable, high-return growth, particularly across next-gen networking and security. This is where we see the strongest strategic and financial returns over time. Second, returning capital to our shareholders. When we generate excess capital beyond what we need to fund the business, we're committed to returning it through share buybacks and dividends while maintaining a strong, healthy balance sheet with financial flexibility. Third is strategic M&A. We'll pursue potential acquisitions with discipline, focusing on opportunities that enhance our portfolio, align with our strategic direction, and meet clear financial return thresholds. Across all three priorities, our framework is consistent.
We focus on assets that align with our next-gen networking and security profiles, support a recurring revenue profile, and deliver a payback period of three to four years. Our disciplined approach to capital allocation reinforces growth, profitability, and long-term value creation. With our operating model and capital allocation framework in mind, I'll now turn to our financial outlook and how we see these priorities shaping our performance going forward. Let me close with our financial goals. Our objective is to execute a disciplined growth model in the AI era, balancing top-line expansion with profitability and capital efficiency. For 2026, we're targeting 10%-12% revenue growth, 28%-30% Adjusted EBITDA margins, and non-GAAP EPS growth exceeding revenue growth.
Looking ahead to 2027 and 2028, we expect to sustain 12%+ revenue CAGR, expanded Adjusted EBITDA margins of 28%-30%+, and continue to deliver non-GAAP EPS growth above revenue growth. This reflects a model built on operating leverage, mix improvement, and disciplined capital allocation, allowing us to participate in AI-driven infrastructure demand while maintaining strong profitability and cash generation. That combination of growth and discipline defines how we intend to create long-term shareholder value. With that, I'll now turn it back to Dhrupad to close out today's session and to share his final thoughts on the path ahead. Dhrupad?
Thank you. Thank you, Michelle. In this last section, I'm just gonna bring it back to a summary of what we have heard through the day. What we wanted to highlight is that our business and strategy have evolved with customer needs and architectures. We talked about the centralized infrastructure era, the distributed era, and the intelligent era. You heard a lot of discussion around how many of those factors are already being directly affected by people using AI today, and how they could continue to change. What is important to take away is that, through all these periods, we have maintained focus on our differentiators, on what we know how to do better, and actually increased our investment in areas aligned to our differentiation, continuing to create customer value and to develop a sustained business around that capability.
If you look at the things we know how to do well, around low latency, reliability, uptime, and security, as the progression of the network continues, these are more relevant than ever before. What is also important to note, when we offer an alternate perspective on how to think of our business segments, is that in all of those segments, all these differentiators are equally valid. It just happens to align with transitions from one type of network to another, and we are mindful and conscious of where we choose to align with our customers' growth areas. So this continues to be a foundation, and we continue to focus on what we can differentiate and how we can apply that to new problems that customers are facing. We talked a little bit about driving future growth.
Michelle gave a little bit of perspective on that. We certainly have talked about 2026, and we see a path that is sustainable, disciplined, and aligned with strategy and customers to deliver a consistent CAGR of 12% or more in the outer years. Second, because of the nature of our business model and how we have executed it, and I've been here about six years now, it is not something we do differently every year, right? It is an operating system. This is how we run the business. This is how we make decisions. And so we are also comfortable saying, "Yes, we will invest in AI," but that doesn't mean our business model is suspended for two years. Just like for the last three years, we invested heavily in cybersecurity, and actually our EBITDA margin expanded, not contracted, right?
So our model is inherently run in a way where I look at OpEx productivity in addition to margin flow-through, which therefore means EPS grows faster. It's as simple as that. An interesting data point for you might be that even last year, full year 2025, compared to 2024, we actually increased R&D significantly while expanding EBITDA percentage, right? So that's an important thing. We certainly heard some of the conversations around AI and security trends, and like everyone else, we are very optimistic. We are highly engaged with customers who have ambitious plans to use AI for business gains, but the timing is not exactly clear, and there are a lot of things that drive it, including regulation, power, GPU shortages, all kinds of things.
We certainly believe that as that market matures and as more and more enterprises actually use AI, versus just building out large training models, that is the inflection point for us, and that is where we are engaged with our customers on how they best get business results when they use AI. So we certainly see security and AI infrastructure in the out years as potential tailwinds to our growth trajectory. Similarly, part of our business is exposed to cyclical CapEx spending by service providers, which, by definition, is cyclical, and we have seen a somewhat depressed period for some time now. We can't project exactly when it turns up or down, but that is, again, an area where we are deeply designed into these networks.
We are very close to these customers and fully expect that when that pattern resumes, we should be in a good position to help them continue their build-outs as well. And lastly, as a snapshot, here is what I want to leave you with. First, as a company, our mission is enabling a secure and available digital world, and you heard a lot today that ties into that theme. We offer a differentiated platform at the intersection of infrastructure, security, and AI. And what I would say again is unique is that these are not disparate products we have accumulated over time.
This is built on a common software foundation and a unified control plane, and that is why it provides the level of performance it does, trusted by the largest companies in the world. Second, we are positioned for secular infrastructure expansion as it unfolds over the next few years. Third, if you go back and look at our performance across a variety of metrics on the income statement, cash flow, or balance sheet, you will see a sustained, steady progression of improvement, versus one year being completely great and the next completely off, right? Yes, things can happen that are beyond our ability to adjust to, but to the best of our abilities, that is an extremely important goal for us to achieve.
And durable cash generation obviously is important as it ties into our ability to drive the business, invest, and continue to balance investor needs, customer needs, and employee needs as well. And last, you've seen a disciplined capital allocation methodology. First priority, organic growth. Second priority, making sure we balance with investors. Third priority, be opportunistic but very strategic, because doing bad M&A is much worse than doing no M&A. So, we continue to be disciplined and thoughtful around the business structure and environment we have put in place, using that as a platform to grow, versus doing something totally different. Hopefully, that gives you a good snapshot of the company and where we are.
Some of you obviously have known the company for a long time, so this is hopefully a good refresher on where we have been the last few years. Before I conclude, I want to thank all the 8,000 employees around the world, because a lot of the results you've seen are a direct result of their hard work, dedication, focus, and, I would say, commitment to buying into a new business system, believing in it, and doing it, right? I think that's critical for what we have achieved. Second, I want to thank our customers, who continue to trust us with very critical applications all over the world.
Lastly, obviously, I want to thank our investors for your continued interest in A10, and hopefully, we can continue to do our work to provide you with a good option that you can count upon, right? Delivering a balanced model of growth and profitability with consistency. Thanks, and now I'll hand it over to David. Thank you.
Thank you, everyone. We've got a Q&A session scheduled here, and we've got some submitted webcast questions. If you have a question and you're logged into the webcast, we'd love for you to use that portal to submit it, and we'll try to address it while we're up here. We have Michelle and Dhrupad available to answer those questions. And for those in the room, we have Jeff in the back, who will be walking around with a microphone; if you raise your hand, Jeff will bring the microphone to you. I'll start with a question here. The first is a couple of questions from Michael Romanelli at Mizuho.
We've, of course, seen a number of AI-related product announcements and initiatives across networking and cybersecurity. Michael's wondering if we can elaborate on what is differentiated or unique about our AI-powered predictive analytics, our AI prompt and traffic routing, and our AI inference security solutions. Anything you can talk about there? And then we have a couple of follow-up questions.
Sure, yeah. So what I would tie it back to is continuing to focus on what is differentiating and what we know how to do. As a company that has worked with about 7,000 customers around the world for 20 years, understanding in extreme detail network traffic, patterns, and packets, our ability to now take contemporary AI tools to look at that data, know what is important and what is less important, and create a view to predict performance or capacity planning, is something that would be very, very difficult for an AI company that doesn't know anything about network traffic, right?
So there is domain knowledge upon which we are building, and we are using AI as a tool to apply that domain knowledge to make business decisions, not doing AI because we can do AI, right? That's one dimension. Similarly, not to go too long, but one more example: our platform, hardware and software, that allows us to deliver low latency, high throughput, and the ability to process packets in-line, is the platform we are now using to review things like AI prompts, leaks of PII information, et cetera. So we are not saying we are making a new AI product.
What we are saying is we can take a platform that has been developed over 20 years and is proven to work at scale, at line speed, and use it to solve a new AI problem, which may be that people want to know if there is a risk of their employees using ChatGPT and leaking data. So once again, it is building on our differentiation, taking what we know how to do, and using AI as a capability, versus saying we need to make an AI product, right? That's not at all what we are doing. So...
And Michael follows up with: when we talk about phase three, that intelligent era, we have talked about investing in our AI portfolio.
Mm-hmm.
And our AI capabilities organically. He's wondering: does this require any incremental investment from A10, any significant changes to the go-to-market strategy? That's part one. And along those lines, how is A10 leveraging AI internally to help improve our platform?
Okay, so let me go in reverse order. Maybe not reverse order. I'll go in the same order.
Yeah, sure.
So, first question: we have been doing AI for the last two years, and our gross margin and EBITDA margins have expanded. At the same time, we have increased R&D in AI, right? So our business model is designed to shift our investments toward high-growth, higher-priority areas while we continue to deliver financial performance. If you go back to the numbers we showed, we are showing the exact same EBITDA percentage for 2026, so there is no dilution of the business model. We are never going to tell investors, "Hey, we are doing AI, so don't worry about results for a while," right? That's not what we are doing. If anything, we are improving them. So that's the first question.
The first one, and then how are we using it internally at A10?
I think we are using it like many companies, right? Across a variety of areas. In our software engineering team, they use AI for code inspection, quality assurance, and testing. There are a lot of ways they have been using it for about two years now.
Mm-hmm.
We are using it in our support organization to service our customers faster, diagnosing problems and solving them. So there are a lot of ways we are using it internally as well. So...
Sure. Thanks, Dhrupad.
Yeah.
Both Michael and Anja Soderstrom from Sidoti are commenting that our 2027 through 2028 outlook is encouraging. They're wondering if we can elaborate on the underlying assumptions in our framework, particularly how we get to 12%+ revenue growth and the components of that. How does the macro play into that? How does share capture play into that? And as one quick follow-up, is this dependent on service providers returning, or on our AI portfolio? Kind of expand upon that.
Yeah. So it is not dependent on service providers returning or not. I mean, obviously, every single year in every business, there are things that go better than you thought and things that go worse. So there's no way to conclusively say what will happen in 2028. But for 2026, we are expecting growth of 10%-12%.
Mm-hmm.
For 2027 and 2028, it's 12%-plus, which could be slightly more than 12% as well.
Mm-hmm.
The drivers are very simple, right? We think the AI infrastructure build-out is going to move slowly from the initial phase of huge build-out to more even growth as enterprises adopt it around the world, and it's hard to say the exact timing or quarter when that happens, but that's the timeframe where we would see it. And second, as we see growth in new security threats and problems, and we are engaged with customers on those solutions, that is also a tailwind-
Mm-hmm
... going into that, right? There's no implicit assumption on service provider spending in that.
Yep. And there are a couple of questions around segmentation and the solution areas. Michelle, maybe you can comment briefly: what are the 2026 and the longer-term 2027-2028 projections for enterprise versus SP in our traditional segmentation? And then maybe, Dhrupad, you can expand on whether we're planning to report on these new solution areas, which is one of the questions that came in as well.
Yeah, and maybe I'll start with that-
Sure, sure.
Michelle can answer that.
Yeah.
I think our objective is to give investors more insight into what is driving our business, where we see upside, where we see opportunities, and how we prioritize investing in the business. We do not plan to change our financial reporting segments. We plan to also show a view through this lens for a period of time, so that investors get both perspectives without us suspending something we do today, right? That's not the objective.
Mm-hmm.
It's to provide what we think is a better indicator for understanding our business, because the traditional notion of service provider versus enterprise is more and more blurred, as most service providers also sell to enterprises. So we will continue to report that, we're not gonna stop, but this is an incremental view, not a financial reporting segment, that will help investors understand the business.
I think it creates greater transparency. To get to the other part of the question, right now we see the enterprise segment outpacing growth in service providers. We expect enterprise to grow between 15%-20%, and we see the service provider segment growing between 5%-10%, unless we see a return of the Tier 1 telcos.
Mm-hmm. And along the lines of growth, Hamed asks: what is the customer assumption in your revenue outlook, specifically not enterprise versus SP, but new versus existing customers?
Yeah. So I think that's a good question. Once again, you should think of that as the solutions we provide to all our customers. So there is no change in assumptions. We have customers today who buy legacy and next-gen. We have customers today who buy next-gen and security, and we have customers who buy all three. So there's not a new customer. This is simply creating a point of view that gives investors a better view of where we are investing, what is growing, and why. And we don't expect a change in go-to-market because there's no change in customer. Obviously, as we develop our capabilities around AI, we will continue to assess and supplement our teams with incremental expertise, whether technical, selling, or another function, to be successful, right?
But we don't expect the customer list you saw before to be different because of that.
Sure. Yep, thanks. A couple more. Hendi is interested in our pre-AI and post-AI data center infrastructure solutions, specifically around average deal size. Does the deal size expand once we introduce our new AI portfolio? We've noted, obviously, that the AI upside is not captured in our base case of 12%+, but maybe you can comment quickly on the deal size difference between the two.
Yeah. I would differentiate that, though, right? I would be clear that our products are already being incorporated into AI infrastructure build-outs.
Mm-hmm.
Right? So it's not that we have no play in AI until we release something in two years, right? That's not the case. If you think of strategy, you have to pivot around either a market, a customer, or a product. When we look at our customers who want to deploy AI, if they were going to spend, say, $100, with the AI firewall the same customer may spend $120. But it's not that we have to find a new customer for the $20, right?
Sure.
So it's a share of wallet expansion, it's not a wholesale replacement of portfolio, so...
Got it. I think we had a couple more online. Again, if you have a question in the room, just raise your hand, and Jeff will bring you over the microphone. Yep, we've got one in the room, so one second.
You talked a little bit earlier, as it relates to the AI revenue opportunity, about how you're designed in pretty early in the cycle, and so it's really just a matter of network traffic volumes continuing to scale with AI adoption. I guess when it comes to that design-in opportunity, are we in the midst of that today? Do you already have these design wins in the bag and it's just a matter of volume scaling, or is there opportunity for incremental design wins?
Yeah, great question. So I would say we are very early in that process today. We are in trials with customers. They are evaluating the technology roadmap. They are running it in their networks and giving us information. So it's pretty early in that phase. And certainly, we would expect to get more design wins as well, but even the engagements we are in will take time to mature over the next one or two years, right? Today they are testing it in the lab, in the new network configuration, and giving us feedback, and then, as you noted correctly, we get into that early phase where we can say we are in this many sockets already, and it will just scale from there.
It's pretty early in that cycle. Yeah.
I think there are two more questions online that we should answer. Hendi is asking about the competitive landscape across those three solution areas. Who do we primarily see, maybe as a reorientation, since we're not talking about enterprise versus SP specifically, but about legacy networking, next-generation networking, and network security?
Yeah, so, I mean, as I said before, it's more to give investors a better view of how we sell, how we go to market, and our solution set. There's no significant change. We still compete with NetScout, and F5, and Juniper, and companies like that. But where we are doing new, unique AI solutions, there's no one to compare it to, right? So that is open space.
Mm-hmm.
Because we are innovating and doing new things, that's not even a category yet. But other than that, there's no-
Mm-hmm
... change in that landscape. I think we are focused on finding the best ways to grow, and we'll compete with different competitors in different places.
Got it. Gray, alongside Trevor at BTIG, asks: when we talk about those different solution sets, legacy networking, next-generation networking, and security, is there an uplift if the customer takes all three solutions? Is there a typical uplift from legacy networking to next-generation networking? Maybe you can talk a little bit about that.
Yeah. No, I think that's a good question, and obviously, that's our goal, right? We want an existing or new customer to buy as many categories as they can. Typically, we see a good attach between next-generation networking and security. We would also probably see a good attach between legacy and next-gen, because the same customer can be in both, right? In one case, they are spending less on one product and more on the other. So-
Mm-hmm
... and in many cases, one of the things we highlighted is, my approach is really that we don't go to customers and say, "You've got to replace everything you've got if you want all this new stuff," right? In many cases, we are actually enabling our customers to interoperate legacy and next-gen networks and equipment, because that is in their best economic interest, and we want to be aligned with them and support whatever is driving that. So, yes, absolutely, our goal is that there's a natural connection of security to everything, but there's not a strong decoupling, right? Many of our large customers buy legacy as well as next-gen.
Thanks, Shervin. The last question is the easiest of all: can you post the slide deck to the IR page, please? Yes, we will do that. Our slide deck will be available on our website, and the webcast will be available for the next four weeks as well. I think that wraps up our Q&A session. Again, thank you to everyone who came here to San Francisco to join us for this event. We really appreciate your attendance. Thank you to our special guests from AMAX Engineering-
Yeah
... and the A10 employees who contributed to this great event. We'll be around here for a little bit to answer any other questions in the room. But thanks, everyone online, for joining us for this webcast, and we'll speak to you soon. Thank you.
Thank you.
Thanks, everybody.