Hi, everybody. Tim Long, Barclays, analyst covering IT hardware and comm equipment. Thanks for joining us for F5 today. We have François and Tom. Thanks for coming, gentlemen. We appreciate it. Let's see. Do you have to do the Safe Harbor, or do you want to go first?
Yeah. Thank you, Tim.
Yeah.
So before we respond to your questions, I need to get our Safe Harbor on record. Please note that our discussion today may contain forward-looking statements which involve uncertainties and risks. Our actual results may differ materially from those expressed or implied by these statements. Please see our SEC filings for more information on these risk factors. Thank you.
Great. Thank you. You know, I had a little discussion with François outside; we're seeing a lot of AI discussion today, so we'll start there for those that haven't heard the full spiel. Maybe you guys could just talk a little bit about F5's position, touching on both the hardware and software, products and services, that you're feeding into the AI opportunity.
Yeah. So, Tim, first of all, you know, we are seeing across our enterprise customer base that, with time, the vast majority of our customers, if not all, will deploy AI applications. The question is, how will they deploy these AI applications? There's a separation clearly today, which is that we think the vast majority of our customers will not actually build dedicated self-hosted GPU infrastructure. That will be reserved for a small minority of customers that have very large needs and want to spend millions or tens of millions of dollars in building their own GPU infrastructure, so I'm going to segment these two sets of our customers 'cause the opportunity's different, so let me start with the vast majority of our customers. Even though they will not deploy dedicated GPU infrastructure, as I said, they will all deploy AI applications.
And what's top of mind for them is: how do I connect my data to my AI applications, or to my AI model, wherever that model may reside? And that creates a net new opportunity for F5, which is basically connecting data to AI. Specifically, an example of that is: if you're running an AI application and you are inferencing with it, increasingly customers want to augment an LLM with proprietary data. The use case is called RAG, retrieval-augmented generation. And in doing that, they need to access data stores in the enterprise. Accessing those data stores requires a lot of read requests, which then require high-performance load balancing.
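The RAG flow being described, pulling proprietary records out of an enterprise data store and feeding them to the model alongside the question, can be sketched minimally. This is a toy illustration, not any F5 or vendor API: the keyword-overlap retriever and the document contents are hypothetical stand-ins for the embedding-based retrieval a real system would use.

```python
# Toy sketch of retrieval-augmented generation (RAG):
# 1) retrieve the most relevant documents from an enterprise data store,
# 2) prepend them to the user's question as context for the LLM.
# The retriever is a naive keyword-overlap scorer; the data-store reads
# it stands in for are what drive the load-balancing need discussed above.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most words with the query."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Augment the user query with retrieved proprietary context."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical siloed enterprise data, now networked into one retrieval pool.
docs = [
    "Q3 sales data shows strong renewals in EMEA.",
    "Customer support tickets spiked after the last release.",
    "Marketing campaign data for the spring launch.",
]
prompt = build_prompt("What do renewals look like in EMEA sales?", docs)
```

The prompt would then be sent to the LLM; every call like this fans out read requests against the underlying data stores, which is where the load-balancing bottleneck appears.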
To your question, that opportunity is a hardware opportunity for F5, and we're seeing it as an immediate opportunity for the company; a lot of customers eventually will have to access these data stores this way. The same use case exists when a model that resides in the cloud and is being trained or refined needs to access proprietary data; there's also a need for high-performance load balancing in front of those data stores. So those are opportunities that are emerging for companies that are not building dedicated GPU infrastructure. The second opportunity for these customers is more traditional web application security in front of their AI applications. AI applications are modern applications, like other applications we support, and we need to provide security in front of them. That's likely to be a software opportunity for F5.
Then you go to companies that will build dedicated GPU infrastructure. I think that's going to be a highly concentrated set of customers. For F5 today, it's not the hyperscalers; the hyperscalers generally have homegrown technology. It is very large enterprises that have a significant amount of data, for whom AI is really important to their business model in the near term, and that therefore want to build their own dedicated infrastructure. For those customers, there is a need to load balance their GPU clusters to improve the performance of inferencing, or even of training these models. That's a hardware opportunity, again, for F5. And then if we go further than that, inside of these GPU clusters there is a need to provide traffic management to enable multi-tenancy, to enable faster movement of data, and therefore better utilization of GPUs.
And that is an opportunity that we will serve with software. And that actually is the reason we have talked about a partnership with NVIDIA, is precisely to serve and improve GPU utilization inside of these GPU clusters.
Okay. Great. Yeah, that's helpful. Suzanne and I were joking about the whole hardware-software thing. I've never understood Wall Street's craziness about the two when they're both very high margin. But is this much more of a hardware opportunity because of the speeds and, you know, technical capabilities that you need with your purpose-built hardware? Is that why we're seeing a little bit more of a hardware bent to the potential AI business?
I think initially, yes, because, you know, load balancing becomes more important when the load increases. And in the case of customers that want to retrieve data from data stores, what is changing is that they need to access these data stores way more frequently, with way more read requests, and that increases the load. Therefore, high-performance load balancing served with purpose-built hardware makes a lot of sense for these use cases. And that's what we're seeing with the customers that are starting to deploy AI applications; that's a bottleneck they need to resolve to improve the performance of the application.
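At its core, load balancing in front of data stores means fanning a flood of read requests across replicas so no single node becomes the bottleneck. A minimal round-robin sketch follows; the backend names are hypothetical and nothing here is F5-specific. Purpose-built hardware does this same basic job at line rate, with health checks, connection pooling, and far higher throughput layered on top.

```python
from itertools import cycle

# Minimal round-robin load balancer: rotate read requests across
# data-store replicas so the RAG read load is spread evenly.
class RoundRobinBalancer:
    def __init__(self, backends: list[str]):
        self._pool = cycle(backends)  # endless rotation over the replicas

    def pick(self) -> str:
        """Return the next backend in rotation."""
        return next(self._pool)

lb = RoundRobinBalancer(["store-1", "store-2", "store-3"])
assignments = [lb.pick() for _ in range(6)]
# Each of the three replicas receives an equal share of the six requests.
```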
Okay. Great. And you mentioned the NVIDIA partnership. Can you just talk a little bit about, you know, kind of how that was formed and what you think the opportunity is there and kind of timeline of that impacting F5?
Yes. So of course, in terms of implementation, it's very early days, but the need here emerges from the fact that, as you know, GPUs are quite expensive. And so the companies that are building dedicated GPU infrastructure want to make sure that the utilization of this scarce resource is as high as possible. And today, the utilization of GPUs isn't great, largely because it has been difficult to get data to these GPUs at the speed it needs to get there. So we have built traffic management software that improves that problem significantly. It's the traffic management software that has existed in BIG-IP for years, but we have refactored it for these environments, specifically to work on ARM architectures and in Kubernetes environments, which is how these infrastructures are built.
And so the idea here is that when you combine our traffic management software with NVIDIA DPUs, they become AI accelerators, and they improve the utilization of GPUs, but they also enable multi-tenancy, which is quite important. When you're building GPU infrastructure, you want to make sure that these GPUs can be used by multiple applications that are independent from one another. So multi-tenancy is really important, and that's what we enable when you combine these two technologies. Now, we're still in the process of fully validating the performance of the joint solution. It's not yet generally available to customers; that will happen later this year. And when that happens, there are a lot of questions around what the potential commercial model and go-to-market model for the joint solution will be, which we have not solved yet.
So this is why I say it's early days, and I think it's down the road that we will see this opportunity, hopefully materialize.
I'm guessing that if I'm not an NVIDIA AI user and I'm using some kind of custom ASICs, that same type of application would be needed, the same traffic management, so it's not just an NVIDIA stack that needs it.
One way or the other, to get the utilization up, to get multi-tenancy, one would need some form of traffic management.
Yeah.
Capabilities that do that. And, you know, of course, we believe that we are clearly not just the market leader but the technology leader in the space.
Right. Right. Okay. Great. Maybe last quick one. Does it matter, at the networking layer, whether it's Ethernet or InfiniBand or any other type of interconnect? You still need a similar type of traffic management?
We're agnostic to that.
Yeah. Okay. Excellent. Maybe we could touch on software. I think one of the big takeaways over the last quarter or two has been kind of the return of renewals and deal flow. So maybe touch on that: what do you think's driving it? Are there specific products or customer sets, or is it just, you know, the macro getting better? How do you view the recovery that we've seen in this software business?
Yeah. So maybe I'll take that one. You know, let me start by level setting a little bit about our outlook and how we see things. We've guided to 4% to 5% growth for FY 2025, and we said that it would be back-end weighted. I think this is a lot like what we saw in FY 2024, which was similarly quite back-end loaded. And that's really a function of the visibility that we have around these renewals. You know, a majority of our business now is software, the majority of the software business is subscriptions, and most of our subscription agreements have a three-year life cycle.
And so when you look back over time, we really started introducing these in FY 2018, but they really picked up a lot of momentum in FY 2019. So when you look at that cohort of customers, they came up for renewal in FY 2022, and they're gonna do it again here in FY 2025. That gives us a lot of visibility into the timing of these renewals. The renewals have performed very well for us, even in more challenging economic cycles; we've performed largely to plan on them. And as we look at the back part of last year, Q3 and Q4, we in fact started to see even greater expansion than what we had been planning for.
I think this is largely a reflection of these being really good moments in time for our sales teams to go in and share with customers the broader portfolio that we have. We've spent a lot of time over the last several years broadening the range of offerings that we're able to offer customers, and so these renewal moments are excellent times to share the broader portfolio and the set of solutions we're able to bring, and we're seeing expansion as a result of that.
Okay. Maybe you could touch on the cross-sell opportunities now that you have, you know, more security-based products, or NGINX, and just a lot of different tools that you can add when these things are up for renewal. How does that conversation normally go with customers, and what are some common packages or upsells that you guys would see?
Yeah. So I think we're really excited about the growth opportunity at these renewal events, and there are kind of three different levers that we think about pulling. The first is really around the number of applications that the customer is using. The second is really the bandwidth and consumption and usage of that application. Both of those sort of drive incremental usage of our solutions, but then a lot of the focus that we've really been emphasizing over the last couple of years is around upsell and cross-sell. The upsell opportunity is within a product family. We have a very rich set of offerings. So, oftentimes a customer that maybe is a BIG-IP security customer will be looking for more advanced security functionality. We've got a very deep security portfolio now that we're able to offer those customers.
And then increasingly, there's really this big cross-sell opportunity. A lot of our customers are BIG-IP customers; we have 20,000 customers among the largest enterprises, service providers, and governments in the world. And the opportunity in front of us is to cross-sell them the other products in our portfolio. Maybe to illustrate with our SaaS offering: we launched our Distributed Cloud platform in February of 2022, and we finished last year with over 800 customers on it. I think it's a great example of the cross-sell opportunity we have, given both the 20,000 customers and a very rich set of application security and delivery offerings.
Okay. Yeah. I think in the early days of these term subscriptions, that second bullet point on consumption was the one that really exceeded what you guys were thinking, and that caused the true-forwards and some of the funky seasonality in the business. Has that business matured enough now that it's generally a little bit more predictable on a consumption and usage basis, or are you still coming across customers that just see a lot more utility in it and are willing and able to consume at a much higher rate?
Yeah. So I think we've gotten much better about sizing these opportunities upfront. And then we see really nice and consistent expansion for many of those customers around the usage of the products. Many of these, remember, are multi-year, three-year subscription agreements, and in that commercial construct, we get really good visibility into their actual usage. So we know in year two what they actually used in year one, and similarly on down the line. That gives us good visibility, and we're seeing consistency there. The piece to highlight, though, is that these are term subscriptions. When they come up for renewal, the revenue recognition follows ASC 606, and under ASC 606 the majority of it is recognized at the start of the contract term.
There is a services renewal component that then is ratably recognized over the duration of the contract. But a lot of it is at the initial contract term. And so that's what creates some of that unevenness quarter to quarter. It really is not around the usage. It ends up being much more about sort of the timing of when these transactions are completed.
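The quarter-to-quarter unevenness Tom describes can be made concrete with hypothetical numbers: under ASC 606, the term-license portion of a multi-year subscription is recognized at the start of the term, while only the support/services portion is spread ratably. All figures below are illustrative assumptions, not F5 disclosures.

```python
# Hypothetical ASC 606 revenue split for a three-year term subscription.
# Illustrative only: a $900k contract where $600k is the term license
# (recognized upfront) and $300k is support/services (recognized ratably).
contract_value = 900_000
license_portion = 600_000   # recognized at the start of the contract term
services_portion = 300_000  # recognized evenly over the three years
years = 3

rev_year1 = license_portion + services_portion / years
rev_year2 = services_portion / years
rev_year3 = services_portion / years
# Year 1 books $700k while years 2 and 3 book $100k each, which is why
# quarters with many renewals look lumpy even though usage is steady.
```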
Right. Okay. Yeah. As a hardware guy, it was a lot of fun to learn about 606; it changed my modeling. You did mention, Tom, the Distributed Cloud and the 800 customers. Could you or François just talk a little bit about the complexion of that? I know there's the transition from Silverline, but what are you seeing with new activity and new customers? Are there any other meaningful things that you think will make this business much more successful than maybe Silverline had been?
So I think we're very excited about the customer momentum that we're seeing with our Distributed Cloud platform. You know, we launched it in 2022, and today we're at 800 customers; we feel really good about that momentum. When you look at the characterization of those customers, about a third of them are new customers to F5. These are customers that weren't buying any of our other solutions; in many cases they were using competitive solutions, but they were attracted to the benefits and the value proposition that Distributed Cloud provides, and so they've come to us. And maybe worth noting, those customers have a very similar profile to our existing customers, so they're usually large enterprises and service providers. The remaining two-thirds, though, are actually existing F5 customers.
To me, it's part of the kind of power of the expansion opportunity that we have, that we've been able to drive so many customers to this new platform in a relatively short period of time. You know, these customers are broad in that they represent sort of all different types of our existing installed base customers. What we're seeing is that they start with us in a small part of their environment, around a specific set of use cases, and then grow from there. We're quite encouraged. We've got a number of great examples of expansion of customers that have come to Distributed Cloud platform from BIG-IP. They're often using it still, and then add Distributed Cloud into their mix, and then start to expand on Distributed Cloud once they've adopted it in part of their organization.
Okay.
Yeah. And I would add, to your question, that Distributed Cloud already has more customers than we ever had on Silverline. The customer adoption continues to grow, and we are really excited about what we're seeing there. If you pull way up and think about what we have framed as the macro problem for our customers, which is the ball of fire: we have said our customers are now living firmly in hybrid multi-cloud environments that are here to stay. The big complexity for them is how to secure and deliver apps across this hybrid multi-cloud environment. Distributed Cloud is actually critical to resolving that complexity for our customers, because it allows us to secure any app or API across any one of these environments. We have customers that start small and then add more capabilities over time. That's kind of the baseline.
AI is essentially adding data to the ball of fire. The ball of fire is getting hotter, because data is not something that F5 had to network in the past. But when you're trying to use generative AI with your proprietary data, you have data that in the past was siloed and did not need to be networked. I have my sales data in Salesforce, for example. I have my customer support data in a different system. I have my marketing data in a different system. They live in their silos. Now you're trying to create a connected data lake of all of this so that your large language model can take advantage of your 360-degree data across your enterprise.
You have to make it easier for this data to get networked, and Distributed Cloud enables that. So it's another use case that is emerging for Distributed Cloud, because this ball of fire is being exacerbated by having to network not just apps and APIs, but data as well.
Okay. Maybe one more on software: NGINX as a service. You know, I think this has been a good acquisition; if there are any questions, they've probably been more about the monetization of it. I know it's a relatively new offering, but what's the early feedback, and what do you think it does for this part of the business?
So NGINX, as you know, has kind of become the primary complement to Kubernetes. Kubernetes as a technology does very well for orchestration, but when it comes to networking and security, NGINX is really the ideal complement. And so for all these modern apps that are deployed in Kubernetes environments, either on-prem or in the cloud, NGINX has been really successful. Over time we've added security capabilities, so NGINX is solving more security use cases. However, all of these deployments have been done with NGINX as a deployed product: it's software you buy, you own, you patch, you manage.
Working with Microsoft, we saw that they had a lot of customers using NGINX who wanted to use NGINX as a service in the Microsoft cloud. And so we worked with Microsoft to make NGINX as a service essentially a first-party offering in the Azure environment, and we're seeing good customer adoption. It is early days, so the revenues are not yet meaningful, but the growth and the speed of customer adoption and consumption are moving pretty quickly, so we're excited about what we're seeing there: a cloud implementation, if you will, of NGINX purely as a service.
Okay. Maybe pivoting to security for a little bit. I think when you gave the numbers, it was relatively flat last year, but there have been funky numbers with the pandemic and whatnot. So maybe just talk about your security opportunity, standalone and bundled. You have a lot of different offerings, and it should be a relatively higher-growth end market for F5. So maybe talk about the opportunities there and when we should see that returning to more reasonable growth.
Sure. So, you know, I think we've got a very strong portfolio in security, principally around a set of technologies called web application and API protection; you'll hear people refer to these as WAAP solutions. It's made up of four separate subcomponents: a web application firewall, distributed denial-of-service protection, anti-bot, and API security. The reason it's so important to put all of these together is that customers really wanna solve their full set of challenges around securing and managing their applications. We've got a very competitive set of capabilities there, with a lot of depth in each of those individual areas, and we've brought them all together into a unified set of solutions that are available on top of our Distributed Cloud platform.
Within that portfolio, I think API security is one that is getting quite a lot of customer interest. Customers generally are finding that APIs are not something they have managed very well. We have particular depth in API security: we do everything from what people refer to as Shift Left, all the way up into the way the applications get developed, through to Shield Right, being able to provide runtime enforcement. So we have a differentiated WAAP offering broadly, but we have quite a lot of capabilities specifically in API security within that family of offerings.
Okay. Great. Maybe just a quick one on hardware. I think the guidance calls for growth this year; it's been challenged, you know, again with backlog and everything. And François, you did talk about how hardware's playing more of a role in the AI portion, but more broadly around hardware, I know there's been some competitor dislocation as well. So what's your outlook there, and what do you think are gonna be the key drivers of that hardware business over the next year or two?
Certainly. You know, what we're seeing is that the refresh pipeline on our hardware is pretty strong. We think there are a number of customers that, especially in the 2023 timeframe with the macro, had really delayed their refresh, perhaps while the capacity needs of their applications were growing. Now it seems there's more stability in the macro, at least more stability in budgets, and a little less uncertainty, and we're seeing customers more willing to move forward with their refresh. So even if you put AI to the side, a dynamic that is a tailwind to our hardware business this year is a strong refresh pipeline. And I think the other dynamic is hybrid multi-cloud.
If you go back seven years in time, Tim, we had a situation where there was so much excitement about the cloud that some customers were thinking, "No, I'll be all in the public cloud in the future. I won't have any data center, so I have to be careful about investing in hardware again." They went from that to saying, "Actually, I'm going to be in hybrid multi-cloud, but that may be a transitional state," to now, where most customers are saying, "I'm going to be in hybrid multi-cloud, and that's my end state. Therefore, on-prem environments are part of my future state." And so, even broadly in the industry, you're seeing data center investment actually increasing at the moment, in part because of AI.
I think because of that, customers are more comfortable saying, "Not only am I gonna refresh my hardware, but I'm going to augment my capacity to support this new use case." Those are two tailwinds for our hardware business, certainly for the next two quarters.
Okay. Maybe a few kind of customer areas. Federal government's always been an important business for F5. You know, with the change in administration, do you think there are going to be more positive drivers there? And it seems like we're at the point where we're going to see a lot more sovereign AI-type applications, so back to the AI piece. What's your outlook for that business, that segment?
You know, look, that segment in any given quarter is, I think, 6% to 12% of our total product bookings; that's a way of thinking about it. It's too early to have a sense of whether this new administration is going to affect that part of our business positively or negatively. We haven't seen a fundamental change to date in the spending patterns in that segment. But what we do know is that F5 equipment in that segment is used to drive efficiencies and help customers protect and secure their applications in the most efficient way possible. We don't think that spend or that need is going to go away with a new administration. So that's.
Okay.
That's where I'll leave that.
Okay. Great. And then telco is another one where it's been a rough go for most companies in that area, but it does feel like it's starting to get better. So are you seeing that, or are you expecting that maybe your service provider telco business could start to positively inflect? And would you think that would be hardware-led as well? That business, I think, tends to be a little bit more hardware-heavy.
You know, I think telcos, as you know, have an aversion to OpEx spending, and so, whether it's hardware or software, a perpetual license is a consumption model that I think our telco customers like. That is one of the reasons why we have kept our investment in hardware and in software on a perpetual license, but also in software on a subscription basis, et cetera. Yes, we are seeing that again. Telcos also have been very constrained in their budgets and very cautious in their spending throughout 2023, and we saw that also through the first half of 2024. We have seen green shoots of more normality in budgets and spending in the back half of 2024.
Our view would be that throughout 2025 we would continue to see easing in telco budget and spending patterns. I wouldn't say that's a hardware or software thing; I think it's more of a perpetual-license, consumption-model thing.
Right. Yeah, it's good that they do seem to be catching up a little bit. So, we're only 30 seconds out, so I think we'll wrap it there. Thank you both so much for spending the time with us.
Thank you, Tim. Thanks for having us.
Thanks, Tim.
Thank you.
Are you guys done with your meetings?
No, we have two more.