All right, in the interest of time, we will go ahead and kick off the Cloudflare session at the GS Communacopia and Technology Conference. Thanks, everyone, for joining us. I'm delighted to have on stage with me Thomas Seifert, CFO of Cloudflare. Thank you for your time this afternoon.
Well, pleasure being here.
Thomas, one of our favorite slides from the investor presentation earlier this year, you have a pie chart with revenue by Cloudflare SKU.
Yep.
I think there is north of 50 products
Yep
On the pie chart. You made a very interesting point, which is that a lot of the products Cloudflare sells build upon each other and the central architecture that you have. So talk about that central architecture a little bit. How do you think about allocating R&D to be able to offer the breadth of product that you have?
Yeah. Good question, and timely, because next week we start our planning session for next year's budget, so capital allocation will be a big topic for it. We have the fifth anniversary of our IPO on Friday, I think, and when I got started, we had about eight revenue-driving products. At the time of the IPO, I think we were around $200 million of revenue. So 50 products today is quite an achievement. From a CFO perspective, driving this platform strategy is really key, and this is where a big part of our future is, from a consolidation perspective and from delivering a total cost of ownership improvement for our customers.
Consolidating spend is moving in that direction. And as with so many things at Cloudflare, capital allocation is a very data-driven process. One of our big credos is that we always invest behind the demand, not ahead of the demand. So currently, the team is doing a lot of modeling of where money is currently spent and how it's going to shift. From a product perspective, we are at a point now where what we call our second wave of products, Zero Trust and SASE, and I'm sure we're gonna talk about this, are reaching real enterprise-grade readiness, feature completion. So budget from an R&D perspective is moving towards the next generation of products. There's a lot of discussion around Workers AI, inference products, anything in that space.
On the go-to-market side, it's very different. A lot of money goes into go-to-market spend to support SASE. We've been on this path of stacking S-curves on top of each other, almost like what we do on the product side. That is how Matthew likes to talk about it. The go-to-market side is not so different. You know, we developed from a freemium, digital, search-engine-driven business, to credit cards, inside sales, field sales, and now strategic accounts, and especially in this transition on the SASE side, a lot of dollars go into adjusting go-to-market. We've seen quite some success, and we will build on this going into the next year. So very different dynamics from a capital allocation perspective, R&D versus go-to-market spend.
One of the interesting comments you made there is on investment following demand.
Yeah.
Particularly investment moving up to Act Three products.
Yeah.
So tell us a little bit about the demand curve that you're seeing for Act Three and some of the ways in which Cloudflare has really established itself as the provider of choice for AI-first companies?
There are a lot of KPIs that we currently monitor. There is how we look at the developer space, which is really important, and we have seen significant momentum on our Workers platform in the past that has now accelerated with everything that happens around AI. You know, we were at two million developers in Q1. We were at 2.4 million developers in Q2. So even in the short term, significant momentum from a developer growth perspective. If you look at developers' preference for platforms to work with, we were literally nowhere two years ago, and we came in second, ahead of all the big hyperscalers, in the last survey. So that is something we measure.
At our Investor Day, we said that close to 80% of the leading 50 generative AI companies are now on our network, either as customers or to run training or inference work. That's a really important KPI. And then, what we really try to do at this point is less about driving monetization, a question that I get a lot, and more about driving adoption and diversity in the use cases we see. One of the bigger challenges is how you get GPU compute utilization to acceptable economic levels, and in order to do that, you need insight into many, many different use cases. So that's why driving adoption and driving diversity in the use cases is so important for us.
I'd love to understand holistically as well, when you think through some of the conversations you're having with your larger customers.
Yeah.
How do their plans for adopting AI and their roadmaps intersect with how they're thinking about the strategic relationship with Cloudflare?
So one of the reasons why this is such an interesting field for us is that it drives multiple, almost independent growth drivers. In the first instance, an AI company is like every other customer that we have. They look for a secure, performant environment. They need to protect their infrastructure, they need to protect their IP. So they are a customer for almost all of our products. In addition to that, most of them are startups, so they need to look for economical use cases at a time when GPU capacity is still tight and expensive. Finding a place for how and where you store your training data, and how you find access to available and affordable GPU capacity, is key. That's one of the big roles we currently play. That is a business that will disappear over time, you know, as GPU capacity becomes more affordable and more available, but it is a really important use case. And then you have all the worries about who in your organization interacts with what AI model, what data moves, what data needs to be protected, and we have a whole slate of products built around that, around our AI Gateway.
And then, more and more, the really big advantage of the network is being able to control the flow of data at the most regional level, allowing companies, either as AI providers or as customers of AI products, to stay within their compliance requirements under data sovereignty and data privacy laws. That is really, really important. And, you know, we built products on this very early on, around GDPR requirements in Europe especially. The product we call the Data Localization Suite is now pretty much attached to every contract we sell in Europe. So there's a whole slate of opportunities for us in that space.
So you mentioned having 50% or close to 50% of the top AI app developers on the platform. Is that the right-
I said close to 78%, I think, of the top 50.
70% of the top 50.
Yeah.
Thank you. When you look at what those companies are doing with Cloudflare, you've mentioned in the past five minutes, both training and inferencing
Yeah.
Use cases. Maybe let's separate those. How do customers think about Cloudflare when they're in the training phase of building their company? And then how does that shift, and where does your
Yeah.
Differentiation come out on the inference side?
So we see most of our opportunity in the inference space moving forward. Training today is, I think, a window of opportunity as long as GPU capacity is rare and large language models need a lot of concentrated, leading-edge GPU capacity in order to get trained. We'll see how that changes when we talk about smaller language models and so on. But if you operate a network like ours, where you are literally 50 milliseconds away from whatever wants to connect to the internet, you're in this Goldilocks zone of where to run inference tasks: close to the interface, but not on the device, helping to save on, I don't know, bill of materials, extending battery life, but delivering performance at a rate as if you were in a data center.
So we're in this Goldilocks zone, and we see a lot of inference use cases that are optimized for that, especially when it comes to human-computer interaction, where latency plays a big part, you know. Chatbots, chat rooms in gaming devices, how you run drones and how you connect them. The variety of use cases is huge, and as I said, it's important for us to learn how to optimize our hardware stack towards those inference tasks. Not every inference job needs an H100 or H200. Yeah.
I'll stay on the idea of optimizing the hardware across the network, because you all have been able to address a number of these inference use cases while also maintaining a low CapEx intensity ratio.
Yeah.
So talk to us a little bit about your learnings from the activity on the network between the different flavors of compute, and how you think about getting that optimization algorithm right?
So if you look at it, one of the biggest competitive moats Cloudflare has is the architecture of our network. If we leave inference and GPUs and AI aside for a while, it's the ability that every product, every one of the 50 products we have, can run on every server in every location, and the software stack is completely agnostic to the hardware it runs on. It doesn't care if it's an Intel CPU or an Arm CPU or an AMD CPU. We've been able to abstract that, and the network finds for itself the best distribution of workloads, taking advantage of underutilization in some parts of the network because that part of the planet is asleep at a certain time. That allows us to run the network at a very high utilization, and very CapEx efficiently.
We try to move our inference workloads in the same direction, so most of the work we currently do is software work: how you schedule, how you break workloads apart, how you serialize and parallelize workloads so we can get GPU utilization to that level. And again, that's why we need a lot of highly diverse inference workloads, so we can optimize for that. So far, so good. You know, network CapEx was at 6% in Q2, which is very low. We said we will catch up towards 10%-12% by the end of the year. Most of the catch-up is going into GPU compute capacity, because we think we learned a lot and we can rebalance that. Today, we already have GPU capacity in close to 170 cities in the world, so it's quite distributed already.
There's a really interesting dynamic with the Cloudflare network that you've talked about in the past, which is when you built the network to solve for your Act One use cases.
Yes.
The Act Two use cases come at incredibly high gross margin.
Yes
Because you could take advantage of the same capacity, the same pipes, but in the opposite direction.
Yes.
Where do the Act Three products fit relative to that? Is there an intensity or is there a way to think about, from a utilization standpoint, where they fit within the existing products in the Cloudflare network?
That's a really good question. So here, I think, the advantage comes in two directions. Most of the bandwidth work will be similar to what we've seen in Act Two, so from a bandwidth and size-of-the-pipes perspective, it can, to a certain extent, live off the architecture of the first wave of products. And there's a certain fungibility between low-end inference tasks and CPU capacity, so we don't need a GPU for everything we run. You know, it's a network that was not built to deliver content; it just happens to be the fastest content delivery network, so we can deliver security and performance products with low latency to our customers. We always had a significant amount of CPU compute capacity in the network, so there will be some fungibility now in both directions. I know you can solve some of the encryption, decryption, and packet inspection tasks more easily with GPU capacity, so there will be a benefit. One of the unique things about Cloudflare is that you have so many flywheels, and this is another flywheel that will spin faster the more diverse the workloads we put on it, even from an AI and inference perspective.
I want to come back to SASE.
Yeah.
And the momentum that you've had, particularly as I think you've commented that your product functionality and your product-
Yeah.
Roadmap is now at parity versus perhaps some of the vendors that have been in this space for longer. When you think about the limiting factor to adoption for SASE today, what are you hearing from your enterprise customers on willingness to engage, and how do we think about greenfield versus brownfield for the incremental growth in that business?
Yeah. Despite all the momentum, and, let's face it, the significant opportunity I think is still in front of us, the products are now featured out. As part of our larger platform, they are now an opportunity for customers to protect not only their front door, but also all the doors they have inside their enterprise and their network. So our ability to use SASE in combination with the other products we have, especially the Magic WAN products, the corporate networking products, is significant. Magic WAN was the fastest-growing product year to date, from a momentum perspective.
And it plays into what you probably have heard a lot over the last two days: in a time when you have macroeconomic uncertainty, when IT dollars need to go further and last longer, you have to make more out of them. Our ability to consolidate spend and, at the same time, deliver a significant total cost of ownership reduction for our customers is significant. So having a fully featured, enterprise-grade-ready Zero Trust product is helping that, and we see anywhere between 30%-50% total cost of ownership reductions when we bundle these products. So this is sustaining the momentum, and you heard us talk about significant wins in the last two, three, four quarters. The seat sizes on SASE are becoming really, really significant.
You talk about tens of thousands of seats in really interesting verticals. Yeah.
Is it fair to say the 30%-50% total cost of ownership reduction, that's a function of the same dynamic we were talking about, where you can leverage the existing architecture.
Yes
And thus generate high gross margin while also delivering total cost
Yeah.
Ownership savings to the customer?
Yeah, I think Matthew mentioned it on one of the calls, either last quarter or the quarter before. We probably could consolidate all the SASE vendors that are out there today onto our network without spending one more dollar of CapEx on network capacity, and that gives you an idea of how much room there is for efficiency gains. Yeah.
Do you see customers in SASE... is there a motion towards dual sourcing at all, where they'll bring in Cloudflare for a piece of their network and then scale it up over time?
Yes. There are many situations where we see that. We have seen this earliest in very critical infrastructure customers, where SASE products are getting layered, so to speak, in order to have better performance. We have seen some de-risking or rethinking of resilience after the CrowdStrike incident, where customers split their deployment: one SASE vendor for employees, another for contractors. We have seen geographic splits of the offering for a variety of reasons, from a more remote workforce and things like that. So we have seen this, yeah.
Just to clarify, you're talking about customers being more willing to dual source post July nineteenth. I imagine in some cases that works to your advantage, because you come in as the second vendor.
If you're the second entry, the second generation, as we say, it's a tailwind, of course, yes. Yeah.
And maybe I'll stay on this topic of resilience, because over the past decade or so, every now and again, you'll see a Cloudflare outage, and suddenly you are a mission-critical part of most-
Yeah.
Most companies' infrastructure. How do you, as a company, think about minimizing the downside risk from those types of events?
The first thing, I think, is that we are rather transparent about when this happens, unfortunately, and how we mitigate and what we have done. We try to learn from these incidents. I don't want to minimize it: when we had this big outage, where a WAF rule exhausted CPU capacity, we were out for, I think, 24 minutes, which had a huge impact. It was 24 minutes because we run our own network. We are not running on top of a hyperscaler, we are not running on-premises, so fixing it is just us. You know, that is a risk, but it's also an opportunity, and at least the latency to get back to normal was manageable. I think the first thing that we do is complete transparency, inside and outside, as to what has happened. I think this makes us different from a lot of other companies, and a ruthless postmortem on how we need to improve. I mean, we understand the responsibility we have if you're such a critical part of many companies' and, to be honest, countries' infrastructure. You know, we try to learn. Yeah.
The one other product question that I wanted to get your thoughts on is the Cloudflare strategy of introducing products early and then iterating on them frequently.
Yeah.
So I think you've talked about... Well, Matthew's also said, if we're in the top right of the Gartner quadrant, we waited too long to release it.
Yeah.
So give us a sense, out of the 50-plus products you have today, are there a couple that you would say are most accelerating up that path from a 2 out of 10 to a 10 out of 10, that can move the needle for you as you look out over the next few years?
I hope we don't have a two out of ten, not even when we file. That would be a stretch. But there is, you know, the first wave of products that we talk about, where you protect infrastructure: the load balancing, the firewalls, the DDoS mitigation, where we are best of breed, I would say, across the board. We've come to a point where the SASE products are now being featured out. That's why we see so much momentum. Now it moves on to the next generation of products. We just acquired a company last year with a firewall product, Magic Firewall, a networking product that allows you to manage all your other public hyperscaler cloud exposures through our dashboard. There's a product for local traffic control that just launched on our network. There's so much, I think. And then everything that moves into the AI field now, our AI Gateway, for example, and the products we have there. So the third wave of products is where the R&D dollars now move and, as I said before, the go-to-market dollars, now exploiting the good product performance we have on Zero Trust and what we call the second wave of products. Yeah.
Maybe I'll ask next about the go-to-market comments that you've made.
Yeah.
Pool of Funds deals.
Yeah.
As the company's portfolio has broadened, you've moved towards being able to give customers more flexibility
Yeah.
With Pool of Funds. So talk to us about what those conversations look like on the ground, to move customers from classic licenses, classic subscriptions, to something more like Pool of Funds. And how does the opportunity change with customers when you introduce Pool of Funds to the conversation?
You know, I think I said it before: on the go-to-market side, it's almost the same principle we have on the product side, where you stack S-curves on top of each other. There's this rough rule that about every eighteen months or so, the size of our largest customers takes a step function: how we moved from $100,000 for our largest customers, to $500,000, to $1 million, to $5 million, to $10 million. And we seem to be at the threshold where $10 million ACV deals and more are becoming more prominent in the pipeline. Not all of them will be Pool of Funds, but Pool of Funds is one instrument to get to larger deals with customers.
Pool of Funds deals generated quite some excitement on the last call. This is literally a customer commitment to buy product over a certain period of time. They have a rate card across all our products, and they just consume products from that rate card. So you don't really know in the beginning: will it be more SASE, or will it be a different deployment? It allows us to drive larger deals with more total cost of ownership reduction for our customers. There is less friction, especially with large, critical infrastructure customers, where it takes friction out of their processes: instead of getting, I don't know, approval 10 times in two years on a $1 million contract, they do this once and can then consume and implement significantly faster. It's a stickier engagement.
We showed the data at our Investor Day that once we are north of nine products per customer, retention is really, really high. We hardly see any churn once you are beyond nine. So now you have a customer that can potentially consume 50 products; that makes it quite attractive. I think this is just the next step in our evolution, and in the step function to get to a significantly higher count of customers north of $10 million. It will not happen for every customer, and it's not a good instrument for every competitive situation, but it works really well in certain situations. It makes our world on the finance side more complex in terms of forecasting revenue in the near term, so we need to get used to that. But I think there's significant upside and lift coming from that momentum.
And the complexity on the forecasting side comes from not having...
Well, you know, it's like what you probably hear from other large enterprise software companies. Primarily, in the beginning it's a promise to buy, so you have to encourage consumption. And how you manage that, how you make sure that your team is on top of that, what KPIs you use, how you operationalize a contract like this, is key. As with so many things, we are also on a learning curve when it comes to that, but we're making good progress.
What types of customers does it not make sense for?
Well, you know, we have large customers that just consume one kind of product, so you get to a significant ACV without the complexity of many use cases. You have digital-native customers that are very lean in their processes and in how they can approve purchase orders, and it's probably less likely for a customer like that to be on a Pool of Funds deal. Yep.
You have been very transparent in some of the metrics you've given around sales, hiring, and sales productivity.
Yeah.
In both of the last two Analyst Days. I'm thinking of, for example, sales manager and AE hiring in 2Q being up 150% quarter over quarter, 163% year over year.
Yeah.
What we've seen with software companies when they go through a period of outsized hiring is, you typically see a productivity ramp
Yeah.
In the following months.
Yeah.
So talk to us about what that could look like for Cloudflare, and maybe any comments you're willing to make on the pace of hiring continuing and what the productivity ramp could look like over the next twelve months?
Yeah. So, you know, we've been on this journey of what we call sales transformation, or evolution, right? It's just another S-curve from a go-to-market perspective that we stack on top of what we have done. And, knock on wood, sales transformation can be rather disruptive, and we've been executing rather smoothly so far, so hopefully it will stay like this. The big part of the transformation, at least when it comes to people and leadership, is behind us already. Mark and his direct reports are complete. There's some work on the second level that is still happening. We are on pace from an AE sales capacity hiring perspective. It normally takes nine to ten months for an AE to get to full capacity, so that additional lift is still in front of us. That's what we plan to see next year. But so far, so good. Hiring is on pace. We were light in Q1, we caught up, and now we need to see the productivity get to full potential.
So there's one piece that is outside of your control, which is the macro part of the equation.
Yeah.
I think it was about a quarter and a half ago where you commented on the signals that you look at being more dispersed than usual.
Yeah.
There being more inconsistency in the signals.
Yeah.
Talk to us about what you see today. You've provided some great data on the geopolitical environment, for example, at the Analyst Day.
Yeah.
Where are we in terms of demand signals and the metrics that you look at to inform the demand profile of your business?
So sometimes, you know, it's a blessing in disguise, or a curse in disguise. Sometimes when we talk too early about the signals we see, we get negative reactions. We see a lot; as you said, 20% of all websites are behind us, and we move a large amount of traffic across freemium and paying customers. I think the demand environment, unfortunately, is unchanged. Every deal is a fight, and it's grinding through processes to get deals across the finish line. Deals are back-end loaded, quarters are back-end loaded. We've been executing well in spite of that, in light of all these uncertainties, but it continues to be a fight. There's no improvement in sight, to be very honest. The additional uncertainties are piling up, including the elections in this country. There are very mixed signals, as you know, politically in Europe that are not making it easier. And no improvement. It didn't deteriorate either, but it's a grind, I would say. It continues to be a grind. Yeah.
The uncertainty that we see in the newspapers and around dynamics
Yeah.
Like the election, how does that manifest itself at the customer level? Do you have customers that are also less certain about their own commitments?
Well, you try to adapt. The budgets are tight, and getting tighter, so it's more important that we talk about the overall value proposition. Delivering a total cost of ownership reduction is getting more and more important. If you can give back 30% or 40% of a budget for other purposes, or a reduction of the budget at the same performance, that is key. Some of the macro headwinds, especially when it comes to a heightened threat environment from a cyber perspective, are tailwinds to our business. So we see quite some momentum in the federal business. We've been talking about some significant large deals in the last two quarters. The pipeline is still encouraging. So, you know, for us the unfortunate thing is that this is a tailwind to our business, so it's mixed.
I want to end here with a question on some of your engagements with your largest customers.
Yes.
I'm thinking, for example, of Apple, where you have the OHTTP service that's very differentiated. I'm thinking of Stephanie Cohen and Mark Anderson spending time with some of the largest Fortune 500 companies. How do you think about the revenue upside potential tied to some of your most strategic customers?
So you mentioned one customer that we never talk about. But we showed one, I think, really impressive slide at the Investor Day that asked, "How many $1 million customers do we have, and how does that compare to our peer group?" And it showed the significant white space we have in getting many more customers north of $1 million. So that is one of the significant white spaces we have. This is where tools like Pool of Funds play a role. But these deals are very much driven by the personas you talk to, and we need to learn to uplevel who we engage with in the Fortune 100 and the Global 1000 companies. And this is where Stephanie and Mark come into play. And it's not only them, it's the people they hire and the network they bring on board. So this is a significant white space for us in this journey up the enterprise stack, and it's important. It will be one of the important growth vectors for the next two years for us, for sure.
Absolutely. Fantastic. Please join me in thanking Thomas for his time. Thomas, thank you very much.
Thank you.