Good afternoon, everybody. Thank you for joining us. I'm Hamza Fodderwala, cybersecurity analyst at Morgan Stanley. This afternoon, we have the pleasure of having Thomas Seifert, CFO of Cloudflare. Thomas, thank you for joining us.
Thanks for having us. Hello, everybody.
I'll give you a minute to drink your water while I read this, very important disclosure. So for important disclosures, please see the Morgan Stanley research disclosure website at www.morganstanley.com/researchdisclosures. With that, Thomas, thanks again for joining us. I wanted to,
Talk about fatigue.
Talk about fatigue, yeah. You certainly seem fatigued, but I think we can get a second wind here. Well, actually, on that topic: are you seeing any spending fatigue? Clearly, your Q4 results would contradict that, but I'm curious, you know, what you make of those comments.
I came in late last night. We had a late dinner, so there might be some fatigue, but not really. And as you know, we had a really good quarter. And some of our peers that we admire showed really strong results, too. So we cannot really speak of fatigue. The threat landscape paints a completely different picture, too. I mean, you just have to open up the Wall Street Journal from this morning: ransomware in healthcare, attacks against critical infrastructure, nation-sponsored attacks against the defense industry — all of it is driving a threat landscape that is anything but fatigued from a security perspective.
Fair enough. Very clear.
Yeah.
So a couple quarters ago, you know, Matthew talked about Cloudflare as the Connectivity Cloud-
Yeah
... and really encompassing a lot of different types of use cases. Can you elaborate what that means? And yeah, I think we'll start there.
This one could be a long-winded answer, but, you know, we've built a network that today sits in more than 300 cities, with points of presence in more than 100 countries. And there's a lot on there: millions of free customers, 10,000 of what we call paid customers — very small startups, the largest financial institutions, governments. They all connect through us to various private and public clouds, and they want that data to move securely, efficiently, performantly, and cost-effectively, and to observe all of it through one control plane. And that is what we provide.
In the beginning, external folks and analysts talked about us as the fourth hyperscaler, and that's not what we felt we are. We think we are the first networking cloud, connecting all of our customers, everything they do, and all the data they move to wherever it needs to go. And I think that's what coined the idea of us being a connectivity cloud. It defines really, really well where the business model started, what it evolved into, and where we think our future opportunity is.
Got it. Got it. Maybe going back to the most recent quarter, so Q4 was very strong. We were particularly impressed with the acceleration in RPO and CRPO bookings, in particular.
Yeah.
Just remind us, you know, what drove that, and is there anything that might have been one-timey there that, as investors, we should be careful about extrapolating?
Mm-hmm. Well, as you said in the introduction, Q4 was a strong quarter. The good news was that it was not really driven by one thing or a one-off — one or two large deals. It was driven by a broad set of areas and growth vectors. Maybe the most mundane: we saw good progress on our go-to-market transformation. Everything we initiated on the go-to-market side is taking hold. And we are really encouraged by the accelerated momentum of our Zero Trust platform. That is what we call our second wave of products, or product line.
We started with Cloudflare Access in 2020, and we always said that fleshing out the products in that portfolio — Access, Gateway, Browser Isolation, Email, DLP — would take time. But we've reached a point where we have literally reached feature parity with everybody else out there. That is driving momentum. A fully featured Zero Trust product, in combination with the platform we have, is a rather compelling offering, and that drove momentum across a wide set of customers, verticals, and regions. The other really important topic in Q4 was a very strong federal business. We talked about one very large deal with the Department of Commerce — a Zero Trust deal, by the way, too.
But we saw strength in federal not only in our country, but across the oceans in both directions. Very strong federal business in the fourth quarter, and a good pipeline, I would say, for federal business moving forward.
Got it. Got it. I definitely want to dig into that. So you talked about, I think, a $30 million Zero Trust win in the federal vertical. You've had different acts of your business: Act One being the very strong application services business around DDoS protection, CDN, and various other services, and Act Two being Zero Trust, or what's also known as SASE. Just talk to me a little bit about the scale of the Cloudflare network and how you're able to use the install base, and the revenue you're already generating in a very strong app services business, to extend that same competitive advantage into SASE relative to maybe some of your peers.
It's less about the revenue; it's much more about the infrastructure of the network. When somebody starts to do work on us and asks me or our team, "How do I get my arms around the competitive moats of Cloudflare?" I say, "You have to really understand the architecture of the network." As I said, we are today in more than 300 cities — in many cities, like San Francisco, in more than one location — a highly decentralized network. And every product we have and every service we offer runs on every server in every location. That means the complete surface of the network, capacity-wise and infrastructure-wise, becomes our degrees of freedom in how we manage traffic and how we manage cost.
This is the key reason why our margin structure is so superior and why there is such elasticity in our business model. Most of our revenue is subscription revenue, so it's all quasi-fixed; there's very little variable component. For example, during COVID, when we all started to work from home, traffic on our network spiked by 60%, literally within a couple of weeks. Folks expected our margins to tank, and they didn't flinch — they actually improved. That speaks volumes about the efficiency of the architecture, but also about the elasticity we have to absorb gigantic moves in data. Now, this network is built on the traffic we deliver or handle with our first wave of products. So, you know, it's a CDN network, but not a lot of CDN revenue.
It's the firewalls, the DDoS mitigation, the routing, the load balancing that happens. In this business model, we don't pay for the amount of data we move; we literally pay for the size of the pipes we have installed, right? And the first wave of products is literally traffic moving out to the eyeballs. So when we now design a slate and a portfolio of Zero Trust products, they're literally moving traffic in the reverse direction — it's all about moving traffic back. So all that traffic that we collect literally comes for free, and our Zero Trust products are, margin-wise, far north of 90%.
Matthew made a very blunt point on the last earnings call: he said we could consolidate all the Zero Trust providers out there, put all that network traffic on our network, and not need to invest $1 of CapEx. That gives you a really good idea of the capacity of the network. So all these products fill and run on the infrastructure we have built, highly performant at the edge of our network, at a very superior margin structure. And now you run all these Zero Trust products in the same control plane where everything else runs.
This combination — not only being a vendor consolidator, but being able to consolidate it all on one platform — is, I think, what makes us so unique. That was, I think, one of the key reasons why we won the Department of Commerce deal: not only because the product was highly competitive feature-wise, but because it comes with a platform that is so much more than just Zero Trust.
Makes sense. So, zooming in on Act Two, on the Zero Trust side: the performance advantage of the network seems very clear with Cloudflare. One of the things you've also done is make a lot of enhancements to the Zero Trust security portfolio.
Yeah.
You talked a little bit about the DLP, CASB, and all the different features that you have.
Yeah.
One of the other things you've been focusing on as well is upleveling the go-to-market. Cloudflare is a company that has obviously a massive install base, very natural product-market fit.
Yeah.
But as you sell these larger SASE deals, what did you have to do from a go-to-market standpoint to really uplevel that? And talk to us about, you know, the recent appointment of Mark Anderson in relation to that.
So as you move upmarket and start to sell Zero Trust products, the personas change. While discussions before were mainly CIO and CISO discussions, now the deals become really, really big — you're talking about double-digit-million-dollar contracts. So you talk to the C-suite: CEOs, CFOs, general counsels, because a lot of those topics, especially outside of North America, become compliance topics — data sovereignty and data localization. So you have to adjust the messaging, you have to target the right personas, and you need to support more sophisticated account structures. And we've started to enable our channel.
You know, the first wave of products is so fast to install, so highly efficient. If you, as a customer, put your homepage on Cloudflare, it takes you probably five minutes. Onboarding all of Morgan Stanley, even under attack, takes us maybe a couple of hours. So it's highly efficient — but those products didn't leave a lot of room for channel partners to provide value. With the Wave Two products, the SASE or Zero Trust products, that has changed. So enabling the channel has become an important part of our go-to-market strategy. Channel-enabled revenue grew 70% last year, so we're making good progress, but we still have a way to go.
This evolution still gives us a lot of opportunity in terms of customer size and the number of very large customers. As we talked about on the earnings call, we now have our first handful of customers that are far north of $10 million, but there's a lot of room to grow. Mark Anderson coming on board allowed us to accelerate that journey. It's not so much about disruption or changing course, but literally about accelerating the journey. He's done this before. He's been very successful. He's been on our board for four years. He understands us — not only the enterprise side, but where that efficiency comes from. And it allowed us to combine not just sales, but sales and marketing, under his leadership. He's just a great guy, so we're all excited that he's on board now.
Great. Great. Digging in on the channel, particularly on SASE — obviously, the channel is very important to enterprise security sales. How is Cloudflare able to incentivize channel partners to partner with Cloudflare? And are there certain things you can do, given your scale on the network side, to offer better incentives, perhaps, than your peers?
Well — the non-obvious topic first: success breeds success, so there's a flywheel. Once you have your first very large deals — especially the deal with the Department of Commerce that we announced last quarter — it drives interest from other large channel partners to team up with us. All of a sudden, there's a Cloudflare that allows you to sign $30 million, $40 million, $50 million deals. So that drives a significant amount of interest. The products are compelling, the platform is compelling, and then we have a superior margin structure that we can take advantage of to reward successful partners, without endangering the price envelopes that are in the market. All of this combined, I think, is why we've seen quite some success in building out our channel program.
Got it. I want to talk a little bit about how you're packaging the product as well. So, Cloudflare One was a new package that you launched, I believe, sometime last year. Can you walk us through how that's been able to help you land some of these larger customers, some of the pricing changes around that?
You know, when I started at Cloudflare, we had fewer than 10 revenue-generating products. Now we're in the mid-50s. So you have to evolve how you market and how you sell those products. Bundling is an opportunity for us — a journey we've embarked on. We're not finished yet; it's evolving. We are testing, as we speak, new bundle concepts with certain customer verticals in certain test markets. And there are some lofty examples out there of companies who've done a good job bundling their products and taking full advantage of the pricing and expansion opportunities — Salesforce and Microsoft, I think, qualify. So it's a journey.
We have seen really good progress, especially when it comes to consolidating spend and delivering ROI on our platform — a topic that is always important to folks like me, and especially so last year when budgets were tight, as they continue to be. But it's a journey we have not finished yet. We are right in the middle of it, to be honest, and we think there's significant upside still in front of us, both from a go-to-market land perspective, but especially also from an expansion perspective.
I wanted to shift back a little bit, maybe toward a broader security question that encompasses Act One as well. So, we talked a little bit about the rising threats we're seeing recently. I believe over 20% of internet traffic goes through Cloudflare. You've got rising geopolitical tensions. You have half the global population voting in elections this year.
Yeah.
How is that increased threat activity, you know, driving, you know, perhaps more revenue for Cloudflare?
Well, first of all, in situations like this, it's important to think not of revenue first, but about protecting. I mean, yesterday was Super Tuesday, Election Tuesday. Just last night, we protected more than 100 websites of state and federal institutions as part of the process — more than 400 across the country overall. We are in a very unique position. There's a difference if you are a pure enterprise company sitting in front of thousands, maybe even 10,000, enterprise customers: the perspective you have on the threat landscape is very, very narrow. We sit in front of millions of free customers and 10,000 paying customers. We protect election platforms.
We have a program called Galileo, where we protect, for free, voices that need to be heard — critical journalists and organizations, people under heavy nation-sponsored attack. So we have a very unique perspective on attacks. We see attacks as they start. Somebody who wants to attack Morgan Stanley starts years earlier, practicing malware and attack vectors, and we see those attacks being developed. That makes us a very interesting partner for companies that need help defending. It also feeds our products in terms of the security posture we have. We literally defend against 80 billion attacks per day across the network. And it's a data game, right?
What we see, and how early we see it, allows us to defend better, and that hopefully leads to better products that we monetize. But the first step is really getting the defense posture up for all those entities that put their trust in us and sit behind our network.
Got it. Got it.
But it's an interesting time. The threat landscape is at an all-time high. We've seen the largest and most sophisticated DDoS attacks yet. We blogged about a highly sophisticated attack against our own infrastructure that we successfully defended. So there's no fatigue that we see — to come back to how we started.
You know, I'd be remiss if I didn't talk about AI. I know it's super early days, but I think Cloudflare is different: obviously a lot of companies are coming out with AI copilots, but Cloudflare is really an AI enabler, if you will. So can you just, at a high level, explain some of the different vectors for monetization as it relates to AI? Because I know there's a lot that you're offering.
You know, that drove a lot of the discussions we had in the various meetings today already.
Sorry to make you repeat yourself.
No, no, no, no. That-
Yeah.
It's a super fascinating topic, and we would be remiss not talking about AI. For us, from a revenue and monetization perspective, AI plays into various layers and vectors. There are, of course, AI companies signing up with us for their own security posture — I don't think there's an AI company of name, small or big, that is not behind our network at this point in time.
A use case that we did not expect — one that was and still is driven by the GPU capacity shortage — is that LLM companies are putting their data on us, in our R2 product, and using us as a departure point to find available and affordable GPU capacity for training their models. That might not be a business model forever, but it's certainly a good business model for the time being. We just help them find available and affordable GPU capacity without paying huge egress fees to transport the data to where the capacity is.
The really interesting use case for us is using our edge network — our distributed network — and our ability to put GPU and compute resources close to the eyeballs where they connect. We seem to be in this Goldilocks zone where inference tasks can run in a way that is highly efficient, secure, and compliant. A lot of inference in the future will be about where data is and where it can move. We started to deploy GPU capacity at the edge of our network to enable and prepare for that. We originally targeted 100 cities; by the end of last year, we were a little bit ahead of plan — 120, I think, if I remember correctly.
But we will be in literally every location by the end of this year, enabling inference tasks to run on our network — whether for latency reasons, compliance reasons, or cost reasons; whether you're enabling devices with AI capability literally milliseconds away from the device, or offloading expensive hardware infrastructure from a device into our network. That is most promising. We launched a vector database last year that has a high attach rate to everything we sell now. That's a significant opportunity for us. And the inference and vector database business is one where we aim for adoption, not for revenue. That is really important to point out.
We learn a lot about how we have to model and scope GPU capacity — and what kind of GPU capacity our customers need at the edge of our network. The idea is that we abstract the customer's need, and the software that runs on it, from the hardware stack we have. That is something we have done successfully with all the other products we offer, and it's another reason why our margin structure is so superior. And last but not least: we all, as individuals and as companies, want to interact with large language models. How we make sure that happens safely and securely — without data leakage, without opening up new incursion vectors and threat vectors — is a big topic for us.
So we just announced today that we've started developing a Firewall for AI. How you mitigate traffic to LLMs is very different from API traffic, right? With an API, the requests and responses are structured and predictable; with an LLM, we talk in natural language, and hardly any answer is the same. So it's an interesting topic. That will be a third vector for monetization.
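To make the contrast concrete: an API firewall can validate a fixed schema, whereas free-form prompts need content inspection. Below is a minimal toy sketch of that idea — this is not Cloudflare's Firewall for AI (its rules are not public); the patterns and function are purely illustrative:

```python
import re

# Hypothetical patterns a prompt-inspection layer might flag.
# Purely illustrative; a real product would use far richer detection.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal your system prompt",
]
PII_PATTERNS = [
    r"\b\d{3}-\d{2}-\d{4}\b",  # US SSN-like number
    r"\b\d{16}\b",             # bare 16-digit card-like number
]

def inspect_prompt(prompt: str) -> dict:
    """Classify a prompt before it is forwarded to the model."""
    lowered = prompt.lower()
    verdict = {
        "injection": any(re.search(p, lowered) for p in INJECTION_PATTERNS),
        "pii": any(re.search(p, prompt) for p in PII_PATTERNS),
    }
    # Allow the request only if no rule fired.
    verdict["allow"] = not (verdict["injection"] or verdict["pii"])
    return verdict
```

The point of the sketch is that, unlike schema validation, every check here is probabilistic pattern-matching on unstructured text — which is why filtering LLM traffic is a different problem from filtering API traffic.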
A lot of stuff there. A couple of follow-ups, and then I want to open it up to the audience for questions as well. One of the questions I get is on the AI inference opportunity. What does that GPU capacity look like, and how is Cloudflare able to make these investments? I believe last quarter, or last year, your CapEx was down. So how are you able to do it in an efficient way, given the GPU shortage out there?
Well, the CapEx ratio is down. Dollar-wise, we're still growing.
Right. Excuse me. Yeah.
You know-
Yeah
... For a lot of good reasons — the hardware vendors are doing a good job. But you have to come back to what we said earlier: our Zero Trust momentum is accelerating, so that is revenue coming in that literally needs zero CapEx, because it lives off the infrastructure we already have. That allows us to deploy, ratio-wise at least, CapEx dollars toward GPUs. And one of the really unique things about us is that when you move so much data through your network, you learn a lot from it, and you're never in a position where you invest ahead of the demand curve.
So in the beginning, we used our CPU capacity to learn from the inference tasks, and then we bought our first 500 cards, and you learn from that. It turns out that for inference tasks — if you just go to Hugging Face and say, "This is the inference task I would like to run; what GPU hardware do you recommend?" — it's hardly ever the bleeding-edge hardware that you need to train models. It's, whatever, the L40S. So we are deploying a very large mix of GPU cards now, from all the suppliers.
NVIDIA, of course, Intel, AMD, and the first specialized ASICs — really trying to find a hardware mix that is optimized for the inference tasks that we see. And the idea is, as I said earlier, to abstract the software stack from the hardware stack. Our customers shouldn't worry about what GPUs they need to provision or how much capacity they should reserve. They literally buy what they use, and we decide — or the algorithms decide — where we run it, on what hardware, where it's most efficiently computed. That's a journey we've been on. The teams working on this are really good at it, and it's one of the reasons why our CapEx numbers are so efficient.
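The placement idea described here — the customer buys usage, and an algorithm picks the cheapest hardware that fits the task — can be sketched in a few lines. Everything below is a hypothetical illustration (the pool names are real card families, but the prices and the fit-then-cheapest rule are made up for the example, not Cloudflare's actual scheduler):

```python
from dataclasses import dataclass

@dataclass
class GpuPool:
    name: str
    mem_gb: int           # memory per card
    cost_per_hour: float  # hypothetical price, for illustration only

# Illustrative fleet mix, roughly "bleeding edge" down to commodity.
FLEET = [
    GpuPool("H100", 80, 4.00),
    GpuPool("L40S", 48, 1.20),
    GpuPool("L4",   24, 0.60),
]

def place(task_mem_gb: int) -> str:
    """Route an inference task to the cheapest pool that fits it,
    so the customer never has to pick hardware themselves."""
    candidates = [p for p in FLEET if p.mem_gb >= task_mem_gb]
    if not candidates:
        raise ValueError("no pool large enough for this task")
    return min(candidates, key=lambda p: p.cost_per_hour).name
```

Under this toy policy, a small model lands on the cheapest card and only the largest tasks ever touch the most expensive hardware — which is the abstraction argument in miniature.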
Just one more follow-up, and then I'll open it up to the audience. One of the key selling points in the past on Cloudflare Workers and some of the developer services has been the fact that you're not charging egress fees. Recently, GCP, and then AWS, talked about dropping egress fees. I don't know if it's been made official yet by AWS, but I think there's more nuance to that argument. So maybe just explain: what does that mean for Cloudflare? How does that impact you?
I remember when we launched our two storage products, people asked Matthew, "What happens if there are no egress fees?" Et cetera. It's a win-win either way. Either there are high egress fees, and then our R2 model and revenue benefit, or there are zero egress fees, and then it's all about being a connectivity cloud — we can move data freely for our customers and make revenue somewhere else. So first of all, they're not waiving all egress fees. They're just reacting to some legislation that is coming in Europe: only if you leave a provider for good and move your data out is the egress fee waived.
But if you continue to move data in and out, you'll still run up fees. As it happens, we just launched a multi-cloud product today: Magic Cloud Networking — I always want to call it Magic Multi-Cloud. What it does is act like an interpreter that understands all the different public clouds, and with that interpreter comes our big backbone, which allows you to move data between private and public clouds. So: maybe less R2 revenue, but more multi-cloud revenue. Either way, I think it's going to be a win for us.
We would probably prefer a world without egress fees, where data moves freely and customers decide the best, most cost-efficient location to compute, store, and do whatever they want to do with data. And we just help them move it securely, performantly, and cost-effectively.
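The economics behind this answer are simple arithmetic. Using a hypothetical $0.09/GB egress fee (an illustrative ballpark, not any provider's actual price) on 50 TB moved between clouds per month, versus a zero-egress store:

```python
def monthly_transfer_cost(gb_moved: float, egress_fee_per_gb: float) -> float:
    """Monthly cost of moving data out of a cloud at a flat per-GB fee."""
    return gb_moved * egress_fee_per_gb

# Hypothetical workload: 50 TB (50,000 GB) moved per month.
with_fees = monthly_transfer_cost(50_000, 0.09)     # $4,500/month
without_fees = monthly_transfer_cost(50_000, 0.0)   # $0/month
```

At those assumed numbers the fee alone is $4,500 a month, which is why "where can my data move cheaply?" drives the multi-cloud decisions he describes.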
Any questions from the audience? We have one over here.
Thank you so much. I have two questions. One: can you elaborate on the CapEx on GPUs, and also the nature of the business Cloudflare is doing? You're deploying GPUs — is this like renting out GPU capacity, like Oracle Cloud, or is it a different nature of business?
So we are not renting out capacity. If you come to us, you don't have to think about how much capacity you need, how much capacity to reserve, or whether you'll be under-provisioned or over-provisioned. You just pay for what you use, and we worry about the abstraction. That is why it's so important for us to model it correctly and get to high utilization of the GPU capacity we have. But we are not renting capacity — you come to us, you pay for what you use. On the first part of your question: we have servers in every location, and we have significant CPU capacity out there.
Sometimes people confuse us with a CDN. We are not a CDN; we happen to have a really fast CDN network because we deliver security and performance products at the edge of our network. So we've always done significant high-volume compute tasks at the edge: encryption, decryption, packet inspection. And years ago, Matthew had the foresight to leave PCI slots open in our servers, so we can now just slot in GPU cards. That was flexibility he, luckily for us, built in a while back. And we buy a mix of capacity that is hardly ever at the bleeding edge — that's why it's affordable and has high availability.
We don't need H100 cards at this point. And we procure them — today still mostly from NVIDIA, but more and more from a broader set of suppliers.
Got it. My second question: you also elaborated on the abundant opportunity ahead through bundling and through the usage of AI. So, thinking a little bit longer term—
Yes
… over the next two, three, five years — in this environment where AI deployment, both training and inference usage, has been accelerating, and with your business initiatives like bundling and all that, and also the spending optimization — we know the fatigue, or optimization, will come to an end, right? So would you see growth accelerate in this macro environment, or what signals would you look for to see that acceleration happen? Do you have to see a lot of inference happen to stimulate the growth of Workers and edge products?
Yeah. So the first signal is how adoption of Workers and Workers AI is accelerating, and we see really encouraging trends. If you look at our download data, it's a steep curve up. And a third of the developers who sign up for Workers AI are net new, so there's significant interest. The second big indicator is the variety of use cases we see coming onto our network, which is huge. At our Investor Day in May, we'll try to give some insight into how much interest and how much variety we see in the use cases.
So those are the two best indicators we have today that we seem to be in a real Goldilocks zone for inference tasks. As in so many cases when we push new technology, we push for adoption, not revenue. One of the core principles of Cloudflare is never to discourage a byte of data from moving through our network, even if it moves for free. What we learn from that byte of data — where it comes from, where it goes, good or bad, threat vector or not — is where we will derive value moving forward.
On the inference side, it's about how we model capacity, how we abstract the software layer from the hardware stack, how we can get to 6... If you look at one of the problems today in training land, GPUs are very spiky in their utilization, and you have long periods of significant underutilization, which makes it very expensive. We want to get to the same utilization rates we see on the CPU side, and we think we're on a good path there. So we learn from the diversity of data on our network, and that is why usage is, at this point, more important than revenue.
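The utilization point is worth making concrete: the effective price of a useful GPU-hour scales inversely with utilization. A toy calculation with a hypothetical $2.00/hour card (illustrative numbers only):

```python
def cost_per_useful_hour(hourly_rate: float, utilization: float) -> float:
    """Effective cost of one fully utilized GPU-hour when the card
    sits idle part of the time. Utilization must be in (0, 1]."""
    if not 0 < utilization <= 1:
        raise ValueError("utilization must be in (0, 1]")
    return hourly_rate / utilization

# Spiky training-style load at 25% utilization vs a steadier 75%.
spiky = cost_per_useful_hour(2.00, 0.25)   # $8.00 per useful hour
steady = cost_per_useful_hour(2.00, 0.75)  # ~$2.67 per useful hour
```

At the assumed rates, tripling utilization cuts the effective cost of useful compute by a factor of three — which is the argument for chasing CPU-like utilization on the GPU side.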
If I could maybe squeeze in just one last big-picture, sort of macro question. You have a large install base — large enterprise customers and SMB — and you also have relatively short sales cycles, so you're able to see changes in demand relatively quickly versus other vendors. So just curious: what are you seeing from a macro standpoint across those two segments, and what have you baked into the 2024 outlook?
It hasn't really changed much from what we said on our earnings call. There's still a lot of noise in the data. I would say, for sure, stabilization — not deterioration — but we think we are still in a grind. And what you see in Europe versus specific countries in Europe, and what you see in Asia versus specific countries in Asia, is contradictory. So we think we are on somewhat stable ground, but we'll continue to grind for a while.
All right. Well, Thomas, thank you so much for your time. Thank you very much.
Always a pleasure. Thank you.