If you have any questions, please reach out to your Morgan Stanley sales representative. Hello, I'm Josh Behr, Software Analyst at Morgan Stanley. We are thrilled to have the DigitalOcean leadership team here. We have Paddy Srinivasan, CEO, and Matt Steinfort, CFO. Thank you so much for joining us.
Thank you. Thank you, Josh, for having us here.
Excellent. Paddy, was hoping you could start it off talking about your strategy. You came in a few years ago with a very clear product and go-to-market strategy. Wanna check in on where we are as far as those initiatives, your focus with large customers, AI strategy, and what's evolved.
Yeah. Thank you, Josh. It's been two years since I've been here, and we just crossed a big milestone, which is $1 billion ARR in December. It was a natural time for us to take stock of how far the company has come. It's an incredible story. Two years ago, when I joined, the company already had a phenomenal foundation, having built a very iconic developer cloud. What was missing, as these developers started scaling up their footprint, were some very critical enterprise capabilities, which prevented us from scaling with their needs. Priority number one for me was to plug all those gaps and make the platform enterprise-ready. That was number one.
Number two is, as these companies were scaling, we had to reinvent our go-to-market to be in service to them. The second big pillar was, two years ago, AI was just emerging, and as an infrastructure provider, we needed a strong platform and a strong story in the world of AI. We made a very conscious decision. We could have chased the training world of AI, but that would have meant turning ourselves into a GPU farm and becoming a landlord, a complete reinvention of the company. What we decided to do instead was to lean into our strength, which was that we were really, really good at software.
We are really good at capturing mind share of developers. We said we are going to focus on inferencing. That's the second pillar of our strategy. Two years later, I think, we have done a really nice job of executing on both those things. For me, strategy is all about not just being clear on what we are going to do, but also on what we are not going to do. We have been very, very focused and disciplined on these two pillars.
Great overview. We'll dig into a lot of that. I think it was last week. Time is an interesting concept. You announced Q4 and had an investor update with a path to 30% growth. Could you unpack some of the key takeaways from that combined results and investor update?
Yeah. Q4 was a phenomenal quarter for us, and it was a capstone quarter for what was a very defining turning point for DigitalOcean. One of the things we announced last week was that we brought in $51 million of incremental ARR, the highest organic ARR add in the company's history, right? And we talked about four key takeaways. Number one, the top-customer segment that I was just talking about was once a constraint; now it is our growth engine. We had phenomenal results from our million-dollar customers, 500K customers, and 100K customers.
Our $1 million customers are growing at 123% year-over-year, and we have had zero churn in that cohort for the last four quarters. We took what was once a constraint and really made it into our growth engine and a strength of ours. That's number one. Number two is, you know, we are hearing and seeing a lot about how software is eating software. AI is disrupting all types of software. As an AI infrastructure provider, we are on the right side of this disruption. We are equipping both cloud natives that are defending their territory and AI-native companies that are the insurgents trying to capture markets, whether in horizontal SaaS or vertical SaaS and things like that.
We showcased many customer examples that we have won over the last 90 days. That's number two. Number three is, how are we doing this? We are doing it with a very differentiated software stack. A stack that includes not just core cloud; it has a very robust and diverse lineup of GPUs, but the most important thing is we have built a full stack Inference Cloud capability on top of all of this. This is what is helping us drive AI customer revenue of $120 million, growing at 150% year-over-year consistently for the last several quarters. That's three. We are doing all of this in a very responsible manner, right? You mentioned, Josh, last year we finished Q4 with 18% growth.
We guided for 21% growth this year. We said we will exit at 25% in 2026, and we guided to 30%-plus growth in 2027. We are going to do all of this at a rule of 50 plus. When we said 50 plus, people started crunching numbers, and they took it as exactly 50. We are reiterating it's gonna be 50 plus, and we will do it profitably. That's the balance and the responsible investment that we continue to make.
Paddy, you're talking about line of sight to 25% growth at the end of the year and then even growth into 2027. What gives you the visibility to look into 2027 for the full year? You know, how much revenue is coming from existing bookings, existing contracts? What do you need to secure to get to those numbers?
We are in a very unique position when it comes to inferencing, right? RPO was never a thing for us, but in Q4 we announced very robust RPO, which grew 500% year-over-year and doubled from the previous quarter. What is important to us is the demand that we are seeing from cloud-native companies; some of them are here at this conference. If we were to give all of our capacity to the first customer that asked us, we could take the rest of the year off, but that is not our business model, right?
It'll look great on paper, but what we wanna do is let a few dozen flowers bloom in our Inference Cloud because these customers are really taking market share and they are disrupting the software landscape, and we want to be part of as many of these stories as possible. We are not going to get carried away by what the other training clouds are doing in terms of announcing one customer or two customers, and they're sold out of capacity for four years. Versus we have a very different business model where we want to have our platform be used by several dozen AI native companies that are experiencing hypergrowth.
We work very diligently with our prospects and our customers to make sure that we can provide capacity on demand and help them scale. Do some of them take big chunks of our capacity? Sure. What gives us confidence to guide what we guided to? By the way, with existing committed data center capacity alone, we can grow in excess of 30% next year. It is all stemming from the demand and the pull we are seeing from the market for our Inference Cloud.
Excellent. I thought it was really helpful to isolate: all right, bring on 31 MW, and that gets you to the 2027 revenue. One of the topics of conversation I've been having with investors is, well, it's not like you're gonna just stop adding capacity and add nothing in '27, so what's the impact to financials? Maybe to bring Matt into the conversation: what framework would you provide to investors thinking about the pace of incremental capacity beyond 2026?
Yeah. I would start with a couple of things. One, we very intentionally guided that way, right? The company just a year ago was growing 11%, 12%, 13%, pretty stable, you know, not a lot of growth capital in the company. As we're accelerating, that changes the dynamic quite a bit. Bringing on 31 MW of incremental capacity this year is like a 70% increase in our total capacity. It causes lumpiness, right? It causes lumpiness around margins. It causes lumpiness around the equipment that you need. The thought was, well, shoot, one, we haven't committed to any incremental capacity beyond the 31 MW, or we'd tell you.
Two, we need to give the market a clear view of what it looks like once you reach some level of steady state with that capacity, where that capacity is, I'd say, healthily utilized. That's the way we guided it. We actually posted an investor supplement to our investor website this morning describing some of those dynamics: the impacts on cash flows, the impact on margins, the impact on leverage, and how equipment financing impacts that. I'd recommend everybody take a look at that. I'll come back to the, okay, what should you expect going forward? As Paddy said, demand is already well in excess of supply.
You take that and you couple it with the fact that to get data center capacity, you have to be planning, you know, 12-18 months in advance. We're already actively in conversations, and have been, with potential data center providers about 2027 capacity and 2028 capacity. We'll certainly share more information when we've locked down some plans and we can share what the resulting increase in growth would be. You know, that's the big question: how much are you gonna add, and what are the economics of that once you add it? We tried to give people the blueprint with this 31 MW and been very transparent about how that's gonna impact our financials so that hopefully you can get your models geared up.
When we come back and say, "Okay, well, now we've committed to some incremental capacity," you can just flow that through and see how that works. Very excited about the growth potential. Expect to add incremental capacity, expect to communicate more on that in the coming months.
Makes a lot of sense. How should investors think about financing all the equipment for that incremental capacity? Maybe we could talk about 2026, what is committed, but then on a go-forward basis as well.
No, that's a great question, and we've gotten a lot of questions about the dynamics of equipment financing, and we can talk about that in a second. If you just think about it, the quantum, one, we don't guide to, we've never guided to CapEx or to the amount of equipment that we would need to put in. It's a super lumpy metric. If we bring, you know, CapEx or equipment on in the last week of December, it shows up as the full year number, even though it's clearly for 2027. That's not a spectacular metric that we guide to.
If you just think of an order of magnitude, we spend, call it, and it varies, and I'll explain why in a second, between $20 million and $25 million on equipment per megawatt for a new data center. You're like, "Well, how'd you get to that number?" It's not just GPUs. We don't just put GPUs in our data centers. We're a full stack cloud. We put full core cloud capabilities in there: our general purpose cloud with storage and database. There's networking; there's other gear that you need to put in there. The amount varies depending on what kind of gear you put in.
You know, if you put in some of the NVIDIA latest gear, it's more expensive than some of the latest AMD gear, so it depends on what you put in there. Think order of magnitude, you know, $20 million-$25 million. While we don't guide, you can take the incremental megawatts that we've committed to, and you can do some math, and don't forget to add a little bit of just general purpose cloud kind of growth. The rest of our network is still growing. Our top customers are growing. You can kind of back yourself into, you know, what a gross number would be. To me, the gross number is not as important as, well, what are the long-term margins you're generating, and what should you expect from cash flow generation over time?
That's why we've added the guidance that we have to give that clarity.
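Matt's per-megawatt rule of thumb lends itself to a quick back-of-the-envelope calculation. A minimal sketch, assuming the $20 million-$25 million of equipment per megawatt he cites and the 31 MW of committed 2026 capacity; the function name and the choice to return a range are illustrative, not company disclosures:

```python
def equipment_capex_range(megawatts, low_per_mw=20, high_per_mw=25):
    """Back-of-the-envelope gross equipment spend for new capacity.

    Uses the roughly $20M-$25M per megawatt Matt cites (GPUs plus
    core-cloud compute, storage, database, and networking gear).
    Returns a (low, high) estimate in $ millions.
    """
    return megawatts * low_per_mw, megawatts * high_per_mw

low, high = equipment_capex_range(31)  # the 31 MW committed for 2026
print(f"Implied gross equipment spend: ${low}M-${high}M")
```

On the 31 MW cited, this implies roughly $620 million-$775 million of gross equipment spend, before adding the general-purpose-cloud growth Matt reminds listeners not to forget.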
That's really helpful. Could you talk a little bit about your decision process or your framework for determining when to use cash to buy this equipment, when to issue debt, when to enter into equipment finance leases?
That's a great question, I think people give us and maybe me too much credit. They're like, "Oh, that seems like a really complicated, you know, sophisticated financial structure." I'm like, "It's either paying upfront for equipment or it's paying for it over time." It's that simple. If you think about we have an option of, and I'll just make a number up, if we're gonna put in a bunch of equipment that costs $100 million, we could pay for it right now. You'd say, "Well, how'd you pay for it?" Well, we used cash, but we had that cash because we had borrowed money, and we have a TLA, and it's got a little bit of interest on it.
Arguably, you're paying upfront for that equipment, and you're paying interest on that equipment because you've borrowed money. The alternative is, well, I can pay for that over four or five years, and I'll pay a similar amount of interest 'cause the interest rates aren't that different, and everything else is the same. We still own the equipment at the end. We still operate the equipment; there's nobody else operating it. We're not outsourcing that to anybody. We're running the equipment that's in our facility. It shows up as debt for leverage purposes exactly the same, whether it's the TLA amount or the total obligation you carry as a liability for paying off the principal. It's pretty much the same.
You say, "Well, why would you do that?" Well, it's way better for us as we're scaling to align the investments we make with revenue, right? We'd much rather pay, you know, 20% or 25% of that each year because we get revenue that covers that, and we're generating cash on the back of that from year one. If you take it all upfront, it's a limiter. You can only take so much because you're gonna burn a ton of cash, and then you pay for it. You pay it back over time. Again, it's not sophisticated. It's fully transparent, it's visible, it's on balance sheet. There's no real friction. It's just you're paying over time instead of paying upfront.
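Matt's "pay upfront versus pay over time" point can be made concrete with the standard loan-amortization (annuity) formula. The numbers below ($100 million of equipment, a 7% rate, five years) are invented for illustration, not disclosed terms; the point is only that financing spreads the cash outflow so it can be matched against the revenue the equipment generates:

```python
def annual_lease_payment(principal, rate, years):
    """Level annual payment that amortizes `principal` at interest
    `rate` over `years` -- the standard annuity formula."""
    return principal * rate / (1 - (1 + rate) ** -years)

# Illustrative assumptions only: $100M of gear, 7% rate, 5-year term.
principal, rate, years = 100.0, 0.07, 5
payment = annual_lease_payment(principal, rate, years)
print(f"Annual payment: ${payment:.2f}M")                 # ~$24.39M per year
print(f"Total paid over the term: ${payment * years:.2f}M")
```

Either way the company pays roughly comparable interest and owns and operates the gear; the difference is whether $100 million leaves in year zero or roughly $24 million leaves each year as the revenue ramps.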
We've been fielding a lot of questions around this topic and free cash flow targets. Could you unpack or maybe bridge between the unlevered free cash flow guidance and targets, levered free cash flow, and I think maybe you've covered CapEx.
Yeah.
Anything else with that bridge?
Yeah. When we were, again, growing 11%, 12%, 13%, without a lot of growth capital, and didn't really have any leverage to speak of, with a zero-coupon bond, our reported adjusted free cash flow was a pretty simple metric. It didn't have a lot of CapEx in it, and it didn't have any interest in it. People could use it as a proxy for our unlevered free cash flow. As we grow, though, that becomes a less useful metric for valuation purposes, because you start to put in a bunch of capital.
Clearly, you don't include all growth capital in your multiple of free cash flow, because you'd be dinging us for all of the investment we're making now times the perpetuity value. And you also clearly don't include interest. We said, "Okay, well, we need to start breaking this out and providing more visibility." We introduced unlevered free cash flow. Unlevered free cash flow, you know, captures the normal cash from operations. It captures the CapEx that we do spend if we pay for something upfront. It's not a perfect metric, because it doesn't capture the principal payments associated with leases. That's fine. The unlevered free cash flow guidance for 2026 is 18%-20% growth.
If you take out the lease payments, if you say, "I'm gonna burden you with lease payments," that number drops to about 12%. You say, "Well, is that a good metric to use?" I'd say, eh, you still have valuation challenges if you're using that metric. One is, if you're putting a multiple on unlevered free cash flow minus principal payments, you're double-counting the debt, because we're already showing the debt as the obligation to those future payments. The second thing is, again, you're still putting a multiple on growth capital, which probably deserves to be treated differently.
You could then add leverage on top of that to get to a fully levered, all-cash-payments metric, and we've disclosed that in the materials we posted today. I think the challenge that we have, and that you'll have as investors valuing the company, is you can't just take the old free cash flow method and apply the same kind of multiple to it. You have to tease the pieces apart. We've got growth capital that you have to be able to value for the revenue and profit creation it's gonna drive, and you have to make sure you're taking out the leverage.
We're just trying to show all the different, you know, components so that people understand what's in there, and they can, you know, value us however, you know, the market wants to value us.
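The bridge Matt walks through can be laid out step by step with hypothetical figures. Everything below is an illustrative sketch of the mechanics only; the dollar amounts are invented, not DigitalOcean's actual numbers:

```python
# Hypothetical bridge from unlevered to fully levered free cash flow,
# mirroring the mechanics Matt describes. All figures in $ millions,
# invented for illustration -- not DigitalOcean disclosures.
operating_cash_before_interest = 250.0
upfront_capex                  = 100.0  # equipment paid for at purchase
lease_principal                = 40.0   # principal portion of equipment-financing payments
cash_interest                  = 15.0   # interest on the TLA and equipment financing

unlevered_fcf = operating_cash_before_interest - upfront_capex
fcf_after_lease_principal = unlevered_fcf - lease_principal
levered_fcf = fcf_after_lease_principal - cash_interest

print(f"Unlevered FCF:                 {unlevered_fcf:.0f}")
print(f"Less lease principal:          {fcf_after_lease_principal:.0f}")
print(f"Fully levered (all cash out):  {levered_fcf:.0f}")
```

Matt's caution applies to each line: put a multiple on the middle line and you double-count debt already carried on the balance sheet, and any line that absorbs growth capital penalizes investment whose revenue has not yet arrived.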
Perfect. Let's shift the conversation back to Paddy and talk about the business. You framed sort of your AI strategy and focus in the opening remarks. Can you go a step further and really lay out your competitive differentiation?
Yeah, sure. When I look at our competitive differentiation, first let me start with what we currently have, right? In the earnings deck from last week, there are two slides that I want to refer back to for those online: slide 19, where we show our full stack, and slide 20, which is a Harvey Balls comparison chart of us and other providers. Our stack has three major components. One is the full stack cloud; we have gone through the school of hard knocks over the last 12 years building a full stack cloud and running it in 20+ data centers across the world. You cannot vibe-code a full stack cloud platform, right?
There's a lot of sweat equity that goes into building and operating a full stack cloud. That's number one. A full stack cloud has the obvious stuff: compute, network, storage, Platform as a Service, Database as a Service, orchestration with Kubernetes, and whatnot. It's a full stack cloud comparable to the hyperscalers. The second piece that we have is, of course, the GPU infrastructure. Our GPU infrastructure is slightly different from the ones that you may find from GPU farms or the neo-clouds, in the sense that our GPUs are purpose-built for inferencing.
Even this morning, we announced a very deep technical paper talking about how our inferencing scaled up for a public company called Workato, where they're achieving a more than two-thirds cost reduction and an almost 80% reduction in time to first token, and things like that. The point is, our GPU infrastructure is purpose-built for inferencing, and we do a lot of different things to optimize GPUs for that. The third thing that we have is our inference engine. The inference engine starts with a lineup of all kinds of leading models, open source and closed source. We have kernel optimizations and other optimizations that enable inferencing customers to get the best bang for their buck, and they measure this in four ways.
One is throughput; the others are low latency, high accuracy, and the best TCO. These are the four things that matter when companies go into inferencing mode. We have a bunch of artifacts and modules that deliver these four things, right? Then we allow our customers to come into our platform whichever way they feel comfortable. Some customers say, "Hey, just give me raw GPUs and some orchestration on top." We've got the rest. But most customers are increasingly starting to prefer other ways of entering the platform. For example, serverless inferencing: they just want a bunch of models available to them through API endpoints so that they can focus on their business, not on managing the infrastructure. We have dedicated inferencing clusters.
We have a run-your-Python-code-in-a-container option that consumes inferencing endpoints. We have multiple ways of consuming our inferencing infrastructure. Those are the three big clusters, right? We have the core cloud, we have GPU infrastructure, and then we have the inference engine. When I look at the competitive landscape, the only class of competitors that even has the breadth and the depth of what we offer is the hyperscalers. Of course, we've been competing with the hyperscalers for the last 12+ years, and we win our fair share. We built a $1 billion business by winning a fair share of the customers that prefer simplicity, open standards, and a lack of vendor lock-in.
That is a big angle for us. Number three is predictability and transparency of pricing. That is very unique to DigitalOcean. That's how we win against hyperscalers. When you look at how we differentiate ourselves from the neo-clouds or the inference wrappers, they typically have only one of the three things I talked about, right? Neo-clouds have GPU farms. They don't have a full stack cloud. They don't have an inference engine with software differentiation. The inference wrappers have the inference engine, but they don't have a full stack cloud, and they typically come to people like us for GPUs. They each have one of the three pillars that we have.
I feel what we have built already is very sophisticated, and we have a lead in the market when it comes to inferencing, and that lead is only going to keep expanding. On April 28, we have our Deploy conference here in San Francisco. We are going to lift the covers on a lot of the things that we've been busy building, and it is going to increase the lead we have competitively with all of our competitors. Super excited about what we have already built, where we are, and that lead is only going to keep increasing.
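The serverless-inferencing entry point Paddy describes, models exposed as API endpoints so teams manage no infrastructure, typically follows the OpenAI-compatible chat-completions shape. The sketch below only assembles such a request; the endpoint URL and model name are placeholder assumptions, not DigitalOcean's actual API, and a real integration should follow the provider's documentation:

```python
import json

def build_chat_request(base_url, api_key, model, prompt):
    """Assemble an OpenAI-compatible chat-completions request.

    `base_url` and `model` are placeholders -- consult the provider's
    docs for real endpoints. Returns (url, headers, body) ready to POST.
    """
    url = f"{base_url}/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, headers, json.dumps(body)

url, headers, body = build_chat_request(
    "https://inference.example.com",  # placeholder endpoint, not a real URL
    "YOUR_API_KEY",
    "example-open-source-model",      # placeholder model name
    "Summarize our Q4 support tickets.",
)
# A real call would then be, e.g.: requests.post(url, headers=headers, data=body)
```

The appeal of this entry point is exactly what Paddy notes: the customer writes a few lines like these against a hosted endpoint instead of provisioning and operating GPU infrastructure.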
Excellent. Kind of related to the competition: you provided some transparency into your ARR per megawatt, which is around $22 million currently, and you're expecting it to sustain around $20 million per megawatt even with the incremental 31 MW that comes on board. You can calculate this for some of the public neo-clouds at anywhere from $8 million to $11 million, so that's a big premium for you. Where does that come from? Is it durable? And how does that tie back to your comments on attractiveness from a pricing perspective?
Yeah. I guess the way to think about it is we were at $22 million per megawatt in Q4 of last year. Let's just literally take our Q4 ARR and divide it by the 43-ish MW that were active at the end of the year. You get $22 million, and you're like, "Okay, well, that's great, but that's a lot of core cloud, and you're just building the AI business." You gotta say, "Okay, but what's the incremental?"
For every incremental megawatt that you add, what do you think you'll get, and how does that compare to the competitors? Based on the guidance that we provided, if you fast-forward to the end of 2027 and make some assumptions, you know, we gave you 30% growth for the year, and you back into the AI growth rates and everything, you'll likely conclude the overall answer will be around $20 million. That's what we said. You can back into the fact that we're implying around $13 million per megawatt of incremental capacity that you would consider AI customer revenue. That compares to the $9 million-$12 million that you cited for the neo-clouds. You say, "Well, why are you getting more revenue?"
Are you just charging higher prices for the same thing? It's not that at all. If it's just bare metal, there's a lot of price transparency in the market; people generally know what those margins are, and they're not particularly strong. But layer on inference services, even just layering GPU Droplets on top as a wrapper around bare metal to abstract some of the administrative capabilities, offering serverless inferencing, offering some of the other higher-layer AI services, and then also pulling through core cloud: you're getting database, you're getting storage, you're getting bandwidth, you're getting CPU compute. All of that is much higher margin.
Like, the margin on the core cloud is, you know, think 70%-80%, whereas the margins on bare metal, as everyone in the industry knows, are like 25%, plus or minus. It depends on how you're really thinking about it, but it's not spectacular. We're able to get more of the wallet of the customers using that GPU infrastructure. It's higher layer, which means it's stickier, right? You start getting data and database, and that makes the workload stickier. Someone can move a bare metal training workload; it's harder to move an inference application, and we get higher margin for it. We think that number only goes up over time and is the embodiment of the differentiation that Paddy just articulated.
Matt, just to add to that. We talked about $120 million of AI customer revenue. That $120 million, we had a slide last week, the bare metal part of that is 30% and shrinking, right? 70% of that AI revenue is coming from higher order services, which by definition are much higher margin. That's how we are able to get more, and that's only going to go up from here.
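The revenue-density and margin math in this exchange is simple enough to reproduce. A sketch using figures quoted in the conversation; the blended-margin inputs are midpoints of the ranges Matt gives (70%-80% for core/higher-layer services, ~25% for bare metal) and are illustrative, not disclosed gross margins:

```python
def arr_per_mw(arr_musd, active_mw):
    """Revenue density: ARR in $ millions divided by active megawatts."""
    return arr_musd / active_mw

# Using the ~$1B ARR milestone and the ~43 MW active at year-end that Matt
# cites; the exact ARR figure used explains any gap to his quoted $22M.
print(f"~${arr_per_mw(1000, 43):.0f}M per MW")

def blended_margin(higher_layer_mix, higher_layer_margin=0.75, bare_metal_margin=0.25):
    """Weighted-average margin across the AI revenue mix.

    Margin inputs are midpoints of the quoted ranges -- illustrative only.
    """
    return (higher_layer_mix * higher_layer_margin
            + (1 - higher_layer_mix) * bare_metal_margin)

# Paddy: 70% of the $120M AI customer revenue is higher-order services.
print(f"Blended AI margin: {blended_margin(0.70):.0%}")
```

The second calculation is the mechanical reason the mix shift Paddy describes matters: every point of revenue that moves from bare metal to higher-layer services lifts the blended margin.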
To clarify, shrinking in mix.
In mix.
Growing but at a slower growth rate.
Growing more slowly.
I'd argue I'd love it to shrink.
Yeah.
Flip it. We've already had customers, like Fall, that came to us wanting bare metal initially, because that's what they were getting from everyone else, and then, after experiencing our network and our capabilities, migrated up to higher-layer services. I'd like that bare metal mix to keep shrinking, you know. I'd love for everyone who comes in to want to take advantage of our higher-layer capabilities. Clearly, we'll take more bare metal if we need to win customers initially, but we'll be working really aggressively to migrate them up.
Great. Let's talk about agents and OpenClaw. Basically, how are you positioned? You know, what are you seeing in the market or how are you positioned around that opportunity?
Yeah. We are positioned extraordinarily well. Last week, on the earnings call, I talked about how, in just a handful of days, we had more than 30,000 OpenClaw 1-Click Apps running on our platform. That was with zero marketing dollars spent. Like, we just overnight became a natural destination for deploying OpenClaw agents, for one primary reason, right? The reason why I'm super excited is, everything we've been talking about from an agentic perspective, we were able to see, and then some. What I mean by that is, agents need a lot more than just an AI model to run, right? Of course, we have AI models. You can bring your own Anthropic key, or we have a dozen or so open source models, so that's all fine.
Agents need memory, agents need storage, agents need a way to orchestrate, sophisticated API capabilities, and CPU compute to perform actions and things like that. They essentially need a full stack cloud. They need serverless inferencing. They need everything that we offer, and that's why we saw this massive explosion of OpenClaw agents, and it has really not stopped over the last several days. It's going strong, and for me, more than anything else, it establishes a blueprint for the agentic applications of the future, right? OpenClaw was more of a personal-productivity type of agent, but when you extrapolate that to agents that are going to deliver value in the enterprise, we're just getting started.
The beauty of our platform is it is ready-made for agents to deploy other agents. It is ready-made for agents to consume our APIs. For example, we shipped Remote MCP Server last quarter. With Remote MCP Servers, these agents don't even have to talk to a human and log into the cloud console to create new artifacts and stuff like that. They can just spin up new instances. They can spin up new capabilities on the DigitalOcean platform without talking to a human using the Remote MCP artifact. We are set up beautifully for this, and that's why we have become a natural platform because we have all the underlying pillars that agents love to leverage.
I'm super excited, for a simple reason that this shows everyone what a blueprint for an agentic cloud looks like.
Perfect. Wanna come back to the data center strategy. You're all co-location, and then, as you think about bringing on this capacity throughout this year, how do you think about the risk around delivery...
Yeah.
timing?
We communicated at earnings that the first of the three data centers, which is the smallest at 6 MW, is gonna start ramping revenue in the second quarter. The other two, one at 10 MW and one at 15 MW, come on in the second half. We've been, as you've hopefully come to learn, appropriately conservative in terms of our, you know, planning around when we actually get those turned up, when we start to generate revenue, and the pace at which that revenue ramps. We're very confident in the guidance that we provided: you know, the 21% for this year, exiting the year at 25%+. We feel very good about that.
We're working, again, with existing colo providers. These aren't new entrants that are building from dirt and have never built a building before and are kinda going through it for the first time. In most cases, these are data halls in existing facilities with very experienced operators, and we feel very confident about their ability to execute and our ability to partner with them.
Great. Wanna come back to the go-to-market, and as your focus shifts to AI native enterprises, you know, how do you approach that from sales-led growth, product-led growth? You know, where are you on building out sales team? Where are the investments needed?
Yeah. As you all probably know, at our scale we are probably the most sophisticated product-led growth company in the industry. On top of that, we layered in a little bit of sales-led growth, primarily aimed at our digital-native enterprise customers. You don't get 0% churn in our $1 million cohort by accident, right? It was very deliberate from a go-to-market point of view. We put our arms around our big customers. We are working very hard to expand them: making 100K customers into 500K customers, 500K into $1 million, and $1 million customers into $5 million customers. That farming motion, or account management motion, is working really well.
On top of that, from an AI perspective, our product-led growth continues to be a big top-of-the-funnel machine for us. We are also becoming very active in the venture community and the startup community, ensuring that we have a good, steady top of funnel of well-funded startups that are on the precipice of changing their respective domains. We are a16z alumni. We are Techstars alumni. I'm actually keynoting at the Techstars conference on Monday. We are very active with these communities to ensure that we are able to pick out and participate in the top companies of their respective portfolios and bring them to our ecosystem.
Those are all the things that we are doing. Our AI sales team is still fairly small. It is mostly just inbound. We have no problem generating demand given our product-led growth machine as well as the technical evangelism that we continue to do. It's all hands on deck. We don't need an army of salespeople to bring in this revenue. We have a small but mighty AI sales team, and we will continue to expand that primarily in one direction.
We are doubling and tripling down on forward-deployed engineering, which is gonna be super important for the next two or three years in terms of working hand in hand with customers, because there is a lot of magic that happens when engineering teams get together. FDE teams are a big focus for us, and that's how we are evolving our go-to-market.
All right, great. Wanna round out the discussion by hitting on capital allocation. We talked through CapEx and finance leases, but what about buybacks, and what about M&A?
Buybacks have always been an important part of our long-term capital allocation strategy. I'd say, given the priorities right now are 100% focused on organic growth and maintaining a healthy and flexible balance sheet, we're unlikely to do material buybacks in the near future. We still have an authorization of, I think it's $100 million over two years, but I would expect that we'd be using our cash for growth, not for buybacks in the near future. M&A is always something that we're focused on. I'd say we're probably more focused on product, you know, things that advance the product roadmap or acqui-hires versus any kind of a scaled M&A.
We don't anticipate it being a material use of capital. At this point, we think we'd do little tuck-ins here and there.
Perfect. Paddy and Matt, thank you so much for the conversation. Really appreciate it.
Thank you, Josh.
Thank you.