Awesome. Thank you, everyone, for being here at the UBS Global Technology and AI Conference. My name is Radi Sultan. I cover the mid-cap infrastructure software stocks here at UBS. Next up, we have DigitalOcean: Paddy, the CEO; Matt, the CFO; and Melanie, who runs IR. First of all, thank you very much for being here today.
Great. Thank you so much for having us here. Living in Boston, it's one of those conferences I really look forward to: a couple of days to get out and enjoy some warmth in December.
Awesome. Awesome. Yeah, maybe just to get started, you recently put out an impressive 18%-20% growth outlook for next year, a full year ahead of schedule from the guidance you gave at the April Analyst Day. Maybe just to level set, I'd love to hear from both of you on what's driving the recent strength, what's changed since April that you felt comfortable pulling that forward, and maybe we'll kick it off there.
Yeah. Thank you for that great lead-off question. April just feels like an era away. It's only been seven months, but a lot has happened in the market since then. When we put out the 18%-20%, we were coming off of a year where we had a couple of different priorities. One is on the core cloud. As we discussed in our Investor Day in April, our biggest priority was to take care of our customers with the largest workloads. That was priority number one for us. For that, we had to address some of the core product gaps that we had, as well as build a go-to-market that was complementary to what we believe is the industry's best product-led growth machine.
Those were the top priorities to fix or address the needs of our customers with the largest and the most sophisticated workloads on cloud. The second one was to incubate the cloud business. If you fast forward seven months, not only did we address the issues that we had with some of our largest customers or the biggest workloads defecting from our platform and going into hyperscaler clouds, we took that weakness and turned it into one of our biggest strengths. That is why we have started reporting our $100K-plus customers, which grew at 41% last quarter. On top of that, we also talked about our million-dollar-plus customers growing at 72% and becoming a big part of our customer base. We took what was once a weakness of our portfolio and now have started turning that into a strength of ours. That is number one.
The cloud business at that time was fairly small, but when you string together five quarters of 100%+ growth every quarter, it becomes a fairly sizable part of our business. The combination of these two things is what is giving us the confidence to say, hey, we'll take the outlook that we provided for 2027, and we are very confident that we will get there a full year ahead of schedule. The additional thing that gives us confidence is the fact that now we have also announced that we are going to take 30 MW of extra capacity from a data center point of view to accelerate our AI deployments. It is a combination of these strong business fundamentals that is giving us the confidence.
Awesome. Paddy, you called out multiple eight-figure deals just in the month of October following the most recent earnings call. Very strong relative to the size of the deals you've signed in the past. Maybe you could just speak to what's sort of been the biggest driver of that traction? What's the mix between core cloud and AI and maybe any trend in sort of the nature of those end customers and how you landed those deals?
Yeah. This is a relatively new muscle for the company. As those of you who have followed the company for a long time will appreciate, we have barely had any RPO to report, primarily because we grew to just shy of a billion dollars of run rate on the shoulders of 640,000 paying customers. You can do the math. There are a lot of small customers that have really built up the business to date. Over the last three or four quarters, that has started changing quite appreciably. We have these large six-figure, seven-figure, and now eight-figure deals, both in AI and core cloud. Let me circle back to last month. In the earnings call, we talked about the fact that Q3 was the highest organic net new ARR add in the company's history at $44 million.
We also mentioned that less than half of that came from AI. More than half of that came from core cloud customers. When you think about the seven-figure, eight-figure commitments we have, it is really a blend of all of these things. We have some AI-native customers that are willing to commit themselves to multi-year seven-figure, eight-figure deals with us. The vast majority of that comes from AI infrastructure, AI platform, but also some core cloud consumption as part of their commitment, but also from customers that are driving our core cloud consumption.
In the last earnings call, I talked about a longtime customer of ours called Bright Data, a leading web data provider, which has significantly increased its footprint on us because its business is thriving as one of the leading suppliers of web data to the LLMs, the leading frontier models of the world. We are getting a lot of good tailwinds from two directions: from AI-native companies that we are building relationships with and nurturing, especially in the inferencing world, and from the product gaps that we have closed in the core cloud over the last 12 to 18 months, which are helping some of our cloud customers increase their footprint on us, especially by repatriating cloud workloads from hyperscalers to the DO platform.
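(Editor's aside: the "you can do the math" point above can be sketched quickly. The run-rate and customer-count figures below are the approximate ones quoted in this conversation, not exact reported metrics.)

```python
# Back-of-the-envelope math on average customer size. The ~$1B run rate and
# 640,000 paying customers are approximate figures quoted in the conversation.
run_rate = 1_000_000_000      # annual run rate, dollars (approximate)
customers = 640_000           # paying customers (approximate)

avg_arr_per_customer = run_rate / customers
print(f"Average ARR per customer: ~${avg_arr_per_customer:,.0f}")
# Roughly $1,500-$1,600 per customer per year, i.e. a long tail of small
# customers, which is why six-, seven-, and eight-figure deals are a new muscle.
```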
Got it. Got it. Maybe just drilling down into the AI side, you talked about AI revenue reaching mid to high teens by the end of next year. Can you just talk through where you're seeing the biggest uptick in AI demand and maybe how broad-based is that?
Yeah, it is quite broad-based. If you think about our footprint now, it is just shy of 10% of total run rate. If we keep stringing together these 100%+ year-over-year quarters, it is not inconceivable that we will get to that number that you just mentioned by next year. We feel really good about the kind of traction we are getting. As I just mentioned, the vast majority of our AI revenue comes from infrastructure, and most of that, if not all of it, comes from inferencing workloads. The reason why that is important to us is twofold. One, it enables us to build direct customer relationships with these AI-native companies. Number two is, by the nature of inferencing, these companies are in post-product market fit.
They are no longer burning venture capital money trying to find a good niche that they can fit into. By definition of an inferencing workload, it means that an end customer, whether in a B2C consumer space or a B2B enterprise, an end customer is paying for these workloads. That gives us the confidence that the investment we are making in time and money in these advanced AI-native companies is going to help us build a very durable business for the long haul because, unlike some of the other NeoClouds, we are not just taking excess spillover Bare Metal-as-a-Service capacity from a hyperscaler. We think that is not the most durable way to build our business.
Given how rich of a software stack we have, we feel like building that durable relationship with end customers will give us the most runway in building a very, very strong business for the long haul.
Got it. Is there any type of sort of end customer use case driving an outsized portion of that AI demand? Maybe how much is AI-native versus sort of more traditional?
Most of it is AI-native. When I say AI-native, these are a new generation of mostly Silicon Valley companies that have emerged in the last 12 to 24 months, disrupting a B2C or B2B space. One example that I gave in the last earnings call is a company called Fal.ai, which is emerging as the Hugging Face of generative media models. They have some of the world's most bleeding-edge generative media models used by companies like Shopify and others to improve e-commerce shopping cart conversions. The reason why that is really important is it is a real use case that is being consumed by large digital native and other e-commerce companies. That is one example of a B2B use case.
There are also other B2C examples where companies are building digital characters or AI characters that consumers want to interact with, either standalone or in the context of a gaming system and things like that. These are all real use cases that customers are spending money on. For us, it is a good combination of some B2C use cases, but also some B2B real enterprise traction.
Got it. I know you mentioned this, but most of the AI revenue is coming from the infrastructure layer today. You have a full-stack AI offering. Maybe you could just talk through how you expect the mix of that AI revenue to sort of change over time.
Yeah. Right now, even in the infrastructure space, we have multiple points of monetization. Six months ago, I would have said most of our revenue comes from Bare Metal-as-a-Service, just like how most of the NeoClouds today only have a Bare Metal-as-a-Service offering. For us, a lot of the infrastructure consumption has moved from bare metal to what we call GPU Droplets, which is a layer of abstraction that we have built on bare metal. We charge a premium for it. Most of our customers prefer to use GPU Droplets because of some of the performance enhancements we provide. In fact, the performance is on par, if not better, than bare metal access.
It also takes away a tremendous amount of headache associated with building the image, managing the image, and managing the lifecycle of the infrastructure. The observability, resiliency, and availability of these Droplets are significantly above what customers could achieve managing bare metal themselves. They're willing to pay a premium for that. That's another point of monetization for us. The next one is we have a class of AI-native companies that say, yeah, I'm tired of just using GPUs directly. Can you give me serverless endpoints for these types of models, whether it is an open-source model like DeepSeek or Qwen or Llama, or a closed-source model like Anthropic or OpenAI? We have serverless endpoints as a point of monetization.
On top of that, we also have a variety of building block services in our platform layer, like guardrails, knowledge bases, observability, agent evaluation, and agent templating. We have a bunch of middleware modules to help companies build and run agentic software. In the last earnings call, we announced that we have 19,000 agents after just six or seven months in production, which companies are using to deploy agents in their enterprises. We have different levels of monetization. Most of this also pulls through, or is starting to consume, our core cloud services, all the way from Kubernetes to object storage to database storage and everything in between. We have different ways of monetizing our AI stack.
Got it. Got it. Maybe just turning to Matt. AI financing has obviously been very, very topical lately. You guys introduced equipment leases for the first time this past quarter. A big investor question I get is around the role of those leases going forward when you think about the financing side of things. Maybe you could just talk through how you think about financing that AI build-out, the role of those leases going forward, and how you see that.
That's great. Yeah, we've been talking about this for multiple quarters now. We've got, like Paddy said, almost a billion dollars of ARR. We've got really good margins: roughly 60% gross margin, low-40%s EBITDA margin, and high-teens free cash flow margin. One of the questions that we would get from investors is, OK, but what happens when you need to accelerate? If you're going to lean into AI, is that going to reduce your free cash flow margins? What we've always said is, no, there are a lot of different ways you can finance gear and get access to capital without compressing your free cash flow margins. We've been able to tap the equipment financing market, working with both some of our relationship banks and some of the OEMs. We're getting really good terms. We're very, very comfortable with that.
We also have access to additional capital sources that are interested in putting capital to work in this space around equipment financing. For us, it is straight equipment financing. Literally, we just pay over time, and we own the gear at the end. It is a dollar buyout, and we are able to better match our outflows with our revenue. It is a great way of accelerating the growth of the business. The evidence of that is when we pulled in the revenue guide by a full year to 18%-20% next year, we were able to also guide that, hey, we will still be in mid to high teens free cash flow while we are accelerating our GPU investment. We are bringing on 30 MW of new capacity and still generating very, very good margins across the board.
Got it. You do not think any of the recent sort of AI financing concerns have sort of bled through to your ability to access capital and sort of those range of options?
No, we've demonstrated that we have a lot of the market available to us. We just did a $625 million convert earlier this year. We raised a very, very attractive $800 million bank facility earlier this year. We've tapped the equipment leasing market. Yet, we still have really good leverage and are generating very healthy free cash flow margins. I think, unlike some of the other players involved in this space, we have an existing business. We have an almost billion-dollar business generating free cash flow. We're a slightly different credit profile than maybe some of the other folks in the space who are leasing just GPUs to rent them to hyperscalers or someone else. It's a different credit profile.
Got it. Got it. Maybe just on the capacity addition side, you've talked about adding 30 MW of capacity next year, a big step up from the 40 MW-45 MW that you guys have today, which supports just under a billion of ARR. Maybe just how should we think about your ability to monetize that incremental 30 MW and how that compares to sort of how you're monetizing the existing footprint today?
Yeah. It'll be a healthy mix of GPU in that investment in the 30 MW. We tend to think about it as a portfolio. We've got 43 MW of existing facilities. The nine most recent megawatts we added in Atlanta had both GPU and Core Cloud. We'll have GPU and Core Cloud in each of the three facilities that make up the 30 MW that we'll bring online in the first half. If you again look at us and you compare us to, say, a NeoCloud, and I know we've had a lot of conversations today about, OK, how do I do the math? You're adding 30 MW. How many dollars of revenue are you going to get from that? I'd say look at our dollars per megawatt across the portfolio and just based on the 2025 consensus.
What you'll see is that clearly, because we have Core Cloud and AI, and AI is still a small portion of our revenue, we have dollars per megawatt in revenue that's materially higher than what you're seeing from some of the public numbers that you'll see around the NeoClouds. When we expand that, clearly, we'll mix in a bigger blend of AI than we have historically. The number won't be the same as it is today. It'll be lower. If you look at the yield that we'll get relative to what some of the other folks are getting on a dollar per megawatt, because we've got higher layer AI services and higher margin, we've got the pull-through of the Core Cloud, we expect to get a healthier dollar per megawatt than perhaps what you're seeing elsewhere in the market.
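(Editor's aside: Matt's dollars-per-megawatt framing can be sketched as follows. The current revenue and capacity figures are the approximate numbers from this conversation (a roughly $1B run rate over 43 MW); the yield assumed for the new AI-heavy megawatts is purely a hypothetical placeholder, not company guidance.)

```python
# Illustrative dollars-per-megawatt math. Approximate figures from the
# conversation; the per-MW yield on new AI capacity is a made-up assumption.
current_revenue = 1_000_000_000   # ~ annualized revenue run rate, dollars
current_capacity_mw = 43          # existing data center capacity, MW

dollars_per_mw = current_revenue / current_capacity_mw
print(f"Revenue per MW today: ~${dollars_per_mw:,.0f}")  # ~ $23M per MW

# Adding 30 MW with a heavier AI mix lowers the blended yield per MW,
# since raw GPU capacity monetizes below higher-layer cloud services.
new_mw = 30
assumed_new_mw_yield = 12_000_000  # $/MW on the new capacity -- hypothetical
blended = (current_revenue + new_mw * assumed_new_mw_yield) / (current_capacity_mw + new_mw)
print(f"Blended revenue per MW after expansion: ~${blended:,.0f}")
```

The point of the sketch is directional: the blended number falls as AI mix rises, but it can still sit well above a pure GPU-leasing NeoCloud's revenue per megawatt because of the Core Cloud and higher-layer services in the mix.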
Got it. Got it. Maybe just, are there any other big puts and takes when you think about maybe CapEx per megawatt and then maybe anything else on the revenue side that would longer term prevent you from trending more towards where you're monetizing existing capacity today?
If you look over a very long time, the reason that we can get such a high dollar per megawatt on the Core Cloud is that it's a very established market. There's a lot of density you can pack into that. Clearly, there are some characteristics of AI that are going to make that lighter in the near term. GPUs take up more space, and they take more power. You're not getting the same density. Again, the goal for us is we're not in this just to lease GPUs to people. If that's all it was, that's not our game. That's a scale game that we're not participating in. It's mostly training-oriented. For inferencing, to be an effective inferencing provider, you need to provide the full suite of capabilities, because all the inferencing applications need storage and bandwidth and database.
They need higher layer services, as Paddy talked about. They do not all want raw access to GPU. We believe that our investment in GPU infrastructure is going to bring a much bigger kind of revenue pie than, again, what you may be seeing elsewhere in the market.
Got it. Got it. Maybe just when we think about supply constraints have been a big issue in the broader industry around bringing capacity online, can you just talk through how you were able to secure capacity, maybe any big constraints you see to bringing that capacity online, and maybe how you're thinking about that?
Yeah. The capacity that we've taken under contract, like we said, it's 30 MW. It's across three different facilities. This is a very similar dynamic to what we saw with broad GPUs. What, 6, 12 months ago, people were talking about GPU shortages and how long it was taking people to get them. We didn't really have a problem. It's because we were buying at a smaller quantum than what maybe the hyperscalers or some of the larger projects were undertaking. Our needs were also a lot more flexible with respect to space. For inferencing, you don't need giant clusters. You don't need to have all of your GPUs in one place. We are able to take 30 MW in dispersed form across three different data centers, all in the U.S. That gave us a lot of flexibility.
When you get above 50 MW, it starts to get a bit more competitive because the hyperscalers are taking down capacity in those kinds of chunks. They're typically not taking down 6, 10, or 15 MW facilities. There's enough colo activity in the space right now, with third-party colo providers still putting money to work and with capacity available, that we've been able to get the supply. On the back end, GPU capacity has not been an issue for us either. We've got a variety of global OEMs that we leverage. We buy both NVIDIA gear and AMD gear. You certainly have to order in advance, something like four or five months out.
There are no restrictions in terms of the amount of capacity that we could get at this point.
Got it. I mean, we talked about 30 MW of capacity for 2026. Maybe how do you think about that longer term? What do you look for when you're adding capacity, how much do you look to have a secured commitment before you make that investment? Maybe you could just talk through sort of the longer-term capacity planning algorithm.
Yeah. As Paddy said, the reason that we were able to make the 30 MW commitment now and to accelerate revenue growth is we have better visibility into our demand than we've ever had. We've got a number of large committed contracts. We've got really strong pipelines for both AI and some of our large Core Cloud customers. That gave us the confidence to secure the incremental 30. Given the pace at which data center capacity is being taken down, clearly, we had to order enough to not only serve the capacity demand we have today, but give us room to grow into 2026 and early 2027. I can tell you we're already out in the market looking for, OK, what are we going to do in 2027? What are we thinking about in 2028?
We're certainly paying a lot of attention to the projected capacity requirements and to what's available in the marketplace. It is certainly causing, I think, the whole industry to think years in advance rather than 6 and 12 months.
Got it. Maybe just on the margin side of the equation, Matt, at the Analyst Day, you talked about a few levers you had to maintain margins, even at the gross margin layer. Maybe just how are you managing margins going forward? And in the event that you do get more AI revenue coming online sooner, how do you manage that margin offset?
Yeah, that's a great question. We still have margin capability inside gross margin. For gross margin, think of the two biggest cost structure elements as depreciation and space and power. Depreciation is just the cost of the equipment. As those prices become more competitive, you'll get better margins. That's not something that we control in the near term. The one thing that we do control, though, is we can optimize our data center footprint while we're expanding our capacity. We're going to take down an incremental 30 MW of capacity in bigger chunks than we ever have, in roughly 9, 10, and 15 MW sizes. If you looked at our existing data center footprint before we implemented our first AI-focused data center, it was a bunch of 2.5 MW facilities. They're in really expensive markets.
They're in New York. They're in San Francisco. They're in Toronto. And then globally, they're in really pricey locations. We can optimize out of some of those smaller, high-priced facilities and consolidate into some of the bigger facilities in the Midwest or in second-tier cost markets. That's on the gross margin side. On the operating expense side, I think we've demonstrated a really good ability to control costs. We have been investing in R&D and in sales and marketing. We expect to get operating leverage over the coming years. We're certainly driving a lot of operating leverage on the G&A side of SG&A. We certainly see operating expense improvements that we can drive. We've hired, I think we're up to about 150 engineering people in Hyderabad, India.
We have been leveraging our global cost structure for that. As always, we are very protective of our free cash flow margins. I think the evidence there is the mid to high teens guide for next year while we are accelerating our revenue growth.
Awesome. Paddy, maybe just on the hyperscaler migration opportunity, it seems like that's really picked up recently. Can you talk about why that's seeing an uptick and specifically sort of what's changed around the product suite that's driving those customers to migrate from the hyperscalers over to your platform?
Yeah. There are two primary reasons why we are seeing an uptick in this. One is all the product features that we have shipped over the last four or five quarters. Broadly, you can think of network enhancements, security enhancements, observability and manageability enhancements, and things like that. The second one is we've completely overhauled our go-to-market machine, all the way from having technical account managers (emphasis on the word technical; they're really technology folks) who are managing the relationships with our large customers, giving them the assistance they require to migrate workloads. We're also expanding our systems integrator partnerships, where some of them are building practices around helping migrate workloads to DigitalOcean and investing in the appropriate tooling required to help accelerate some of these movements.
If you take a step back and think about this, multi-cloud is a thing that is here to stay. Most companies, whether they're digital natives or brick-and-mortar enterprises, everyone has a multi-cloud posture at this point. Until about a year and a half ago, we were not even in the conversation to be an active participant in this multi-cloud world. Now we are. There are not too many public clouds. There may be exactly five public clouds with our posture and footprint. There is Microsoft, Amazon, Google, OCI, and DigitalOcean. That is pretty much it. If you want to have a reasonable footprint on the public cloud, it is only five clouds that you can deploy to. Most companies will pick one of the big three. Then we become a very natural second or third cloud for these large deployments.
One specific feature that has been a really big enabler of this is direct virtual private cloud connectivity, which we now offer between our data centers and the Google and Amazon clouds. It enables our customers to run very sophisticated workloads split across clouds. That is a big unlock. I can keep going on these things. It is that one-two punch of fixing some of the product gaps that we used to have and investing in the appropriate type of go-to-market function to give our customers a good, smooth on-ramp to increasingly adopt DigitalOcean as a proper multi-cloud option.
Is there any particular workload or maybe customer type where you're seeing the most success today? Or is it sort of you think broad-based?
I think it is very broad-based. We are still staying true to our focus on digital native enterprises. We still do not go after on-premise deployments and try to move them to the cloud, because there are just so many other things that come with that: legacy workloads have a lot of center of gravity that at this point we are not focusing on, and they also have other compliance and privacy issues and things like that. We are focused on digital native companies. There are enough of them, as I explained at our investor day in April. There are enough of them in the world that are big and thriving. These are also companies that are on the bleeding edge of AI adoption.
If we are able to focus on this over the next couple of years and really nail this, we'll have plenty of market share to take.
Got it. You said something earlier in an earlier meeting around the pitch for DigitalOcean has really changed since you joined. I was wondering if you could elaborate on that and maybe just talk us through that journey.
Yeah. The pitch for DigitalOcean, especially when we are talking about attracting world-class caliber leaders to come and join us in our journey, it has completely transformed in the last 20 months. Twenty months ago, my pitch was around, hey, this is broken. You have to fix that. We have a deficit in the leadership bench. You have to come and recruit, blah, blah, blah. Now it is a pitch that is built on a very, very strong foundation. It is a very positive pitch to say, hey, look at all the things we have accomplished over the last 18, 20 months. Now we can build on top of this foundation and take advantage of this generational opportunity we have ahead of us with AI. Even in AI, we have some of the largest AI native companies running direct live traffic on the DigitalOcean platform.
That is a huge plus in terms of our ability to be attractive for world-class talent at all levels of the organization. It is a really positive shift in making ourselves attractive both to prospective employees but also to other customers. Nothing is as attractive as showing real-world traction. That has also really helped us with the pre-sales qualification and helping move pipeline through the different stages.
Awesome. Maybe just to wrap it up, Paddy, when we first met a little while back, you mentioned that product is really where it all starts for you. We'd love to get a sense of the next 12-month product roadmap, what you're most excited about on the product side, and then how you're planning on pulling that through on the go-to-market side of things.
Yeah. On the product side in cloud, for example, we have accomplished pretty much most of the things that we set out to do for making our platform complete for the type of digital native enterprises that I talked about at the investor day. Now we have raised the bar again. We are now going after more sophisticated workloads that are in the seven-figure, eight-figure range. The bar shifts in terms of what we need to build out in database as a service, storage, performance, and things like that. That is a very, very attractive challenge for our principal engineers and the architects that are working on our platform. On the AI side, we are very proud to say that we have one of the best-architected inference infrastructures in the world.
Whether Bare Metal-as-a-Service or our GPU Droplets, it is second to none in terms of its resiliency, scalability, and performance throughput. We are winning AI-native workloads purely on the back of our performance, FLOPS throughput, and things like that. On the Gradient AI agentic layer, we have the most comprehensive agent development lifecycle. We are just getting started. There are a lot of other product roadmap items we are co-inventing with our AI-native customers. That is, in a nutshell, what the roadmap looks like over the next 12 to 18 months.
Awesome. Look forward to checking back in this time next year. Thank you so much, guys.
Thank you, Radi. Appreciate it.