Just delighted to have DigitalOcean joining us for our last session of the conference.
Saving the best for last.
100%. This is the Citizens Technology Conference in San Francisco at the Ritz-Carlton. Has it been a year?
Yes.
It's been a year, so good timing. DigitalOcean is a $4 billion market cap, roughly. We'll do almost $900 million in revenue this year. Profitable. When I wrote this note, it was trading at a really pretty reasonable valuation. Five times revenue, high 20s times free cash flow. There's been a lot of change in the last year, and things seem to be working. When did you guys report this quarter?
Just last week.
Last week and it was good. It was good. I think I actually have a chart in here somewhere where I show, yeah, here it is, so there's the performance of all the companies after they reported. You see the one that's up and to the left?
Nice.
Yeah, yeah. So you guys had the best performance after reporting your quarter of anyone in our universe. So let's start at the top, Paddy, which is how's business? What would you say?
Business is good, Patrick. Thank you for first of all, thank you so much for having us, and thank you for hosting us today at the Fireside Chat.
Thanks so much.
Yes, it has been a year. And as we've been discussing, I had three priorities coming in. One was to really recharge the company's management team, rebuild the leadership group. Number two was to accelerate the product velocity, the product roadmap. And number three was to build a go-to-market motion to complement our self-service, product-led growth motion. So on all three fronts, I feel really good about where we are.
All right. I'm sorry. I was totally distracted by our microphones. OK. What are the three things?
So, the three things, one is leadership.
Yep.
Number two is product direction, product velocity. Number three is go-to-market, and as you just mentioned, we reported last week Q4 earnings, and very pleased that on both number two and number three, we are now demonstrating proof points that things are starting to click. One, top line reaccelerated to 13.2% growth.
Number two, as validation that our products are now starting to resonate with our customers: our NDR, which was stuck in neutral at 97% for multiple quarters, jumped to 99% in Q4, with our core DigitalOcean NDR even touching 100% last quarter. We're very pleased with that. It's a lagging indicator, but it validates that our customers are starting to adopt our products.
And in fact, we reported last week that of our top 100 customers, at least 50% of them adopted one of the new features that we shipped in Q3. So those are all good, positive indicators that what we're doing from a product perspective is starting to work. We, of course, have a lot more work to do. And number three was go-to-market. A leading indicator there was the NDR bouncing back, and the fact that, for the first time in the company's history, we have over 500 customers, which we call Scalers Plus, at $100,000 or more in ARR.
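For readers less familiar with the NDR metric being discussed here, net dollar retention compares revenue from an existing customer cohort against that same cohort's revenue a year earlier, so new customers don't count but churn and expansion do. A minimal sketch with illustrative numbers (not DigitalOcean's actual cohort data):

```python
def net_dollar_retention(prior_period_revenue, current_period_revenue):
    """NDR = revenue today from last year's customers / their revenue last year.

    Both dicts map customer_id -> revenue. Customers missing from the
    current period count as churned (zero revenue); customers new in the
    current period are excluded from the cohort entirely.
    """
    cohort = prior_period_revenue.keys()
    prior = sum(prior_period_revenue.values())
    current = sum(current_period_revenue.get(c, 0.0) for c in cohort)
    return current / prior

# Illustrative cohort: "a" expands, "b" is flat, "c" churns, "d" is new.
last_year = {"a": 100.0, "b": 50.0, "c": 30.0}
this_year = {"a": 130.0, "b": 50.0, "d": 500.0}
print(net_dollar_retention(last_year, this_year))  # 1.0 -> 100% NDR
```

The point of the quarter's 97% to 99% move is visible in this arithmetic: below 100%, expansion from growing customers is not quite covering churn and contraction; at or above 100%, the existing base grows on its own before any new logos land.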
So when you landed here and you looked around and figured out what you had to work with, and again, this is a year ago. But what were the top things you identified that you're like, OK, well, this is what we got to fix?
Yeah. The clear answer is we had a leaky bucket at the very top. So some of our larger customers, the good news was, when I interviewed a lot of customers that had exited recently, they all said, hey, we loved you guys. We loved the platform, but unfortunately, we had to move or look for other options because you didn't have these features. So I said, OK, first order of business is give ourselves six months. We have to fix this problem.
And we really shot out of the cannon in Q3 with increased product velocity, continued that momentum into Q4. Q1 is looking as robust from a product roadmap perspective. So we are not only shipping fast, we are shipping really meaningful enterprise-grade functionality that our big customers need for running mission-critical workloads on our platform.
What's an example of something that was causing the leaky bucket that you plugged?
One concrete example is advanced networking. When you are running mission-critical workloads, you need the ability to do advanced networking, like distribute your load across different data centers in different parts of the globe.
You need the ability to network securely between not only DigitalOcean data centers, but also establish virtual private cloud peering between our data center and AWS, for example, because most customers are multi-cloud by definition. Things like that you normally don't think about when you're running a very small workload on the cloud. When you're running mission-critical workloads, these are must-haves.
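To make the "distribute your load across different data centers" point concrete, here is a toy sketch of the routing decision such a setup implies. The region slugs are illustrative, and a real deployment would use managed load balancers and health checks rather than application code like this:

```python
import itertools

# Hypothetical set of regions hosting replicas of the same workload.
REGIONS = ["nyc3", "ams3", "sgp1"]  # illustrative data center slugs

class RoundRobinRouter:
    """Toy cross-region router: cycles requests across healthy regions."""

    def __init__(self, regions):
        self.healthy = list(regions)
        self._cycle = itertools.cycle(self.healthy)

    def route(self):
        # Pick the next healthy region for this request.
        return next(self._cycle)

    def mark_down(self, region):
        # On failure, rebuild the cycle without the dead region so
        # traffic fails over to the remaining data centers.
        self.healthy.remove(region)
        self._cycle = itertools.cycle(self.healthy)

router = RoundRobinRouter(REGIONS)
print([router.route() for _ in range(4)])  # cycles nyc3, ams3, sgp1, nyc3
router.mark_down("ams3")
print([router.route() for _ in range(2)])  # only surviving regions
```

The "must-have" nature of this for mission-critical workloads is exactly the failover branch: losing one data center should degrade capacity, not availability.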
All right. Then so it's interesting because the CEO at Snowflake, similar tenure, right? It's been about a year. Yeah. Similar problem, which was their product release velocity was just way too slow. He's also sort of doubled the amount of features that came out. So when the CFO of Snowflake was sitting here earlier today, I go, how did he do it?
He listed a bunch of changes he made. One of the changes was whoever was in charge of products before liked to save up the new features and release them all at the same time. Sridhar lets them go as they're ready. I mean, he wants to make sure the quality is there, but he doesn't save up all the new features and release them. He just lets them go. What are some of the things that you did to speed up the velocity of product innovation in DigitalOcean?
We did, I don't know, maybe a dozen things to speed up the velocity. One is, of course, bringing in new talent from a technology perspective: very strong top leadership with very deep experience operating large public clouds. For example, our Chief Product and Technology Officer built AWS SageMaker and Bedrock.
Yeah, that's amazing.
We were fortunate to get him. And then the leader that runs our Infrastructure as a Service product line ran the largest AWS fleet in the world, for Alexa. The gentleman running our AI/ML infrastructure was an instrumental part of Google's Gemini products. And that's just the top layer of management; you can imagine the kinds of people they have brought in to fortify their respective leadership teams.
I think it all starts from having the right skills. The second thing, very importantly: we brought in the mindset that, hey, we are running a public utility, which means it's 24/7. We really have to get ahead of our customers in detecting problems before customers report them. These are things where you would say, wait a second, did you not do that before?
But just bringing in that mindset of running a public utility was another important part of it. Number three is making everything customer-centric. Obviously, we have a lot of things that we would like to do, but what are the things our customers want us to do? So really drawing the lines: declaring code red on the top customer priorities and setting everything else aside. Drawing the line between what is important and what is urgent, and maniacally focusing on that.
And yes, we also have a continuous-deployment philosophy where we don't bunch things up to create marketable moments and things like that; we ship a very continuous stream. Then streamlining, with just-in-time planning. Rather than planning for weeks and then getting to work, we plan just enough so that we can start shipping; planning is an integral part of shipping. And really democratizing decision-making, pushing it down to the engineers versus people in ivory towers making all the decisions.
Awesome. All right. And so Matt, when you look at sort of the financials in the NDR, what are the things that you think that investors should be most focused on at the moment? What are sort of the signs that the financial prospects are improving?
Yeah. I think the fundamental question that investors ask us is, OK, we buy the story. We buy your right to exist. And all that's great. But if that's true, why aren't you growing as fast as the larger providers, the hyperscalers?
Yep.
And so to me, the proof points there are what Paddy just talked about: NDR. It's very difficult to grow in the high teens or twenties percent if you're losing the top of your customer base and you've got an NDR that's below 100%. So getting from 97% to 99%, and hitting 100% in the core DO business, is a very, very positive sign. And as we said, for next year we're not assuming that the macro improves. We're assuming that we continue to innovate, drive product velocity, and show growth with our top customers.
So the second thing that I would point to is, and this is the reason we disclosed it, is to kind of provide an alternative narrative for the folks that are worried about the graduation of our largest customers. We've got over 500 customers that are spending over $100,000 a year with us. And that's growing 37%. And that's tremendous. It's growing faster than the market. It's 22% of our revenue. We've been steadily adding those.
500 customers spending over how much a year? And it's growing at 37%?
100,000 a year, so 100,000 ARR.
Yeah.
That's a huge thing.
Which is interesting, right? Because it's kind of the opposite of the way people historically have thought of DigitalOcean.
That's precisely why we disclosed that.
And even earlier, when we were talking about it, people were saying, customers leave us. We don't have a customers-leaving-us problem. Churn has been very low and consistent over the last several years. We have an expansion problem with our larger customers, where they would move workloads to someone else rather than to us.
We wouldn't get the growth. Pre-2022, say, the customers that were growing would grow 35% a year on us. That had dropped to the low 20s. So our challenge has been, how do you get them to grow with you? And the blockers were very apparent, as Paddy indicated, when he came in.
Like the advanced networking.
Exactly. It's not 30 things. It's like five or six things that we've knocked out. There's probably another dozen or so that are collectively needle movers that will enable us to grow. And part of it is, and Paddy was talking about this based on your earlier question, the focus, the acknowledgment that those bigger customers are the source of our growth, that's where we're going to put all the wood behind the arrows on product development, on go-to-market.
That's a huge difference than even 18 months ago when even internally there were debates about, well, why are we prioritizing on the bigger customers? We're a developer cloud. And that's just the wrong narrative for this company. We treat developers really well. We're part of that ecosystem. We care very much about it. But those developers become bigger customers. The ones that are going to drive the scale for us are the larger customers. That's where we're focused.
Yeah, that's actually really interesting. OK. So historically, you would think of DigitalOcean and other players in that space as this is the sort of alternative infrastructure, easy to use, simple, inexpensive, but not necessarily a feature-rich solution that you're going to grow up with if you're a rapidly expanding business. So how should we be thinking about DigitalOcean today?
So we should be thinking about DigitalOcean as the alternate cloud for everyone that is not willing to commit to a multi-million-dollar, multi-year contract with a hyperscaler. And there is a huge number of underserved customers with unmet needs in the world of cloud and AI. What are these unmet needs? Things are super complex. I know a hyperscaler is complex because I worked in two of the hyperscaler clouds.
So I know why their platforms are so broad: they're catering to large, complex enterprise companies. It'll kill you with the curse of choice, and that just balloons the complexity from an operational posture perspective. You need a massive DevOps team to babysit and monitor the infrastructure. So for us, we make the complex simple, and the simple affordable and predictable from a cost perspective. We also take pride in the fact that we are the most approachable cloud. Why?
Because we are large enough to scale with our customers but small enough to care about them. We do that with first-class customer service and support. And we are a pure cloud, meaning we don't come back and compete with our customers. So that segment of the market, between the very early-stage startups or hobbyists and the threshold where you get named-account status with a hyperscaler, is a massive blue ocean for us: underserved customers with unmet needs.
Yeah, so if you spend less than a million, you are an unmanaged account at AWS.
And here we are talking about we have 500 customers at $100K.
Right.
So we can.
Unmanaged account means you're not important enough to have a human associated with the account.
Right. So we can 10x our output with our big customers and still fly under the radar.
Yeah. Yeah. Let's talk a little about AI/ML. And so the metrics you gave us, Matt, were 160%?
Yeah, north of 100%.
Which is great, right?
Yeah, our growth rate, yeah.
Yeah, fabulous. And that it contributed more than three points of revenue growth. So can we infer from that how big it is?
You can.
Yeah.
You have paper and pen.
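With paper and pen, or a few lines of Python, the inference goes roughly like this. All figures below are rough assumptions pulled from numbers mentioned elsewhere in the conversation (roughly $900 million revenue, 13.2% growth, AI contributing "more than three points" of growth at "north of 100%" growth), not disclosed exactly:

```python
# Back-of-the-envelope sizing of the AI/ML business from two data points:
# AI contributed "more than three points" of revenue growth and grew
# "north of 100%" year over year. All inputs are rough assumptions.
total_revenue = 900e6    # roughly this year's revenue (stated ~$900M)
growth_rate = 0.132      # 13.2% reported top-line growth
prior_revenue = total_revenue / (1 + growth_rate)

ai_growth_points = 0.03  # ">3 points" of growth attributed to AI
ai_growth_rate = 1.0     # "north of 100%" year over year, taken at the floor

# AI's incremental revenue is its contribution to growth, measured
# against the prior-year base; its own growth rate then pins down its size.
ai_incremental = ai_growth_points * prior_revenue
ai_prior = ai_incremental / ai_growth_rate
ai_current = ai_prior + ai_incremental

print(f"AI revenue last year: ~${ai_prior / 1e6:.0f}M")
print(f"AI revenue this year: ~${ai_current / 1e6:.0f}M")
```

Under these assumptions the AI business lands in the tens of millions of dollars of revenue; a faster AI growth rate than the 100% floor would imply a smaller starting base for the same growth contribution.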
Yeah, yeah, yeah, so is it exceeding your expectations, Paddy?
It is exceeding our expectations. In my earnings script, I used a very careful choice of words to say it follows our hypothesis. We have stayed very disciplined to our philosophy, and we are very clear on what we don't want to be. We don't want to be an undifferentiated GPU-farm provider. What we want to be is a provider of a software-centric AI stack that is heavily indexed on inference workloads, and that's what we are building towards. I've been saying that we have a three-layer AI strategy: infrastructure, platform, and applications.
We have shipped products on all three layers. We have dozens of customers in our infrastructure layer. We have thousands of customers in our platform and agentic layer. It is still early for revenue attribution for the platform and agentic layer. A lot of our revenue today comes from our infrastructure layer.
But even in the infrastructure layer, we are super differentiated with our GPU Droplets, which have run out of capacity a couple of times in just the four months they have been live. And the reason is that they again provide the same simple, scalable, approachable concept in the world of GPUs. They're available on demand, and they're available for fractional access.
So the combination of those two things makes it super easy and accessible for customers that want to do fine-tuning of LLMs. But we think the future is going to be in inferencing. That's why we are scaling towards the GenAI Platform, which is essentially API-based serverless endpoints for open-source LLMs like Llama, DeepSeek, and Mistral. We feel really good about the growth rate while staying disciplined to our strategy and not getting carried away by someone else's strategy.
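As a sketch of what "API-based serverless endpoints" means in practice: instead of renting a GPU, the customer sends a chat-completion-style HTTP request and pays per call. The endpoint URL, key, and model name below are illustrative placeholders, not DigitalOcean's documented API, and the code only assembles the request rather than sending it:

```python
import json

# Hypothetical endpoint and credentials, for illustration only.
ENDPOINT = "https://inference.example.com/v1/chat/completions"
API_KEY = "YOUR_API_KEY"  # placeholder

def build_inference_request(model, prompt, max_tokens=256):
    """Assemble an OpenAI-style chat-completion request for a hosted
    open-source model (e.g. a Llama or Mistral variant)."""
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return ENDPOINT, headers, json.dumps(body)

url, headers, payload = build_inference_request("llama-3-8b-instruct", "Hello!")
print(url)
print(payload)
```

The software-differentiation argument made later in the conversation is visible here: the provider, not the customer, handles GPU provisioning, scheduling, and scaling behind that one endpoint.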
So you've run out of capacity a couple of times. What happens when you run out of capacity?
We provision more, because we have a pool of bare metal and a few other options. But we are very frugal when it comes to figuring out the capacity utilization of the different parts of our AI stack and ensuring that we are meeting our customers where they are.
Yeah. You know these sort of headline stories about data center orders having been canceled or scaled back. Do you think that's really happening?
I think there's a load balancing happening for sure. And there's a lot of nuance in looking at the Microsoft strategy. I don't want to speak on their behalf. But I think there's definitely a moment in time. I don't know if it is now or in the next few weeks or a couple of months. But the center of gravity is shifting from training to fine-tuning to actual inferencing.
Because at some point, people have to make money from AI. And that is going to happen when businesses derive business value out of AI, which comes from inferencing. So we are going to start seeing a shift. We are already starting to see it, even in our infrastructure business, where a lot of the more advanced startups are using our infrastructure to do inferencing rather than fine-tuning.
Interesting. All right. And how are we doing on time? Five minutes. If you look at Vultr and CoreWeave, how would you differentiate DigitalOcean's strategy from those two?
Yeah. CoreWeave is a data center provider of high-end GPUs servicing a very small set of training-oriented LLM builders.
Yeah.
None of.
You don't want people.
Yeah, so none of what I said about our AI strategy overlaps with that. The Venn diagrams don't overlap.
Remember, at GTC last year, we were talking to one of your salespeople who said that someone had approached him and basically asked to rent everything Paperspace had to train a model for two weeks. Like, we'll take everything you've got. And the answer was, yeah, that's just not...
That's not our business model. Yeah. And I also think there is a huge difference: training is a very hardware-differentiated value proposition, because it depends on your chip, your network configuration, your data movement inside the data center, and things like that. Inference, on the other hand, is a software-differentiated strategy, which is our sweet spot, where a lot of the things we are good at really matter.
Vultr versus us: I don't want to speak specifically about any competitor, but against the alternate small cloud providers, we are well ahead in the breadth of our platform and the sophistication of our software. That's why we call ourselves the simplest scalable cloud. There are a lot of simple clouds, but they're simple for a reason: they're not scalable. We are the simplest cloud that is also scalable.
Yeah. You give up some growth rate, though.
We are catching up.
Yeah.
We are catching up, and that is, as Matt said, the leading indicators are what you're seeing, which is, hey, larger customers and customers that are growing. We are capturing market share from other clouds at the high end of our spectrum. Our NDR is recovering, so the next thing you're going to see is a reacceleration of growth across the board.
And we're focused on durable growth, which is what Paddy was saying. We could buy more GPUs and rent them, and that would drive a near-term revenue boost. But then what do you do the next year, when the margins for that service go down and the effective price you're getting for GPU rental continues to be commoditized? We're subscale, so we're not going to be competitive on cost there. Our differentiation is in the software layer and the application layer, and that's where we've been putting our investment.
Yeah. I'm good with that. I tend to cover my companies for decades. It's what ends up happening. So make the decisions that are going to make it work out over time. It makes my life easier, too. OK. Questions from the audience? Yeah, in the back. Sorry. We're blinded by lights, so I can't see who it is.
How do you think about the convertible that you have outstanding, which comes due in a little over a year and a half? How do you come up with $1.5 billion?
Yeah.
To repeat the question, the question is the plan regarding the outstanding convert, right?
Yes, so as we said on the call, we're likely to do something at some point this year before it goes current. We're in a phenomenal position from a balance sheet perspective. We've got a load of cash. We have over $400 million in cash. There's zero coupon on that convert. We've got a lot of different vehicles available to us, whether it's additional convert or it's traditional debt. And we're paying attention. And we're thinking about it. We'll likely do something before the end of the year.
Yeah. Last one.
Can you talk a little bit about the Atlanta data center and how that will benefit DigitalOcean?
Yeah. I can start, Matt. And then you can chime in. So the question was color around the Atlanta data center. So as we described on the call, we have the Atlanta data center coming online this quarter. And the way it will benefit is twofold. One is it provides us with a clean cluster, a native build for our AI needs.
And there's a lot to unpack there. We have jammed our existing AI infrastructure into existing data centers, which is not optimal for us from a density and networking-configuration perspective. So this gives us a clean build, and it gives us more capacity to do things our way, the engineer's way. That's the immediate thing.
The second thing, which is perhaps a little longer term, is that, as Matt likes to say, we are in some of the world's most expensive geographies from a data center perspective: San Francisco, Toronto, New York City, London. It's like, OK, was Tokyo not returning our calls? Mumbai, Singapore: in literally every one of the most expensive cities, we have a data center right in the heart of the city.
So this gives us an opportunity, at least in North America, to take some of that load off longer term. We won't rush into it, but it gives us the ability to do some consolidation. Atlanta is one of the cheapest locations from a power perspective, so it really gives us a decent footprint to start consolidating some of our compute data center footprint on favorable economic terms.
So that will take several quarters to play out. We're not going to rush into it. So we're not building out the whole shell. We have a shell. But we are only going to fill it as the demand dictates. But over time, it's going to be very good from a data center expenses point of view and favorable from a gross margins perspective.
Why is power cheaper in Atlanta?
I don't know the answer to that question. But Atlanta, Minnesota, and a couple of other places are where it's cheapest; it's a combination of local regulation and things like that. What we got was a really, really good deal.
You got a really good deal. OK. Love it. We'll stop there. All right. Matt and Paddy, thank you so much. It's great to have you here.
Thank you, Patrick.
I really appreciate you coming.
Appreciate it.