Good morning, everyone. I'm Brad Reback with the Stifel Equity Research team. Next up, we have DigitalOcean, Paddy Srinivasan, CEO, Matt Steinfort, CFO. Gentlemen, thank you very much for joining us.
Great, thank you.
Thanks.
It's nice to be here.
So, Paddy, maybe just for those newer to the name, high level and very candidly: why does DigitalOcean exist?
Okay.
Right? There are three massive CSPs. How have you defined your niche in the market?
Thank you, first of all, for coming, all of you. It's a pleasure to be here, and that's a great question to get us started. DigitalOcean's mission is to simplify the cloud so that developers can change the world, and we literally mean it and live that mission every day. The reason for our existence is that the world of cloud computing is super complex and super expensive unless you have deep pockets and are a large enterprise. Most of our customers are what you would generally term small or medium businesses, with fewer than 500 employees.
They typically don't have the bandwidth, the skills, or the budget to take something super complicated like cloud or AI and build innovative solutions on top of it. What they want from a public cloud platform is something super simple and compelling that gives them 80% of what they need and 0% of what they don't need, so they can kick-start their journey very quickly, validate, and keep growing as they hit certain thresholds. DigitalOcean exists to serve these developers, and I think we have done a tremendous job of it over the last 10-plus years. We have over 640,000 paying customers.
A lot of them, 480,000, we call learners: developers who pay us to build and host small applications on our platform. Typically they're learning how to program or experimenting with a new application. Once they start building a real application, they graduate to what we call builders, and from there they get to scale. That's why we exist, and I think our business model has proven there is definitely room for a public cloud platform that serves the needs of the masses.
A few years ago, about two years ago at this point, you acquired a company, Cloudways, that was more of a service, I guess a managed service on top of the infrastructure. So if you think about your customer base and your go-to-market, how do you determine who's a Cloudways customer and who's a DIY customer, we'll say?
Yeah, that's a great question. Cloudways is a managed hosting service, and the word service is not to be confused with a consulting service of any kind. It is a technology platform, but for hosting websites, whereas the core DigitalOcean platform is for building and deploying software applications. So if a customer walks through our combined front door saying, "Hey, I want to host a WordPress application," we funnel them into the Cloudways funnel. For everything else, it is the core DigitalOcean platform.
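To make that front-door funneling rule concrete, here is a minimal sketch in Python; the helper and keyword list are hypothetical illustrations of the decision described above, not an actual DigitalOcean system.

    # Hypothetical sketch of the front-door funneling rule; illustrative only.
    def route_signup(stated_need: str) -> str:
        """Send website-hosting intents to Cloudways; everything else
        goes to the core DigitalOcean platform."""
        hosting_keywords = {"wordpress", "woocommerce", "website", "magento"}
        if any(keyword in stated_need.lower() for keyword in hosting_keywords):
            return "Cloudways (managed hosting)"
        return "DigitalOcean core platform"

    print(route_signup("I want to host a WordPress application"))  # -> Cloudways
    print(route_signup("Deploy an API backend on Kubernetes"))     # -> core platform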
Got it. Rather than putting it off to the end, let's go right to AI. So maybe help us understand how Paperspace has accelerated your presence there and what the AI strategy is, because it's clearly the Wild West out there right now.
Absolutely, it is the Wild West, and I'll start with the AI strategy and then explain how Paperspace and the acquisition fit into it. As I've dug in over my first 100 days, I think AI fits perfectly with what makes DigitalOcean a special place for the developer community in general. In my mind, there are three reasons why customers choose us. One, we make the complex simple. Two, we make the simple affordable, consistent, and transparent from an ROI perspective. And three, we do one and two standing on the shoulders of our incredible developer community, which is thriving and vibrant, much more so than any other developer community I've been part of.
And I was part of the Microsoft developer community back in the day, then Oracle and Amazon and so forth. We have a very passionate set of developers in our community. When you look at those three things, all of them are not only durable but essential for the AI wave. AI is super complex right now. If you're not one of the hot Silicon Valley startups with billions of dollars of funding, it's really hard to get started. How do you get access to compute power? How do you get lightning-fast storage? Which LLM do you want to use? Do you want to go with a closed-source or an open-source model?
There are many, many complex decisions to unpack, and we are offering, as always, a very easy, simple, delightful way to get started. So that's number one. Number two, AI is also super expensive today, so I think we have a great opportunity over the next several quarters and years to make it super affordable, whether that's consuming an LLM endpoint in the cloud or on the edge, forking an existing open-source model and building your own version, building a collaborative machine-learning-ops pipeline, or hosting a model. We believe we can build a very durable, compelling ROI story that makes AI affordable for the masses. And number three, there is a tremendous amount of appetite in the community for learning how to build AI-enabled applications.
I firmly believe that over the next several years, pretty much every horizontal and vertical application is going to be reimagined using AI. If you think about it, cloud was a displacement market, right? Cloud took software that was built one way and changed it to a different model: subscription-based, SaaSified, remotely hosted. That was cloud and SaaS. AI is going to do all of that and also displace services, services that are rendered by humans. So there's going to be a new category of applications that will emerge and replace different types of professions and professionals. I feel there's going to be tremendous displacement with AI, and the appetite to learn how to build those essential services is huge. In all three of these areas, DigitalOcean will play a very vital role in helping democratize the accessibility of AI.
So on your last earnings call, on more than one occasion, you mentioned that you were capacity constrained serving this segment of the market, and clearly getting H100s is a nontrivial task. How do you think about the CapEx requirements and the speed with which you'll invest against that capacity constraint? And, candidly, how do you compete with the CSPs from a purchasing standpoint? Does NVIDIA carve out some supply for you?
Yeah. So on the capacity constraint, it was less that we couldn't find or get the gear. It was more that we were starting from zero-
Yep.
and we were having to buy the gear, receive it, deploy it, get it tested, and get it spun up. So we articulated our plan for this year, which was a fairly modest capital spend, as you indicated, relative to the billions or hundreds of millions of dollars you're seeing from either the hyperscalers or some of the pure plays. For us, it's really a thoughtful pacing of how much capacity we need. We have a different customer set. They don't need the giant GPU farms with tens of thousands of cards, and that's not the model we're going to pursue. We need to make it available to small developers in the ways they're going to want to consume it.
So we've worked through the early stages of that. A lot of the capacity we ordered late last year is now coming online, so the first quarter was constrained because we didn't have it all deployed. We're still deploying some, and as Paddy said, early indications are very positive. We're seeing good traction. We had a good 32% increase in ARR from the fourth quarter to the first quarter, we saw a big increase in hours sold, and we'll continue to add capacity over the course of the year. At the quantities we're buying, we don't have any issue getting capacity. There are enough people in the industry canceling orders through the distributors that we can find it. If you want $1 billion worth of H100s, you probably have a problem. If you're looking for tens of millions, you don't have a problem finding that.
So, Matt, you have a history in telecom, right? Telecom equipment and infrastructure. Candidly, I don't remember how far back that goes, but-
Ninety-nine.
Okay, so that's perfect. That's exactly what I was hoping. So as you think back to your experience at the beginning of your career and the build-out of the Internet, what are the similarities and the differences this time?
Yeah, it's an interesting analogy. I joined Level 3 Communications in 1999, at the height of the telecom fiber deployments. What you had then was a lot of people with strong conviction that the world was going to need a massive amount of fiber, and they put billions of dollars in the ground. The difference between now and then is that a lot of the people building back then used debt to finance it, whereas a lot of the billions going into AI right now are coming from the hyperscalers, who have ample cash. So there's maybe not the same kind of liquidity and leverage overhang. But you have a lot of people pursuing a market with a lot of capital intensity, and once you do that, you have a lot of pressure to sell into that capacity.
The challenge is: do you end up with too much fiber, or in this case too much GPU capacity? Do you have too many providers of that capacity, which can lead to challenging pricing? Looking back, did we have too much fiber? No, we didn't; it all got used. Did all of the people pursuing it at the time make money on it and survive longer term? They didn't. We're hopeful that this time there's a lot more prudence in our strategy around how much we're sinking into the AI space to take advantage of the growth. But I think you'll look back at this in five years, and you may see some similarities, in that not everybody who puts dollars into GPU capacity in an undifferentiated way will get the returns they're expecting at this point.
Got it. And to your point on tens of millions versus tens of billions being spent on GPUs, it sounds to me like you can be very reactive to your demand signals. Maybe you're off by a few months one way or the other, but you don't have to get multiple quarters or years ahead from an investment standpoint. So your CapEx doesn't need to be as front-end loaded as what the CSPs are doing right now.
Yeah, I'd say it may be more front-end loaded than what we're used to, but it won't be on the scale or of the quantum that the pure-play GPU farm providers or the hyperscalers are doing.
Great. Paddy, you said something interesting before about developers and AI apps going forward, and as I think about it, a few years from now, all apps will be AI-infused in some way, shape, or form. I know it's a hard question to answer, but let's play it forward five years: what percent of your chip capacity do you think will be standard chips versus GPUs, or does that distinction completely blur over time as well?
Yeah, that's a really impossible question to answer. Anyone who tries to answer it is just making it up, because right now there's only one way we know how to build and infer models, and that's GPUs. But there are already multiple efforts to change that, with LPUs and different types of architectures, right? And right now it's not just GPUs; it's GPUs of a specific kind. So I expect the architecture to evolve over the next several years, and it's really hard to speculate what the GPU-versus-CPU makeup of our farms will look like, but it's going to be a blend of all of the above. I also want to build on something Matt said, which I find really interesting.
The key word in Matt's last sentence is differentiated: we have to be differentiated in our AI value proposition. That's why we are very particular, and you asked about Paperspace, about not following someone else's AI strategy. We are not going to build hundreds of thousands of GPUs in a GPU farm. We are trying to understand our customers and their AI needs at a very deep level. With Paperspace, what we acquired is, quote, unquote, a "platform as a service for AI," and I'll explain that in a second. On top of it, we introduced infrastructure as a service for GPUs in January. So we have two entry points for developers looking to build AI apps.
If they're sophisticated enough and they say, "Hey, we just want direct access to the GPU. Give me just enough virtualization and orchestration; I know what I'm doing. I'll run these workloads and figure it out," they can adopt our infrastructure as a service. If, on the other hand, they say, "Hey, we need some handholding. We're not going to build a model, but we are going to extend one, inject our custom data, and use an inferencing endpoint to AI-enable our existing application," then they typically start from the platform-as-a-service layer.
We have full life-cycle support for that, all the way from discovery of models, to designing your application, to building a data pipeline, to versioning your machine-learning models, and then, of course, deployment and inferencing. Our platform as a service covers that entire gamut, and of those steps, only the last one needs GPUs. We are crafting a very thoughtful, durable AI strategy that is tailor-made for our customers and their needs, rather than following someone else's strategy. Time will tell what the composition will look like, but we are going to be very disciplined in how we approach this, because all that matters for us at the end of the day is making our customers successful and helping them adopt AI at the pace they want, and we're not going to get caught up in the hype cycle.
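To make the infrastructure-as-a-service entry point concrete, here is a minimal sketch of provisioning a GPU-backed Droplet through DigitalOcean's public Droplets API (POST /v2/droplets). The size and image slugs are illustrative placeholders rather than confirmed product SKUs; real slugs can be listed via GET /v2/sizes.

    import os
    import requests

    API_BASE = "https://api.digitalocean.com/v2"
    TOKEN = os.environ["DIGITALOCEAN_TOKEN"]  # personal access token

    def create_droplet(name: str, region: str, size_slug: str, image_slug: str) -> dict:
        """Provision a single Droplet via the public Droplets endpoint."""
        resp = requests.post(
            f"{API_BASE}/droplets",
            headers={"Authorization": f"Bearer {TOKEN}"},
            json={"name": name, "region": region, "size": size_slug, "image": image_slug},
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["droplet"]

    # The size slug below is a hypothetical GPU instance name used for
    # illustration; check GET /v2/sizes for what is actually offered.
    droplet = create_droplet(
        name="ml-inference-01",
        region="nyc3",
        size_slug="gpu-h100x1-80gb",  # placeholder
        image_slug="ubuntu-22-04-x64",
    )
    print(droplet["id"], droplet["status"])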
Got it. Switching gears: as you mentioned earlier, you have roughly 650,000 total customers, with 450,000 that spend appreciable amounts of money with you each month, and a worldwide footprint, about half U.S. and half rest of the world. So you have a pretty good view into the SMB economy. Without getting into the nitty-gritty and trying to tease out last week's consumption trends, are there areas, geographically or vertically, where you're seeing strength or weakness? Any appreciable trends that differ within the base?
From a big-picture point of view, I would say no. There aren't a lot of idiosyncratic patterns that we observe, whether by geography or by vertical. SMBs are SMBs. They face all the same headwinds you read about in the newspapers, and they are very careful about eliminating wastage, being frugal and mindful of what they consume, turning off unnecessary capacity, and things like that. But there's another important thing I want to bring to your attention, a very important nuance. Are most of our customers SMBs? Yes. But most of our customers, especially the bigger ones, are tech companies. They are SMBs, but they're tech companies. Why is that important?
It's important because tech is their business. They make money from software that runs on our platform, which is a pretty big difference compared to, "Oh, I'm just running an appointment-setting service as a local bicycle shop." That's very different from, "Hey, I am selling this application, and this is how I make my payroll." So that's a big distinction in my mind: as we look at our customers, most of them at the top end are tech companies or ISVs.
So on that point, Matt, you've mentioned that one of the moderating factors on growth recently has been contraction. As I think about contraction at a high level, I liken it to the optimization we saw out of the CSPs broadly. A lot of that has abated, but you all continue to talk about some contraction headwinds. So what exactly is that for you, and what's the difference between that and churn in the model?
Yeah. So NDR has three components, right? It's got expansion, contraction, and churn. Churn is when a customer leaves the platform entirely. Contraction is when they spend less. Expansion is when they spend more on the platform. Churn has not been elevated, and it hasn't been for over a year. Even at the peak of the downturn last year, in January, it was only up a point; it just wasn't an issue. Contraction was up a handful of points and remained up a handful of points. It's been slowly moderating, though, so we're seeing contraction come back toward historical levels, and again, it's only a handful of points above them.
And for us, it's more likely that the underlying customer's business isn't growing, or is shrinking, than that they're dialing back their service. I'll give you one example that's counter to that, from the height of last year, when the pressures were at their worst. We had one customer that said, "Look, we used to store data for 90 days as part of our business model." It was a legal something service. They said, "30 is good enough," and so they changed their business model. They didn't need 90 days of storage; they needed 30. But a lot of it is just the ebbs and flows of their own business.
Where we've seen the biggest headwind for growth, and the key driver for getting NDR back above 100, is expansion, which has slowed. Expansion is customers growing their business on our platform. That was in the mid-30s a year-plus ago. With the slowdown in the market, which we attribute largely to macro pressures, it's now in the low 20s, and it's very stable; it stabilized in late summer last year. That's where a lot of the increased product velocity and innovation Paddy's been talking about is focused: how do we enable our customers to grow on our platform? If their core business isn't growing, how do we win more workloads? How do we enable them to buy more products from us to drive ARPU up until the macro environment improves and those core businesses start to grow again?
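As a back-of-the-envelope illustration of how those three components roll up into NDR, here is a small sketch with assumed numbers, not DigitalOcean's actual cohort data:

    def net_dollar_retention(start_arr: float, expansion: float,
                             contraction: float, churn: float) -> float:
        """NDR for a period: the starting cohort's ARR, plus what it expanded,
        minus what it contracted and churned, divided by starting ARR."""
        return (start_arr + expansion - contraction - churn) / start_arr

    # Illustrative only: a $100M cohort where expansion has slowed from the
    # mid-30s to the low 20s (in points of starting ARR), contraction runs a
    # few points above historical levels, and churn stays modest.
    print(net_dollar_retention(100.0, 22.0, 15.0, 8.0))  # 0.99 -> NDR just under 100%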
Let me see if there are any questions in the audience. Okay, so on the expansion part... Oh, I'm sorry.
No, that's...
Go right ahead.
I came in a bit late, so I may have missed this, but how do you think about refresh cycles for GPUs? It might be a little different for you, but NVIDIA keeps releasing a new GPU every year; Rubin is the one after Blackwell. How are you thinking about providing the latest infrastructure? I've seen folks say their refresh cycles are seven years; some say three years. How do you think about keeping the latest infrastructure for your customers, whether SMB or scaler or whatever?
I can start, and then, Matt, you can chime in on the refresh cycle.
Yeah.
I think you're being a little generous by saying they come up with something every year; it's more like every quarter these days. Blackwell is not even out, and they're already talking about the next generation. From our point of view, we are carefully calibrating, looking at what is available and what our customers need. And again, this goes back to not getting too caught up in the hype cycle. We are right now on the Hopper series, and as we place new orders for GPUs, we carefully blend them, so it's not 100% of one type of box; we diversify, and we'll continue to do that. There are specific workloads that need different types of GPUs, and we follow the demand from our customers in terms of what their needs are.
Yeah. On the refresh cycle, you have to separate physical useful life from economic useful life. On physical useful life, these are servers. They'll last us about the same length of time any other server lasts; whether that's five, six, or seven years depends on how you maintain them. The real question is the economic useful life: if you're getting a certain number of dollars per hour of compute on the current version, and a series of newer versions keeps coming out, how long does the price of an H100 hold, and how quickly does it decay? That's where I think the industry is still sorting things out. I don't think anybody knows, and there are a lot of different opinions about how durable that price is and what the decay curve will be. But we'll certainly see it play out over the coming years.
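As a toy model of that economic-useful-life question, here is a sketch in which every input (hourly price, utilization, decay rate) is assumed for illustration: it sums the revenue one GPU earns as its market price decays each year, the quantity you would weigh against carrying cost to judge when the card stops paying for itself.

    # Toy model of economic useful life; all inputs are assumed.
    def cumulative_revenue(hourly_price: float, annual_decay: float,
                           utilization: float, years: int) -> float:
        """Sum yearly revenue from one GPU as its market price decays."""
        total = 0.0
        for year in range(years):
            price = hourly_price * (1 - annual_decay) ** year  # decayed hourly rate
            total += price * utilization * 24 * 365            # hours billed that year
        return total

    # E.g., $2.50/hr at 70% utilization with prices falling 25% per year:
    for horizon in (3, 5, 7):
        print(horizon, "years:", round(cumulative_revenue(2.50, 0.25, 0.70, horizon)))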
Have you seen any price compression on the A100s?
There's been less price compression there. The older models tend to hold their pricing pretty durably, but the pace at which new models are coming out, as Paddy said, is fairly dramatic right now, and you still have essentially a single provider in the industry. You've got other providers working on things, and we expect the cost curves in AI to bend quite a bit in the next 18 to 24 months. Again, that's another reason we're not building a giant farm with hundreds of thousands of chips of a single variety. At the pace at which we deploy our capital, it'll be smoothed across versions as we go.
Thank you.
A couple of final ones. On product expansion, you just mentioned you need more product, and Paddy, you talked about some product gaps last quarter. Maybe outline the high-level areas where you see opportunity to invest and expand the solution.
Yeah, great. As I explained on the last call, there's a lot of work to do, but the area we are focused on right now is expanding the breadth of our platform for our scalers. That means specific things like advanced security options, advanced reporting and management capabilities, networking under the covers, more leading-edge geographical coverage, and advanced load balancing. Some of the things we recently released are along those lines: different types of Droplets, which are basically our virtualized infrastructure, whether memory-optimized or storage-optimized, and increased robustness in our storage and CDN capabilities.
If you look at the underlying pattern in the dozen things I just mentioned, these are all for companies that are doing really well. They're expanding. They need better coverage. They need a way to connect different types of applications running in different data centers. Their developer community has grown, and they want a better way to manage the infrastructure. These are all addressing the emerging needs of companies that are thriving on our platform, and that's where most of our focus is.
I guess your next question is going to be, "Okay, what is next?" Obviously, there's going to be a lot of AI, both AI as a service on the infrastructure and platform sides, and AI in how our products are used by customers. One of the things we released, and we didn't think it was going to be such a big deal, was ChatGPT-like functionality for our product documentation. It was wildly successful, so much so that Matt had to approve incremental budget about two weeks after we released it because the usage was so heavy. There's obviously a lot of demand and pull from our customers for these things, so you will see us follow where that demand and pull are coming from.
Should we think of that as being almost exclusively internal development, or do you see buying being part of the equation on product expansion?
It's too early in my tenure to say. We are assessing the overall landscape, internal and external. There's a lot of opportunity, and this is not just a vanity statement, across build, partner, and buy.
Yep.
In that sequence: we have a lot of things we should be building as a core part of our platform experience. I think we will be more aggressive with strategic partnerships to expand the footprint of our offering, not to mention partnerships on the go-to-market side; even in our core offering, there is a lot of opportunity to expand through strategic partnerships. And then, of course, as a tech company that is really looking to grow, we are always scanning the landscape to see what is interesting and looking at acquiring different types of assets.
And then, wrapping up, because we've got a minute left, maybe tying it all together: over the last couple of years, you've priced for value a few times. Where do you think you are in that cycle right now? Is there further opportunity in the near term, or is it more medium to long term? And how do you think about the margin impact?
Yeah. From a pricing and packaging point of view, yes, we did a price increase, and we have now lapped it, so from a financial modeling perspective, the creativity will come on the packaging side. For example, we released premium Droplets, memory-optimized Droplets, and storage-optimized Droplets. Those are all new capabilities at a premium price, and a lot of our customers had been demanding them. You'll see a lot of other packaging initiatives, workload-based packaging as an example, to help our customers consume more of our stuff at the click of a button. We will keep reevaluating how things stand on an ongoing basis, but you should expect to see a lot more creativity from us on packaging rather than moves on core pricing.
Perfect. Right at time.
Awesome.
Thank you, gentlemen.
Thank you so much.
Thank you, Brad.
Thank you.