DigitalOcean Holdings, Inc. (DOCN)

53rd Annual JPMorgan Global Technology, Media and Communications Conference

May 13, 2025

Moderator

Thank you, everybody, for coming. I'm delighted to have with us the CEO of DigitalOcean, Paddy Srinivasan, and CFO Matt Steinfort. Guys, thank you for joining.

Matt Steinfort
CFO, DigitalOcean

Thanks for having us, Pinjalim.

Moderator

OK, let's start with maybe give a brief introduction about yourself and maybe talk a few words on DigitalOcean for the people who might not know about the company.

Paddy Srinivasan
CEO, DigitalOcean

OK, Matt, do you want to start?

Matt Steinfort
CFO, DigitalOcean

Yeah, I'll start. I'll give a quick background on myself, and you can cover the DigitalOcean story. So, Matt Steinfort, I'm the Chief Financial Officer. I've been with the company for two years, with a long background in both startup software companies and large-scale telecom infrastructure companies. And I'm now the longest-tenured executive on the team, with Paddy coming in and bringing on a new team. So very excited to be here.

Paddy Srinivasan
CEO, DigitalOcean

Thanks, Matt. And Pinjalim, it's wonderful to be here. Thank you for hosting us. I'm Paddy Srinivasan, the CEO of DigitalOcean. For those of you who are new to us, DigitalOcean is a cloud and AI platform serving digital native enterprise companies. We have over 640,000 paying customers, of which we think 170,000-plus are digital native enterprises that use us for a mission-critical aspect of their revenue generation, whether it is powering the technology they sell or a product they monetize using technologies that run on DigitalOcean. We've been around for 12 years, four of them as a public company. I took over DigitalOcean about 16 months ago, and since then we've made a lot of calibrated bets, which I'm sure we'll get into. But that's a little bit of background on DigitalOcean.

Moderator

You joined in January of last year. I was looking back at the model. At that time, we were forecasting high single-digit organic growth. Net retention had dropped below 100. AI/ML was a new space for you because you had just acquired Paperspace. Fast forward: organic growth is picking back up into the low to mid-teens, NDR is now over 100, and obviously you're making great strides with AI/ML, right? So talk about the bets you mentioned before. What are those bets? What is working? And where do you need to see further improvement?

Paddy Srinivasan
CEO, DigitalOcean

Great. Yeah, you set it up perfectly, Pinjalim. The bets we made last year were, number one, it all starts with the product. Having a great foundation of product innovation is essential for us as a cloud platform. And for that, the essential question we had to answer was: who is the target customer we're going after? So I'm very pleased that we've been able to hone in on that and really understand why such customers choose DigitalOcean and what we should do to stand out in the market to these digital native companies. So that's number one that we've figured out, and we talked a lot about it at our investor day on April 2.

Number two is, of course, once we have that customer identified, we have to build the right products for them. So over the last three quarters, as you have seen, we've done two things. One, we have massively increased the velocity of product innovation. And two, we have started shipping features that are essential for companies, specifically digital native enterprises, to scale their footprint on DigitalOcean. These include advanced networking capabilities, multi-data center deployment, advanced security features, granular access control, and bigger Droplets, a bigger footprint of things to run more sophisticated workloads: bigger, faster, better in every dimension possible. This is all intended to help larger companies that, over the years, because of the lack of these capabilities, were forced to leave us and defect to other platforms, mostly hyperscalers.

So now we have the ability to keep them and keep them scaling on our platform. And that shows up in the cohort of customers larger than 100K footprint with DigitalOcean growing at 41%, as we reported in the last earnings call. So that was a big bet that we made. And it is starting to pay off. And the final piece is the company grew to its current levels primarily on the backs of a single go-to-market motion, which was product-led growth. We have a product-led growth motion, which is a gold standard in the industry. A lot of people copy our self-service funnel and our product-led growth motion. But over the last several quarters, we have started adding more complementary go-to-market motions to augment and amplify the product-led growth.

Number one, we stood up a very small sales team specifically focused on AI opportunities, to go outbound and build relationships with AI-native startups and the venture community. Number two, we started recruiting partners that can amplify our reach and bring indirect sales into the pipeline. And number three, we started a named account model to take care of our largest customers, so that we can build a holistic, full-lifecycle engagement with them; that's still evolving. At the Analyst Day, I said that even though these new motions are very new, in 2024 20% of our new revenue came from them, so it is starting to show more than green shoots, I would say. In 2025, we expect more from these. But that was the second big bet, in terms of go-to-market.

Moderator

The Scalers Plus cohort is what you're talking about. These are the customers who pay you over $100K per year, right?

Paddy Srinivasan
CEO, DigitalOcean

That's right.

Moderator

And that has accelerated in growth. And you're saying that's because of product as well as go-to-market. Would you say the product side is more fleshed out at this point than go-to-market?

Paddy Srinivasan
CEO, DigitalOcean

Yeah, as it should be, yes. Product is absolutely a couple of steps ahead, and go-to-market is growing into those shoes, ensuring that both our customer acquisition engine and our customer nurture mechanisms are in flight. It's a multi-quarter undertaking; it just takes time to do that at scale and do it very efficiently. We have a very efficient, perhaps a little too efficient, sales and marketing engine. We spend 7% of our revenue on sales and marketing, and we are appropriately frugal as we look at the next version of our go-to-market. We still want to maintain a very efficient way to acquire and nurture customers, but at the same time, we know it's an essential aspect of taking growth from what it used to be, to where it is now, to the next chapter of our growth story.

Moderator

You have been pretty vocal about this graduation effect impacting the growth rate in the past, right? And you're stemming that graduation effect with products. Was there any particular one or two capabilities that started stemming it? Or would you say it's a portfolio effect of a number of capabilities you've added so far? And what metrics do you look at for that graduation effect? Is the main metric basically Scalers accelerating? Is that the main metric for us to judge by?

Paddy Srinivasan
CEO, DigitalOcean

Absolutely, yes. The number of Scalers, the output of Scalers, how fast they're growing: all of those are the lagging metrics, the metrics I would like investors to look at. But it's a combination of many of these capabilities, and also the customer engagement that we now have. It sounds very elementary and basic, but our ability to put our arms around these customers and ensure they have full-lifecycle support from us goes a long way. And the product capabilities range from bigger Droplets and a bigger footprint of things to choose from, to something I announced on the last earnings call: our ability to scale up to 1,000 nodes in a Kubernetes cluster. That's a huge deal. We co-innovated with a couple of customers who specifically wanted this feature to scale up and scale down.

Another great example of a very sophisticated feature that some of our larger customers need is the Partner Connect, which is the ability to connect our data center safely, securely, and privately with other hyperscaler data centers so that you can run multi-cloud workloads on us. Virtual Private Networking for our Virtual Private Clouds to run multi-data center privately networked applications. I mean, these are all pretty big, meaningful pieces of functionality that many of our larger Scalers Plus customers are using.

Moderator

The Scalers Plus portion of the business is, I think, roughly a quarter of the ARR, right? If I do the math, it seems like your average ARR per customer is somewhere around $300K at this point for that cohort. How should we think of that opportunity growing? You're starting to add some really large customers, with that $20 million deal that you talked about, right? That $300K: how should we think about the opportunity for each customer to expand going forward?

Matt Steinfort
CFO, DigitalOcean

Yeah, there's a huge opportunity for growth. We tried to give some visibility at the investor day, where we described the number of customers that spend $500,000 a year with us and the number that spend $1 million a year with us. Those numbers are growing at or above the same rate as the $100K customers. The revenue growth is higher; the number of customers is higher. You've got a number of factors here. As Paddy talked about, we had a product deficit with some of the larger customers before. When we would make an attempt to spin up a go-to-market motion and direct sales, we would find a lot of the time we couldn't close the deal because we had a product gap.

And so that gets to the lag conversation that Paddy had started, which is now we're layering sales on top of having the right product fit. And so we see a tremendous opportunity to grow the core cloud business. And then on the other side, you've also got AI as a tremendous growth opportunity for us. And we're seeing more and more kind of larger inferencing workloads and customers that have real scaled applications where they're trying to layer in AI capabilities. That's an entirely new growth vector that will help drive up the output.

Moderator

Yeah. So I want to talk about the rest of the business, right? So 25% of the business, let's say, is doing really well in the right direction. But the rest of the business is still a little bit of a laggard in terms of growth, right? So when do we see or what are the efforts that you're making on the go-to-market side or product side that would result in that growth rate to start to inflect, at least on the builders' part of, if not the lowest cohort?

Matt Steinfort
CFO, DigitalOcean

Let me answer first, because I want to add a piece of context, and then Paddy can cover some of the things we're doing on the go-to-market side. One, the way we categorize customers, we break it by MRR, right? A Scalers Plus customer is $100K or above. And so you say, well, what about the next one down? That's the Scalers, less than $100K. As the customers in that cohort are successful, they move into the next cohort, into the Scalers Plus. Because we range-bound it, the average ARPU of the next level down is going to be roughly the same all the time, right? Because you're looking at customers that spend from X to Y, and the average spend is going to be in between. That's not a great metric. To me, the best metric is a couple of things.

Look at overall revenue growth, look at NDR, and then look at the metrics around the Scalers Plus, which is the top end of our funnel. As long as that's growing, and growing well faster than the aggregate, the business is humming and we're migrating people up through the customer lifecycle. If you said, well, what's your ARR target for the next two buckets down? It's math. It's going to be somewhere in the middle of the upper and lower end of the range. What we're really looking for is: are we graduating people up into the higher spend cohorts? Paddy, I don't know if you want to talk about that.

Paddy Srinivasan
CEO, DigitalOcean

Yeah. So I think Matt articulated it really well: there's a selection bias in the larger cohort. As more customers graduate from a lower cohort to the big one, obviously the averages are compressed in the middle bands. And that's exactly how it should be. We look at Builders as the farm system for Scalers, and Scalers as the farm system for Scalers Plus. It is also important to underline that our product and go-to-market efforts are not focused exclusively on the top end of our customers, right? We are doing a lot of other things. For example, we are trying to democratize AI by building this GenAI Platform. We have over 5,000 customers on it, and over 80% of those are existing DigitalOcean customers trying out something new on GenAI using our platform.

So these are all great leading indicators. Most of that today, like the rest of the AI market, is in proof-of-concept mode, but some of them are starting to deploy these things in GA, and they will go into production. So we are doing a lot from a product innovation point of view to cater to the long tail, the masses of the DO customer base. But also, from a go-to-market point of view, we have 3,000 named accounts, and then another 5,000 accounts that we are modeling as the ones with the biggest potential to consume more of our services. We are using an algorithm with heuristics on product usage, company attributes, firmographics, all that stuff, to figure out who those next 5,000 customers are.

We're doing a lot of things on both product and go-to-market to figure out how we get the next couple of thousand customers ready to get beyond the $100K-plus threshold. As Matt said, look at revenue growth as well as Scalers Plus growth over the next few quarters; that should give us the answer.

Moderator

Great. I want to go to AI after this, but I want to spend a little time on the migrations point you highlighted in the last call, right? 79 migrations, I think you said, in Q1. And I just want to ask, where are these migrations coming from? Because if people hear that customers are moving from AWS or Azure to DO, there's a question mark, right? Why is that happening? Maybe talk about where that migration is coming from. Is it from the three hyperscalers? Is it from the smaller companies you compete with? Where is the bulk of the migration?

Paddy Srinivasan
CEO, DigitalOcean

Yeah, it is all over the map. I mean, obviously, hyperscalers have a dominant market share in the world of cloud, so naturally, a lot of the migrations come from the hyperscalers, but also come from smaller cloud providers where the customers are outgrowing just the need for a hosting platform. They need more sophisticated storage, databases, other functionalities that hosting providers cannot provide. The interesting thing is the migrations are now being aided and enabled by us and our partner ecosystem, so that, I mean, a lot of the migrations were already happening, but now we have a proper program and a focus and spotlight and an incentive structure to intercept these migrations, aid these migrations, accelerate these migrations, and make sure that even after they deploy the first workload, we actively engage with these customers and make sure that we are nurturing them and expanding them.

Because these are customers who are choosing to come to DigitalOcean for a specific reason. And we think there's more we can do to help them take advantage of the whole breadth of our platform. The reasons are the same things we articulated at the investor day. It's significantly easier to stand things up and scale them on the DigitalOcean platform. The pricing is very predictable and transparent. You get significantly more open-source support on the DigitalOcean platform. You get world-class engagement and support with us. For us, a $100,000 customer is absolute king. If you're spending $10,000 a month at a very large hyperscaler, it's hard to get their attention. As a $100,000 customer here, you get to sit down with Bratin and Steinfort and ask for specific product features.

There's a really good chance we're going to build it over the next several weeks or months. That kind of very elevated engagement, from both our go-to-market teams and our engineering teams, is something many of our customers absolutely appreciate.

Moderator

Just to put a final point, out of the 79, what percentage would you say are coming from the three hyperscalers?

Paddy Srinivasan
CEO, DigitalOcean

I don't want to put any specific numbers. We'll see how it evolves over the next few quarters. But I would say it mimics, if you can tell me what the hyperscaler portion of the cloud market is, it's probably very similar to that.

Moderator

Got it. Okay. AI/ML: that's kind of the big topic around DigitalOcean nowadays. In the last earnings call, you noted that the demand you were seeing was exceeding the supply for some of the NVIDIA and AMD chips around inferencing, I think. But I want to ask, in general, as you think about the infrastructure side of things, not the platform side, how is DigitalOcean differentiating in the market versus some of the other players?

Paddy Srinivasan
CEO, DigitalOcean

So there are a couple of ways in which we are differentiating. First, on demand exceeding supply: I think I specifically mentioned the H200s and the MI300X, the bleeding-edge GPUs. It's largely also true for the other ones, but the new ones we cannot rack and stack fast enough. So what are the differentiators? We have multiple layers of AI offerings: infrastructure, platform, and agents. At the infrastructure level, we have a bare metal offering, which is the same as what companies can get from pretty much any other GPU provider. But we also have something called GPU Droplets. GPU Droplets come pre-configured with the things you need to take advantage of a GPU.

So it comes pre-fabricated with all the drivers you need, all the libraries, like the PyTorch libraries, all the dependencies. Everything is ready to go, just like how you experience Droplets in a traditional CPU sense. Why is that important? Because it takes all the overhead of managing GPUs out of the equation, right? We provide that abstraction, and it matters because it's really hard to do all of this yourself. It takes several hours to preload all of these things and ensure they work for that exact network configuration, that exact storage, and so on, so we make it super easy with a click of a button. Number two, these GPU Droplets are available on demand and fractionally, so you don't have to pre-purchase or have dedicated clusters.

You can use as few as you want, and you can flex in and flex out. That's a big reason why companies prefer us. And as we explained at the investor day, and I think this is a key point, a majority of what we saw in Q1, and now in Q2 for sure, is transitioning over to inferencing. Now, when you talk about inferencing, it's a totally different ballgame, right? Inferencing means it is real world. Inferencing means you're running things in a fashion where you're actually serving an end user's needs. So inferencing also means it's not just the GPUs that are important. You also need a place to pump in data. You need a place to transform the data, or data pipelines to transform it. You need databases. You need storage.

You need a lot of other things that are typically offered by a full-stack cloud like ours. So when it comes to inferencing, we also have capabilities that provide an optimized experience for running workloads in inferencing mode; there are many things that make our infrastructure more conducive to inferencing workloads. And on the platform layer, our GenAI Platform is one of only three or four that are super comprehensive: not just LLM endpoints, but all the other essential building blocks, like Guardrails as a service, the ability to host knowledge bases, and building RAG pipelines. We have a dozen or more other services that wrap around an LLM to make application building and consumption of LLMs super simple.

Then, of course, we have the agentic layer, which is totally differentiated with our out-of-the-box agents for troubleshooting, website performance, and things like that. At every layer, we have differentiation built in that is helping us ramp up this business really fast.
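The building blocks Paddy lists, a knowledge base plus retrieval wrapped around an LLM endpoint, follow the common RAG pattern. A minimal sketch, with invented function names and toy word-overlap scoring; this is not DigitalOcean's actual GenAI Platform API:

```python
# A minimal retrieval-augmented generation (RAG) sketch. Function names,
# the scoring rule, and the sample knowledge base are invented for
# illustration only.

def score(query: str, doc: str) -> int:
    """Crude relevance score: count of lowercase words shared by query and doc."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, knowledge_base: list[str], k: int = 1) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(knowledge_base, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str, knowledge_base: list[str]) -> str:
    """Ground the eventual LLM call by prepending retrieved context."""
    context = "\n".join(retrieve(query, knowledge_base))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Toy knowledge base standing in for a hosted one.
kb = [
    "GPU Droplets come preloaded with drivers and PyTorch libraries.",
    "Partner Connect links DigitalOcean data centers to other clouds.",
]
prompt = build_prompt("What do GPU Droplets come preloaded with?", kb)
```

A production pipeline would replace the word-overlap score with vector embeddings and send the assembled prompt to a hosted model endpoint; the shape of the flow (retrieve, then ground, then generate) is the same.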

Moderator

I want to go into the platform side, but before I go there: the Atlanta data center. I think you said it came online in Q1, right? But thinking historically, whenever you have these capacity unlocks, do you typically see a spike in GPU consumption in the couple of quarters after that?

Matt Steinfort
CFO, DigitalOcean

So, typical is kind of hard to say, given most people have only been in this market for a year-ish. I'd say if you look at our first deployment of GPUs, when we first brought on our first wave of infrastructure, we did see a spike. That was in the second quarter of last year, but you have to put yourself in the context of last year's second quarter: we didn't have an offering prior to that that met these customers' needs, so we were accumulating a backlog of interested customers, and we got all the gear online at once, so when we sold it, we had an initial pop of revenue. Since then, we've been selling GPU capacity continuously. We have GPU capacity in our New York facility, in Amsterdam, in Toronto.

We're able to sell, so we're meeting the demands of the existing customers. Bringing on Atlanta is part of our long-term data center optimization strategy, in addition to giving us GPU capacity. We've got a lot of GPUs already turned up there. They're already being sold and delivered to customers. We're in the process of deploying the rest of the DigitalOcean fabric there, the core cloud, because a lot of the GenAI platform and inferencing, you need all those other elements to come on so that you can have a full cloud offering to attach to those AI capabilities. That's still in process.

So we expect the ramp to be smoother in 2025 and not be a giant pop, but it's certainly helpful to have a lot of that capacity online, not only because we've got customers clearly interested in it, but it puts us in a better cost structure than deploying the same capacity in our legacy footprint.

Moderator

Yeah, thanks for that. I want to go back to your comment on inferencing workloads attaching to more than just GPUs, right? Attaching the core Droplets, DOKS, the databases, the other platform pieces. How much are you seeing that happening? How much is that attach rate at this point for these inference workloads? Maybe help us understand things through that.

Paddy Srinivasan
CEO, DigitalOcean

Yeah, it's still early days. We mentioned that as an interesting data point just to emphasize that inferencing is more than just GPUs. We'll continue to monitor it. It is even more true in GenAI as a service, our GenAI Platform, because to build a knowledge base you need a robust database to pre-process the inferencing workload, and you need many of the other cloud primitives. It's still early for us to put any numbers on it. I think we gave some numbers at the Investor Day, but we are watching it, and our hypothesis is definitely proving out to be true, especially in the GenAI Platform but also on the inferencing infrastructure side.

Moderator

Yeah. So the GenAI Platform, it seems like you're seeing robust growth, right? I think at the analyst day in April, you said something like 2,000 customers were using it, building 6,000 agents. And then in the earnings call, maybe a month after, you said 5,000 customers with 8,000 agents. Maybe talk about those agents. What are people building? How much of that is experimentation, and how much is actually production use cases at this point?

Paddy Srinivasan
CEO, DigitalOcean

Yeah. So a lot of it is experimentation. Experimentation might be too strong a word: a lot of proof of concepts, and proof of concepts for two reasons. One is proving out the technology, and two is proving out the business model behind that technology for our customers, because these things cost money. Even though a million tokens cost a dollar or something, these things add up really quickly. So our customers want to make sure they know how to monetize it, because it's not going to be agent-based pricing or something like that; it's going to be outcomes-based or task-oriented. They're figuring that out on their side. So the vast majority of the action we are seeing is still in that proof-of-concept validation phase. In terms of the... what was your second question?

Moderator

What are the agents?

Paddy Srinivasan
CEO, DigitalOcean

Yeah. So I had a couple of examples in the earnings call. We're seeing a lot of copilot-type examples: an AI agent for sales enablement, an agent for customer service, those kinds of things. That's the obvious agent use case. But we're also starting to see some industrial IoT use cases. There was a company I quoted in the earnings call that is doing weather forecasting models using GenAI. So that's another actual use case that's in production. These are all emerging use cases where any kind of manual human workflow is getting automated. And it's not just single-step workflows: our GenAI Platform can now also do multi-agent orchestration.

So you can take one task, break it down between two different agents, have the agents independently work on something, and then converge the results from the two and stitch them together. We already have that capability. So that's another pattern many of our customers are using to automate human workflows of different kinds.
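The fan-out and fan-in pattern Paddy describes (split one task across agents that run independently, then converge the results) can be sketched in plain Python. The two "agents" below are toy stand-ins invented for illustration, not DigitalOcean's actual agent API:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy "agents": stand-ins for LLM-backed workers, invented for this sketch.
def summarize_agent(text: str) -> str:
    # Mock summary: keep just the first sentence.
    return text.split(".")[0].strip() + "."

def keyword_agent(text: str) -> list[str]:
    # Mock keyword extraction: the three longest distinct words.
    words = {w.strip(".,").lower() for w in text.split()}
    return sorted(words, key=len, reverse=True)[:3]

def orchestrate(task_text: str) -> dict:
    """Fan the task out to two agents in parallel, then converge their results."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        summary = pool.submit(summarize_agent, task_text)
        keywords = pool.submit(keyword_agent, task_text)
        # Fan-in: stitch both partial results into one response.
        return {"summary": summary.result(), "keywords": keywords.result()}

result = orchestrate(
    "DigitalOcean is a cloud and AI platform. It serves digital native enterprises."
)
```

The orchestrator's job is just the split and the merge; each agent stays independent, which is what lets the pattern generalize from two agents to many.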

Moderator

Yep. Understood. We have five minutes left. Let me see if there are any questions in the audience. No, I don't see anything. All right, let's go to macro. In the last call, you said you're seeing some pockets of weakness, but nothing systemic overall, right? I think you highlighted EdTech. But maybe talk about what metrics you're monitoring. And so far in Q2, April, May, has there been any change in that?

Matt Steinfort
CFO, DigitalOcean

So I'd say, just to start, that the guidance we gave for the full year, which we reaffirmed, reflects everything that we're seeing. We have over 640,000 customers, and we have good visibility into their daily usage. We can see exactly what they're doing and how much they're consuming. And what we were saying is: we had a very good first quarter, and things looked fine in April. But at the time, we had seen pockets of weakness. If you looked at Germany, you'd say usage across German companies is down a bit; well, there's a recession in Germany. After the tariff announcements in early April, we saw blips, small declines in usage outside the U.S., in pockets of Asia and pockets of Europe.

That pretty much stabilized within two weeks after the news had come in. So we're certainly seeing people behaving differently in different pockets, but nothing that's risen to the level of concern. What we've said, though, is: okay, that was the picture at the time, a few weeks after the announcement, and things are happening so rapidly. I mean, we didn't know China and the U.S. were going to put a pause on tariffs on Monday. That's great news, but we didn't know it was happening, and we don't know what's going to happen at the end of this week. There's a lot going on. So we just thought it would be appropriately cautious to say: we're fine right now, and we're going to keep guidance flat, even though we came off a very strong Q1 and felt good in the first month of Q2.

We just felt it was a prudent and appropriate message.

Moderator

Yep. Understood. Maybe, Matt, talk about the guidance range, right? So you have a wide range, 11%-14% growth. What could potentially take you above the high end? What could potentially take you below the low end, right? What are the scenarios we're talking about at this point?

Matt Steinfort
CFO, DigitalOcean

Yeah. I'll do it in reverse. To get below the low end, I think you'd have to have some kind of macro dislocation. You can see the metrics. Again, we exited Q1 at 14% growth. The Q2 comp will be a little harder because we had a big jump in Q2 of last year, as we talked about, with AI. But on an incremental ARR basis, it still looks really good. So I don't think there's a lot of risk other than macro risk that could push us below the low end. And on the upside, it's really all about getting NDR up again. We've said, and Paddy has repeated, that it could be in the 99%-100% range for the next couple of quarters. If we can get that above 100, that's a huge tailwind for us.

It's going to come from the larger customers, likely the Scalers Plus. More traction there, more rapid product adoption, more migrations: that'll certainly be a big contributor. We're seeing strength in customer acquisition; our revenue from new customers has been doing well. That probably has a bigger impact on 2026 than on 2025. The last leg would be AI. If we can get some of these bigger, chunkier inferencing workloads, that could certainly be a needle mover and push us towards the upper end of that guide.

Moderator

Let me pause one more time to see if there are any questions in the audience. No. On the long-term framework, thank you for providing that: 18%-20% growth, mid-teens adjusted free cash flow. But you're doing 14%, let's say, at this point, right? Your guidance is 11%-14%. Getting to 18%-20% is quite a bit of an acceleration. And I understand you're seeing some good things, but there is a bit of uncertainty overall at this point. So what gives you the confidence to commit to that range? And should we expect a somewhat non-linear move from where we are to where you're headed?

Matt Steinfort
CFO, DigitalOcean

Yeah. From a confidence standpoint: in the short time that Paddy's been here, since January of last year, and Bratin's been here since July, we've taken a self-serve model that was really the only source of growth, and we've layered on, one, fixes for the product gaps that kept us from scaling with our bigger customers; two, new go-to-market motions that didn't exist or weren't effective before because we didn't have that product; and three, the entirely new growth vector around AI. So given the traction we're seeing across all of those, we feel like 18%-20% by 2027 is an appropriate target. The market in aggregate is growing faster than that. So we think it's a good target for us and is eminently achievable.

Moderator

Eminently achievable. I like that. Well, I guess we are out of time. That's a perfect ending. Thank you so much for all the time.

Matt Steinfort
CFO, DigitalOcean

Thank you.

Moderator

Thank you.
