Good morning, everyone. My name is Melanie Strate, and I lead investor relations at DigitalOcean. On behalf of the entire team, we'd like to welcome you to our 2025 Investor Day. We are so excited to see you all here in person at the New York Stock Exchange, and we have an incredible lineup for you today, including a number of presentations from our C-suite, followed by a live customer showcase where you will get to see firsthand what some of our top customers are building on the DigitalOcean platform. Before we jump in, I must remind you that we will be making forward-looking statements as part of today's presentations, and actual results may vary materially from those projected in these forward-looking statements, including our financial outlook. With that, I will turn it over to our Chief Executive Officer, Paddy Srinivasan.
Thank you, Melanie. Good morning, everyone. How's everyone doing? I'm super excited to kick off DigitalOcean's 2025 Investor Day. Thank you all for tuning in virtually, and a very warm welcome to all of you who made the effort to join us here in person in this fantastic Freedom Hall at the New York Stock Exchange. Our objective today is to take you away from everything happening around us for three hours and get you super jacked up about the main reasons to invest in DigitalOcean. Let me start with a little story about why I was so excited to make the biggest investment decision of my career by joining DigitalOcean a little over a year ago.
In DigitalOcean, I see the potential to replicate the journey of the generational companies that power today's digital economy, companies now worth tens of billions of dollars, because their individual journeys have a lot in common. For example, number one, they were all founded to serve the digital native ecosystem. They were also founded with core tenets of embracing simplicity in how they offer their product or service. They all have a business model flexible enough to scale with the startups they serve. They also had a core tenet of obsessive customer service and meeting the needs of their customers. For the most part, they all embraced a product-led growth motion targeting developers and startups as a way to fuel their growth.
Once they reached a critical mass, they started serving the needs of larger customers, digital native enterprise customers, by innovating on product as well as adding new go-to-market motions to complement their product-led growth motion. Finally, along the journey, these great companies also harnessed, and in many cases catalyzed, new technology and market shifts that shaped their respective domains. When you look at DigitalOcean, we are on the exact same trajectory. Our founders got us through steps one to three. They did that by focusing our efforts on digital natives, by building a cloud that was extremely simple to use, and with a customer obsession that got us tremendous product-market fit.
That's the foundation on which we rode product-led growth to develop mind share and market share with developers and startups over the first several years of our existence, and to build the great company we have today, with over $820 million of run rate, 600,000 customers, and a lot more. That is why, when the board recruited me and said, "We want you to come and jumpstart steps four and five," I could not resist, as I had done similar things in the past. Today, you will hear from us how we are well on our way on steps four and five. We'll show you what we have done. We'll show you some of the traction we are already seeing in the market.
I will explain that by starting to talk about what unmet needs we are finding with the customers we serve and how we are solving those needs in a very differentiated and durable fashion in both core cloud, as well as how we are taking the generational shift that we are seeing with AI and how we are democratizing access to AI for our customers. Next, I will tell you how we are accelerating our growth by scaling with some of our larger customers and how we are doing that with a combination of product innovation to serve the needs of these customers and also how we are adding complementary go-to-market motion to supplement our world-class product-led growth. Finally, I'll take all of that and translate all of that into how this is impacting and enhancing our financial profile in the medium term and long term.
Let's get started by looking at the cloud market. As we all know, cloud is a mission-critical public utility today, essentially running today's digital economy. As more and more customers start deploying their applications to the cloud, it has started becoming super, super complex, and it has started leaving some customers behind. Why is that? There are three reasons for that. Number one is cloud has become super complex. Today, to run a reasonable cloud application, you need a lot more than software engineering. You need teams like DevOps to deploy things on the cloud. You need teams like CloudOps to monitor and manage your workloads once they are in the cloud. You need teams like FinOps to monitor and manage the expenses on the cloud. Why? Because cloud has also become super expensive.
In fact, there's a whole ecosystem, a cottage industry, that has evolved to help companies make sense of their cloud expenses. When you have to hire a third party to help you understand and walk through the line items of your cloud bill, that's when you know things have gone too far. Finally, cloud has also become super intimidating for most companies, because the larger hyperscaler clouds in particular can be walled gardens with proprietary technologies that lack portability. Once you deploy your application, it's really hard to move it. On top of that, companies don't get world-class support unless they're spending millions of dollars with the large clouds. In a nutshell, cloud is complex, it is expensive, and it has become intimidating for many companies.
There is one thing that is emerging, which is more complex, substantially more expensive, and super intimidating, and that is AI, which is going to leave more customers behind. That is why DigitalOcean has emerged as a viable alternative to the large hyperscaler clouds by swallowing this complexity so that builders can build. They can focus on innovation. They can scale their companies without having to mess around with infrastructure. That is what has enabled us to build a company of $820 million of run rate with great financials, with 600,000 customers, with a global footprint, and a tremendous developer mind share. As I mentioned in my opening story, we are just getting underway with our own version of the scale-up journey. To explain that, let's have a quick look at the markets that we operate in. You have all seen different versions of this TAM slide for cloud.
It's a big market, $400 billion, estimated to triple by the end of the decade with cloud and AI. The interesting thing is that one-third of this is driven by what we call digital native enterprises. These are companies that offer a technology-oriented service or product. Who is a digital native enterprise? DO itself is a digital native enterprise. I'll give you an example. When we were a startup, we started using Stripe to process payments. We were probably spending a few hundred dollars on Stripe. As we grew, we scaled up with Stripe. Last year, we pumped around a billion dollars through Stripe and likely spent close to $20 million with them. That's the beauty of digital native enterprises. They start small, but they scale.
When they scale, they power the growth of the companies whose services they leverage. That's why they are our main focus. We focus on serving digital native enterprises. There are 4 million of them worldwide. They drive one-third of the cloud market. We have 165,000 of them. They have unique needs in contrast to a traditional enterprise, for whom cloud is an IT enabler: it runs some internal applications, it's typically part of OpEx, and it's governed by elaborate, complex IT policies because a lot of the workloads are coming from on-premise. Typically, the decision makers are IT and CIOs. Stark contrast with digital native enterprises, whose needs are mission-critical: they use cloud to power their product.
As a core part of their product, cloud is part of their COGS and often the number one expense line item in their P&L. The decision makers are typically founders or heads of businesses. The hyperscalers today are so big that they are seeking large, complex workloads. Where are those workloads? They are on the left side, in the traditional enterprise segment, because more than half of the world's software is still on-prem today. Modernizing those workloads and moving them to the cloud is a big, complex undertaking that the large hyperscaler clouds love. The needs of these large applications in traditional enterprises are very different from those on the right side, the digital native enterprises. That lack of focus is leaving digital native enterprises behind. Let's double-click on some of these unique needs.
First of all, digital native enterprises run lean teams. They don't have the budget for heavy investments in DevOps, CloudOps, FinOps, and the like. With AI, this is getting a lot worse. They need clouds that swallow this complexity and let them focus on innovation. Simple doesn't mean simplistic. Many clouds offer simplicity only because they are simplistic clouds. No, no, no. These customers need enterprise scale, an enterprise footprint, and an enterprise SLA. Essentially, they need a full-featured cloud without all the complexity. They also need a cloud partner they can trust, one that won't box them into a walled garden and that will provide transparency and predictability of cost, because nothing is more crippling than a shocking cloud bill.
Finally, they need world-class support regardless of how much they are spending. Essentially, digital native enterprises need a cloud partner that is large enough to scale with them but small enough, and hungry enough, to care about their success. How are we meeting these needs? To answer this, let me show you our product evolution journey over the 12 years of our existence. This is a 50-quarter view. For the first eight years, we had founders who were developers themselves. They focused on digital native businesses. They created a platform with simplicity at its core. They provided a lot of transparency and predictability in pricing. They obsessed over customer needs and essentially nailed product-market fit, and hence, the company grew rapidly. You can see that here. These were steps one, two, and three in the journey I started with.
The next four years in our evolution, 2019 to 2023, were focused on professionalizing the company and tightening operations. We went public, but we got distracted by our IPO and by COVID. We took our eye off the ball in delivering the scalability and other capabilities that our customers, especially our larger customers, needed during that time, which led to graduation or defection of some of these large companies and was reflected in NDR and a slowing growth rate. As a consequence, we did not execute on step four of the journey, which was to scale and meet the needs of larger customers. However, over the last three quarters, we are not only focusing on this, we are catching up with a speed and ferocity that has made me really, really proud.
Just look at the type of releases that we have had; it's actually falling off the page. I didn't have enough room to fit everything here. Bratin will talk about how in Q3 of last year, we had 42 releases. In Q4, we had 49. In Q1, we have releases in the 50s. These are not just simple bug fixes. These are big, chunky product updates, all aimed at meeting the needs of our larger customers so that we can scale with them. You're probably thinking, "That's great. You're catching up. You're releasing like crazy. So what?" I'll tell you so what. Here's a view of our large customer cohorts. In the last earnings call, I talked about our $100,000+ customers. There are more disclosures here. For the first time, we are sharing our $500,000+ and our $1 million+ customer cohorts.
First takeaway is that we are adding a lot of new large customers. The second takeaway, which is even more important, is the fact that the larger these companies are, the faster they are growing on our platform. In fact, we are aggressively taking market share in the segment. Most of these customers are existing DigitalOcean customers, which are now rapidly expanding their footprint on us. Since these are existing customers, we have a great relationship with them. Many of them are here in the audience, and you will talk to some of them today. They are co-innovating with us. They're literally telling us what their unmet needs are. We are doing co-development with them, and they're literally snatching the product from us even before we are able to launch it and market and sell it to them.
That's the power of meeting the unmet needs of these digital native enterprises. This puts us on the same trajectory as the illustrious companies I talked about earlier, which made the transition to serving the needs of larger customers and hence scaled their own businesses. That's the trajectory we are on. At this stage of our journey, these numbers compare favorably with many of the disclosures you might see from other companies that have gone through similar progressions. Let's now hear from one of our newest customers and why they decided to scale on DO.
NoBid is a startup founded in 2019. Our business model is to connect publishers and advertisers and allow them to transact through our platform. We run about 200 billion auctions per month.
That's about 100,000 in one second. We've been with AWS since inception, for the last five years. We took the opportunity this past year, with our most recent contracts expiring at the end of the year, to look around and see what opportunities there were for NoBid. We looked at GCP. We looked at Azure. We even looked at Oracle. We'd heard some really good things about DigitalOcean, so we reached out and asked them for a bid. Honestly, we were a little blown away. The price they came back with was literally 20%-30% cheaper than what we had been paying at AWS. DigitalOcean didn't have VPC peering and did not have internal load balancing. We brought this to the DigitalOcean team and said, "What can we do about this?" Amazingly enough, the timing was impeccable.
The DigitalOcean team was already working on those solutions and invited us to be part of their beta program, and we tested with them for weeks. We're glad to be one of the first to use it. What we really look for is a partner in our business initiatives. A partner is somebody who is beside you when you need them the most, and I feel we have that with DigitalOcean. We look forward to working with DigitalOcean on improving our platform and improving the DigitalOcean platform.
Fantastic. These kinds of stories are becoming more common every week, and a lot of this is happening organically. Let me now switch gears and talk about how we are scaling up and accelerating our growth, starting with the exceptional management team we have assembled. Bratin Saha, our Chief Product and Technology Officer, built and scaled multiple billion-dollar businesses for AWS in both AI/ML and cloud. Before that, he ran all software infrastructure for NVIDIA. Larry D'Angelo, our CRO, held similar roles at other public companies of comparable size. He brings a unique high-velocity inside sales motion that he used to scale companies like LogMeIn from $100 million to well over $1 billion. Matt Steinfort, our CFO, whom many of you know, was also a public company CFO.
What is interesting about Matt is that he was a founder CEO of a tech startup for several years before that. He's a quintessential operator. Wade Wegner, who runs our developer ecosystem and product-led growth motions, did similar things at a much larger scale for Azure when he was at Microsoft and for Heroku when he was at Salesforce. As you can see, this is a team with the perfect set of complementary skills to help scale the company and accelerate its growth. Speaking of growth, our growth comes from two vectors. First is accelerating our customer acquisition engine, which really helps us acquire larger customers. Second is our customer expansion motion, which really helps us expand our footprint with existing customers. To me, everything starts with product. On the product side, we have two dimensions.
One is core cloud, where we are continuing to address the needs of large customers, as I talked about. The second is AI, where we are working on democratizing access to AI by building out our IPA stack: infrastructure, platform, and applications. Let me start by double-clicking on the left side. As you all know, we have historically focused on the bottom right (remember steps one, two, and three), where we concentrated our efforts on product-led growth, serving the needs of our digital native customers. Over the last three quarters, we have shifted our focus to address the needs of our larger customers by doing things like adding support for multi-cloud. In fact, this week, we had a major announcement on that.
We are adding things like advanced networking, higher-performance queues, and enterprise-grade SLAs that are on par with a hyperscaler cloud's. Not only are the large customers consuming this, they're also adopting these features at a very rapid pace. Almost half of our $100,000+ customers used one or more of these features within just 90 days of release. It's just incredible. What it shows is that we know exactly what our customers need; they're telling us, working with us, and adopting these things as soon as we are ready. Now let me switch over to AI. All the headlines you see are dominated by LLMs, with a new version of an LLM almost every other day.
The current focus on training is going to be dwarfed by the era we are entering: the world of AI inferencing, where AI is everywhere. Every software application is going to have AI as a core fabric. This is going to create an order-of-magnitude greater need for inferencing compared to training. For the digital native enterprises we serve, this is an existential opportunity. How are we addressing these needs? On the left side, you see training clouds, where, of course, you need GPUs, network, and storage. All the magic on the left side happens on the hardware side.
On the right side is our DigitalOcean AI Cloud, which has infrastructure for inferencing, a GenAI platform, and a layer for agentic applications. The interesting thing on the right side is that as our customers use our inferencing product, it literally lights up multiple boxes on the right. What you need is not just GPUs; you need a general-purpose cloud that has both GPUs and CPUs. To tell you about the best way to bring together the power of GPUs and CPUs, let's hear from AMD CEO Lisa Su.
Thank you, Paddy, and hello, everyone. At AMD, our goal is to push the limits of high performance and adaptive computing to solve the world's most important challenges. This is an incredibly exciting time for the industry. AI is the most transformative technology of the last 50 years and the ultimate application of high performance computing. As AI fundamentally reshapes industries and our daily lives, we are focused on delivering high performance, energy efficient compute engines and enabling the open software ecosystem. Today's most critical digital services and experiences run in the cloud. As software providers embed AI across these applications and create entirely new ones, the cloud will continue to be an essential layer enabling AI at scale. That is why we are so thrilled with our partnership with DigitalOcean, a leader in scalable, full-featured cloud services trusted by over 600,000 customers and over 3 million active developers.
With DigitalOcean GPU Droplets, customers can deploy popular AI models with a zero configuration setup that automatically optimizes both the model and server stack, reducing deployment time from weeks to minutes. We are really excited to equip DigitalOcean data centers with AMD Instinct GPUs, bringing leadership memory and performance to their AI cloud. DigitalOcean is also leveraging the incredible performance and TCO of our EPYC CPUs to run their full application stack. By combining powerful AMD compute engines and open software with DigitalOcean's product suite, we will empower a massive community of digital native businesses to integrate AI into their applications. We're proud to be part of DigitalOcean's growing portfolio of accessible AI tools. Together, we are bringing developers more flexibility and choice so they can move faster and build smarter.
We look forward to continuing to partner with DigitalOcean as we enable developers, startups, and organizations of all sizes with the compute, tools, and support they need to accelerate AI innovation.
We're also super excited to partner with AMD as we continue to roll out our inferencing capabilities, because that, in my mind, is a really important step in democratizing AI for our customers. Let me switch gears and talk a little bit about our go-to-market. This is our classic go-to-market view. On the left side, you have our customer acquisition funnel. On the right side, you have our customer expansion motion. What these two sides have in common is product-led growth. On the left side, we use PLG to attract and convert customers. On the right side, we use it to digitally nurture existing customers and drive adoption, cross-sell, and upsell. Let me click on the left side for a second. Our PLG machine is famously efficient, thanks to the mind share we enjoy with developers.
We have had over 3 million active developers over the last few quarters. We are able to attract millions of visitors to the top of our funnel and convert a big fraction of them into paid, engaged customers without spending too much on marketing or Google. Over the last few quarters, we have been adding multiple go-to-market motions to augment this product-led growth motion. For example, we are working with technology partners like Hugging Face to open new front doors that drop new customers directly into our funnel. We are also building a new outbound sales team for AI, a small team, to develop relationships with AI-native startups and the venture ecosystem. We are also amplifying our reach with channel partners like resellers, systems integrators, and distributors. These channels, especially the second and third, are bringing in larger customers.
That's a really important point. We have had the combination of these channels for less than a year, and in 2024, they delivered 20% of our new customer revenue. That is all the more impressive when you consider that the scale is in the tens of millions of dollars. I am super optimistic that as we scale this, it is going to yield more results. On the right side, which is our customer expansion, we have traditionally relied on product-led growth and digital nurturing. We have not had a sales motion. Remember steps one, two, three? That is where we have been living. Now, over the last couple of quarters, we have started adding a new enterprise sales motion, an inside sales motion using named accounts, where we now have 8,000 named accounts. For the top 3,000 of them, we have a pod model.
What that means is we have both a commercially oriented account manager and a technical account manager. The combination is intended to serve the needs of these top 3,000 customers. The account manager's job is to build and nurture a relationship and then look for opportunities to migrate new workloads onto the DigitalOcean platform. When they find one, they engage a new direct migration team to help migrate these workloads over to DO. In just a couple of quarters, starting with 450 accounts and then expanding (Larry will talk about this), we saw an 1,800-basis-point improvement in the NDR of our large customer cohort, the $100,000+ customers. This was because of the incredible product innovation I talked about and also the fact that we are now engaging with our customers using an inside sales motion.
In a nutshell, we are adding new motions to acquire larger customers at scale. In this case, we are actually targeting customers within our base. It's nice to have 165,000 customers to mine and farm, because that gives us a very low-risk, high-yield go-to-market motion. Let me switch gears and tell you how all of this is translating into a robust financial profile over the medium term and long term. Before that, I want to give you a little bit of the principles we use for capital allocation. We use a weighted rule of 40, which values growth at three times free cash flow; that weighting reflects how top public companies are typically valued in today's market. We use this for decision-making around growth vectors, but also for our internal incentive structure, like our bonus and so forth.
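As a quick illustration of that weighting, here is a hedged sketch in Python. The talk only says growth is valued at three times free cash flow, so the normalization below (dividing by 2 so that a balanced 20% growth / 20% FCF margin profile still scores 40) is purely an assumption for illustration; the point is that a point of growth moves the score three times as much as a point of free cash flow margin.

```python
# Hedged sketch of a "weighted rule of 40" where growth counts 3x as much
# as free cash flow margin. The /2 normalization is an assumption so that
# a balanced 20/20 profile still scores 40, matching the classic metric.

def rule_of_40(growth_pct: float, fcf_margin_pct: float) -> float:
    """Classic rule of 40: revenue growth plus FCF margin."""
    return growth_pct + fcf_margin_pct

def weighted_rule_of_40(growth_pct: float, fcf_margin_pct: float) -> float:
    """Growth weighted 3x relative to FCF margin, rescaled to a comparable range."""
    return (3 * growth_pct + fcf_margin_pct) / 2

# Two profiles with the same classic score (40) but different mixes:
balanced = weighted_rule_of_40(20, 20)   # (60 + 20) / 2 = 40.0
growthy = weighted_rule_of_40(30, 10)    # (90 + 10) / 2 = 50.0
print(balanced, growthy)
```

Note that under this assumed formulation, trading exactly three points of margin for one point of growth is score-neutral, which is consistent with the speaker's point that the weighting is a guidance framework favoring durable growth, not a literal trading rule.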
This does not mean that we have to invest three points of margin to get a point of growth. That is not the point at all. It is just a guidance framework to emphasize that we are looking for durable growth vectors. When we look at growth opportunities, we use three Ds. Number one is durability of growth. We will invest behind things that are durable. That is why we were very conservative in the GPU training world: we were not convinced that it is a durable growth vector for us. It may be for others, but not for us. We have a very different view when it comes to inferencing, because, as I showed, inferencing needs a full general-purpose cloud, and that adds to our strength. Number two is we want to be disciplined.
We want to be disciplined in looking at the type of customers we are serving, whether a growth vector really adds to our product DNA, and whether it aligns with our go-to-market strengths. That is the discipline I'm talking about. When we can check the box on those two, we will be decisive in going after these durable growth vectors. In 2024, our weighted rule of 40 was 27. Our objective over the next couple of years is to push it upwards of 35 and get to 40 as quickly as we can. To recap: first, I talked about how we are focused on serving the needs of digital native enterprises and how we are solving their unmet needs in both cloud and now in AI as we enter a new era of AI inferencing.
As I mentioned, this is an existential shift for our customers. They are really looking to us to help them make AI accessible, in other words, to do for them in AI what we did for them in cloud. Number two, I explained how we are reaccelerating our growth with tremendous product innovation and how we are adding new complementary go-to-market motions to acquire larger customers and drive footprint and adoption with our existing customers. Finally, I talked about how all of this is translating into robust growth and healthy financials. I have one more thing before we finish. We are in a moment of transition with AI: natural language understanding is changing the face of software development, and AI agents are disrupting SaaS applications and the cloud itself.
That's why we are changing the cloud with a project code-named DO.Next. Throughout this morning, I talked about how complexity is killing the cloud. In the current generation of DO, we are reducing the need for DevOps, CloudOps, FinOps, and the like. With DO.Next, we are killing the complexity associated with these roles by simply eliminating them. To tell you all about this and a lot more, let me welcome our Chief Product and Technology Officer, Bratin Saha.
Thank you, Paddy. Hello, everyone. Thank you for being here. I'm Bratin Saha, and I'm the Chief Product and Technology Officer at DigitalOcean. I've been here for about 10 months. As Paddy mentioned, prior to this I was at AWS, where I helped build and lead some of the most successful AWS services, like SageMaker, Bedrock, Amazon Q, EMR, Glue, and others.
I was at AWS for about seven years, where I helped build one of the fastest-growing businesses in AWS history, a multi-billion dollar ARR business. Prior to AWS, I was at NVIDIA, where I led the software infrastructure for all NVIDIA products. With that context, let's dive straight in and get started with my key themes. First, I want to spend some time on how DigitalOcean reduces customers' TCO, total cost of ownership, by at least 30% compared to hyperscalers, because that reinforces our position in the market. I then want to get to how our rapid product innovation is accelerating our revenue growth, which we expect to reach 18% to 20% by 2027.
I'll also spend some time on AI, especially the transition of AI to an agentic world, which aligns really well with DigitalOcean's core strengths and enables us to provide a very differentiated offering to our customers. Then I'll double-click on DO.Next and show how it's going to provide a very differentiated experience for our customers. Finally, I'm going to talk about our cost optimizations, so that we remain efficient as we grow our revenue. Let me start with what Paddy mentioned: the cloud can be too complex, too expensive, and too intimidating for many customers. Let me double-click on this a little bit.
If today you had to do the most basic operation in a hyperscaler, which is to go in and allocate some compute, you literally have to work through thousands of decisions. Let me walk you through that. You go to a hyperscaler's website to allocate some compute. You first have to choose an instance. Then you have to configure the memory. Then you have to configure data transfer rates. Then data transfer within the same data center. Then data transfer between data centers.
You have to go in and do something about the NAT gateways. You have to do something about the configuration. It goes on and on and on. That is the reason why, on a hyperscaler, customers need big teams of CloudOps engineers, whose job is to figure out the nuances of the hyperscaler so you can use it efficiently. And by the way, it isn't over yet; the list keeps going on and on. Now contrast this with what happens in DigitalOcean. On DigitalOcean, we take your compute, your storage, your networking, your IPv4 addresses, your VPC, and we package all of that so that as a customer, all you have to do is a single click to get started.
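To make that contrast concrete, here is a hedged sketch of what the "single click" corresponds to at the API level. Per DigitalOcean's public v2 API, creating a Droplet is a single POST with a handful of fields, where the size slug bundles CPU, memory, disk, and transfer together. The specific slug values and the token are placeholders, and this sketch only builds and prints the request body rather than sending it.

```python
# Hedged sketch: the minimal request body for creating a DigitalOcean
# Droplet via the public v2 API (POST /v2/droplets). Slug values are
# illustrative placeholders; nothing is sent over the network here.
import json

droplet_request = {
    "name": "example-droplet",    # any name you choose
    "region": "nyc3",             # one region slug
    "size": "s-1vcpu-1gb",        # one slug bundling vCPU, memory, disk, transfer
    "image": "ubuntu-22-04-x64",  # one OS image slug
}

# Actually creating the Droplet would be a single authenticated POST, e.g.:
#   curl -X POST "https://api.digitalocean.com/v2/droplets" \
#        -H "Authorization: Bearer $DO_TOKEN" \
#        -H "Content-Type: application/json" \
#        -d "$(cat request.json)"
print(json.dumps(droplet_request, indent=2))
```

Four fields versus the thousands of decisions walked through above is the packaging point being made here: the complexity is absorbed into the slug, not pushed onto the customer.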
In essence, we swallow all of the complexity so that, as a customer, you do not have to deal with it. That is why customers on DigitalOcean do not need a dedicated CloudOps team. Now, DigitalOcean and the hyperscalers are both cloud providers, but we are really selling very different kinds of products. Let me give you an intuition for that. Suppose you had to go buy a computer. You can do it in one of two ways. You can go to a store and buy all of the components: the CPU, the memory, the hard disk, and so on. Or you can buy a laptop.
Now, if you went in and bought all of the components, you could build any computer you want. But you have to take on all of the complexity of building the computer, and you have to hire the expertise to build and maintain it. A laptop, in contrast, is much simpler and a lot more approachable. You may not be able to get any configuration you want, any arbitrary screen size matched with any CPU, but for most people, it works way better. Now you ask the question: why this difference in approaches? Why do the hyperscalers have to provide you all of these building blocks? Why can DigitalOcean give you a product that's packaged much better and works much better for you?
The reason arises from fundamental differences in our business models, and those differences give DigitalOcean its durable differentiation. If I had to summarize it in one sentence, I would say hyperscale begets hypercomplexity. What do I mean by that? Suppose you are a hyperscaler. Your business model requires you to address the needs of every large enterprise in the market. Now, suppose you're a Goldman Sachs or a JP Morgan or a General Motors or a Walmart. You have decades of IT legacy, from even before the cloud was invented.
The only way a hyperscaler can service the needs of all of those customers is to provide a building-blocks approach, because then its customers can build any configuration they want. That's the only way to support the zillions of IT configurations that arise from the legacy enterprise IT problem. DigitalOcean, by contrast, is focused on the digital native segment of the market, which by definition has modern IT systems. It's still a very large segment, mind you, $140 billion. By focusing on the digital native segment, we are able to escape the complexity of the IT sprawl, and that enables us to provide a product that works way better for this segment. For the hyperscalers, the result is product complexity. And it turns out product complexity drives cost complexity.
Over here, I've given just one example of cost on a hyperscaler. I've taken the networking example, data ingress and egress, and I want to show how that works in the two kinds of clouds. Say you're using a classic load balancer. The price is about $0.01 per gigabyte. If you use a different kind of load balancer, the price is different and variable: with the application load balancer, the price is between $0.03 and $0.09. If you use a different service, say a database, the price is again different and variable; it varies between $0.05 and $0.09. And of course, if you have to talk to the external world, you need a gateway, which is another variable incremental cost.
What else do you need? A CDN. That's another variable incremental cost. Now contrast that with DigitalOcean: a single flat monthly charge, with traffic inside the data center free. This difference, numerous incremental variable costs in one case and a single flat, predictable monthly bill in the other, is the reason why, if you are a customer on a hyperscaler, you need a dedicated FinOps team. This team isn't building infrastructure. They're not building software. They're just looking at your bill and trying to optimize it. If I had to summarize, it boils down to this: hyperscale drives high product complexity, and that high product complexity drives high cost complexity. The high product complexity also drives a high CloudOps cost, and the high cost complexity drives a high FinOps cost.
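As a rough illustration of why the variable bill becomes unpredictable, here is the arithmetic sketched with the per-gigabyte rates quoted above. The 10 TB traffic volume and the flat fee are assumptions for illustration only.

```python
gb_transferred = 10_000  # assume 10 TB/month of load-balancer traffic

# Variable, service-specific rates stack up on a hyperscaler-style bill.
classic_lb = gb_transferred * 0.01    # $0.01/GB, classic load balancer
app_lb_low = gb_transferred * 0.03    # $0.03/GB, application LB low end
app_lb_high = gb_transferred * 0.09   # $0.09/GB, application LB high end
variable_bill = (classic_lb + app_lb_low, classic_lb + app_lb_high)

# Flat-fee model: one predictable monthly charge, free inside the data center.
flat_monthly_bill = 12.0  # hypothetical flat fee, for illustration

print("variable bill range:", variable_bill)  # hundreds of dollars of spread
print("flat bill:", flat_monthly_bill)
```

Even before adding the gateway and CDN line items, the variable model produces a bill whose range depends on which services the traffic happened to flow through, which is exactly what a FinOps team exists to untangle.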
It is no surprise that in a recent Flexera survey of cloud customers, TCO emerged as the number one problem for cloud customers by far. It turns out that almost 70% of customers need a FinOps team to manage their cost. That is why DigitalOcean has a unique position in this market segment: we go in and address the number one problem of cloud customers. According to a study by Forrester, DigitalOcean reduces customers' TCO by at least 30%. Some of this, of course, comes from lower infrastructure cost, but a lot of it comes from customers being more productive, because they do not have to manage all of the overhead of CloudOps and FinOps.
Forrester also found that when a customer migrates from a hyperscaler to DigitalOcean, their payback period is less than six months. To learn more, let's listen to this video from Market Circle. They produce CRM and productivity applications so that their clients can manage their projects, sales, and customers in a single integrated platform that also integrates with Apple software.
I run a company called Market Circle, and we make a product called Daylight. Initially, the product was an on-premise license-based product. We then evolved to the cloud, which is when we decided to work with DigitalOcean. We provide a proper native application with local data, and we synchronize the data from your device to our servers.
Generally speaking, cost-wise, we're saving at least a third compared to going with somebody else, like one of the big names. Beyond the monetary savings, there is a big time savings. When you look at the simplicity that DigitalOcean's UI offers versus a different provider, it's drastically different. You've got DigitalOcean's simplicity, which is here, and then you have someone else where it's like up here. You get that time savings whenever you're trying to provision a resource or understand the cost of a resource. You've got that simplicity versus that complexity. And brain cell savings.
Trust me, I feel happy as well when I'm saving costs.
Let me now get to the next key theme of my talk: how rapid product innovation is accelerating our growth. This slide shows the number of features that we have been launching every quarter. You will see that our product velocity has increased by almost 6x in the last year. This is not happening because we are adding R&D costs; it is really coming from productivity improvements. This has two implications. As Paddy said, we had actually stopped innovating for some time, and that led to things like customer churn and defection. As a result of our product velocity now, we have been able to close pretty much all the known causes of customer churn. Not just that, we are now able to innovate on the features that more enterprise-grade customers want.
Paddy talked about a few of them, like multi-cloud support and NFS. Let me double-click on our PaaS services for a bit, because PaaS services are so important for larger customers. This slide shows how we are innovating on databases, which are really important for large enterprise customers. You will see that the out-of-the-box performance of databases on DigitalOcean is about 30% better than the nearest hyperscaler. The cost efficiency, the throughput per dollar, is more than 40% better than the nearest hyperscaler. You can see how rapid product innovation is helping us deliver more value to our customers.
I'm really excited about some of the PaaS and data cloud features that we have coming later this year, and happy to take questions on them in the Q&A session. Customers do not just want enterprise features; they also want enterprise SLAs. Over the last year, we have reduced our service downtime by three and a half times, in part by using AI. We now use AI to predict when servers may go down so that we can get ahead of it. As you can see in this table, we are now internally operating at an availability that is better than hyperscaler SLAs. We can now go to customers and say, look, you are going to get the hyperscaler experience. Remember the database performance.
You're going to get hyperscaler experience, you're going to get hyperscaler SLAs, and you're going to get 30% lower cost. What's not to like about it? I can tell you from my personal experience of having talked to hundreds and hundreds of hyperscaler customers, this message will resonate very loudly. Now, all of this innovation and availability is nice, but what does it mean for our business? This slide shows you how our product innovations are accelerating our revenue growth. I'm showing just the features that we launched in the second half of last year.
As you can see in this slide, the features that we launched approximately in Q3 contributed about $6 million, or 3%, to our Q3 quarterly revenue. In Q4, these features contributed $8 million, or about 4%, to the quarterly revenue. In Q1, we expect the contribution to be even higher. You can see how our product innovation is adding to revenue growth every quarter. In fact, in Q4, almost 30%, 27% to be precise, of our incremental revenue came from features newly launched in Q3 and Q4. You can imagine that as our product velocity keeps churning out more of these, it accelerates our revenue growth. Let me now get to AI, and specifically how AI is moving to a place where DigitalOcean is really well positioned.
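A quick back-of-the-envelope check of the figures above; the implied totals are derived from the stated percentages, not reported numbers.

```python
# All inputs are the round numbers quoted in the talk; outputs are implied.
q3_feature_rev = 6.0    # $M contributed by features launched ~Q3
q4_feature_rev = 8.0    # $M contributed by those features in Q4

implied_q3_revenue = q3_feature_rev / 0.03    # stated ~3% of Q3 revenue
implied_q4_revenue = q4_feature_rev / 0.04    # stated ~4% of Q4 revenue

# 27% of Q4 incremental revenue came from newly launched features,
# which implies total incremental revenue of roughly:
implied_incremental = q4_feature_rev / 0.27

print(round(implied_q3_revenue), round(implied_q4_revenue))
print(round(implied_incremental, 1))
```

The percentages are consistent with quarterly revenue on the order of $200 million, which matches DigitalOcean's publicly reported scale.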
Paddy talked about IPA: Infrastructure, Platform, and Application. Let me start there. Transformational technologies always follow this pattern: the action moves from infrastructure to platforms to applications. Look at the internet. The switches and the routers were always there. It was really the emergence of the platform technology, the browsers and HTML, that led to all of the applications, the e-commerce sites and search sites, that drove the adoption of the internet. Look at the PC. The chips and the memory were always there. It was really the emergence of the platform, the Wintel platform, that led to the creation of the applications, Microsoft Word, PowerPoint, Adobe apps, that became a part of our lives and drove the adoption of the PC.
If you look at smartphones, it was really the emergence of the platforms, iOS and Android, that drove the apps that became a part of our lives. AI is now following a similar transition, and that transition is happening, and will happen, because of agents. All of you have probably heard of agents; 2025 has been the year of agents. In a sense, you can think of an agent as a piece of software that can pretty much act and think like a human. Because of that, these agents are going to drive a massive wave of automation in digital enterprises. Let me show you how. Think about a normal 10-minute customer service call today. A lot of that can already be handled by generative AI agents.
If you look at the cost of that call with a human agent, it's about $3. With a generative AI agent, you can do it for a cent. Now, the cost of AI is going down by 10x year over year, and the quality of these models is improving exponentially year over year. You draw the line. In three years, the conclusion is inevitable: a lot of human workflows in these businesses will get automated or assisted by these agents. That means digital native enterprises will really have no option but to reinvent themselves with agents so that they can serve customers in the best possible way. That is going to drive a massive amount of cloud and AI TAM expansion.
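The trajectory sketched above can be run out numerically. The $3, one-cent, and 10x-per-year figures are the talk's round numbers, used here purely for illustration.

```python
human_cost_per_call = 3.00    # ~$3 for a 10-minute human-handled call
ai_cost_per_call = 0.01       # ~one cent with a generative AI agent today

# Stated assumption: AI cost falls ~10x per year. Project three years out.
for _ in range(3):
    ai_cost_per_call /= 10

cost_ratio = human_cost_per_call / ai_cost_per_call
print(f"AI cost per call in 3 years: ${ai_cost_per_call:.6f}")
print(f"human vs AI cost ratio: ~{cost_ratio:,.0f}x")
```

At that point the AI-handled call costs a small fraction of a cent, which is why the economics push digital natives toward agent-assisted workflows.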
Because if you think about it, the TAM for these agents is effectively the human salary TAM, and that's trillions of dollars. That means these agents, as they get adopted by digital native enterprises, are going to drive a TAM expansion of tens of billions of dollars. At DigitalOcean, we fully intend to participate in this TAM expansion by providing a very differentiated offering. Let me explain why. For this, let's dive a little deeper into agents. Agents are really a sequence of four steps. They take a command from the user to do a certain task. They then gather all of the data they need to do the task. They then do some planning and reasoning: how am I supposed to do this task? Finally, they go and perform the actions.
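The four steps just described can be sketched as a minimal agent loop. Every function below is a hypothetical stub standing in for a real model, database, or orchestration call, not any actual DigitalOcean API.

```python
# Minimal sketch of the command -> data -> plan -> act agent loop.
def language_model(command: str) -> str:
    return f"task: {command}"                 # stub: interpret the command

def retrieve_data(task: str) -> list[str]:
    return ["record-1", "record-2"]           # stub: database retrieval

def planning_model(task: str, data: list[str]) -> list[str]:
    return [f"step using {d}" for d in data]  # stub: plan and reason over data

def perform_action(step: str) -> str:
    return f"done: {step}"                    # stub: orchestration code

def run_agent(command: str) -> list[str]:
    task = language_model(command)            # 1. interpret the user's command
    data = retrieve_data(task)                # 2. gather the data it needs
    plan = planning_model(task, data)         # 3. plan and reason
    return [perform_action(s) for s in plan]  # 4. perform the actions

results = run_agent("reconcile this invoice")
print(results)
```

Even in this toy form, steps 2 and 4 are ordinary CPU and database work; only steps 1 and 3 touch a GPU-backed model, and both are inference calls.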
What are the building blocks that agents need? For the language commands, they need a language model. It's really a foundation model that does language interpretation. For acquiring all the information, they need access to a database and need to retrieve the data and do some data processing. For all of the planning, they need a planning model. That's again a foundation model, but that's more optimized for planning and reasoning. They need some application logic so they know how to do the job. Finally, to perform the actions, they need some software orchestration code. Given these building blocks, what are the infrastructure requirements for agents? The language model needs a GPU, but these GPUs are going to drive inference, not training. All of the data processing needs databases and CPUs. The planning model, again, needs a GPU.
Again, this GPU is driving inference, not training. Finally, all of the software code needs CPUs. Let me pull all the infrastructure pieces together. This diagram is really important for understanding where AI is headed and what it means for cloud companies. Two things should come out of it. One, the future infrastructure is a lot more than GPUs. You need the full application infrastructure: the CPUs, the databases, and so on. That is something we at DigitalOcean have perfected over the last 10 years. The second is that the GPUs are going to be used for inference. In fact, inference is likely going to be more than 10x the size of training. Let me relate this back to what Paddy was mentioning: AI started with training, but the puck is moving towards inference.
The reason is that you can build all the AI you want. Ultimately, customers need value out of it. You get value out of something when you're embedding the AI in a certain application. That is why we did not necessarily chase after the large training workloads, because that's really all about GPUs. As the puck moves towards inference, that plays to our strengths. You can't build an agent without a database. Remember the database performance? 30% better than the nearest hyperscaler. That is a strength. That is what we have mastered over the last 10 years. As AI moves towards this agentic world, it plays really well with DigitalOcean's key strengths. That is what enables us to build a really differentiated offering. You can go about this as a cloud company in two different ways.
You can be a niche cloud company and put in some GPUs and say, I'm ready for AI. Or you can be like DigitalOcean, where we have fully featured data centers that can run both the application stack and the AI. You can see that if you're a niche cloud, you are actually really inefficient, because what you have done is you have separated the application and the AI, and you've introduced a lot of hops. If you're a DigitalOcean, you are really well suited for where AI is headed. This takes me to how we are going to differentiate. We are building a purpose-built generative AI infrastructure. As I mentioned before, it has a lot more than GPUs. There's vector databases, there's search index, and all of that. We are also purpose-building this to provide the lowest TCO for inference. Remember, inference is 24/7.
It's production workloads, so low cost is critical. We are providing the lowest-cost inference by customizing the network for inference and by optimizing the software, and this is already live this quarter: customers will get up to an 80% lower TCO on DigitalOcean. This is not all. We are also innovating at the platform layer. In January, we launched the preview of our generative AI platform. It's loaded with features; I'm not going to go through each of these boxes, but there's agent routing, multi-agent routing, knowledge bases, guardrails, and so on. The key point I want to communicate is that there's a lot of DigitalOcean innovation on top of the GPUs and on top of the LLMs. That is the source of our differentiation.
You need this to build agents and applications in a meaningful way. What does this mean for customers? Compared to AWS Bedrock, it takes half the time to build an agent, we support 50% more content types, and you get 10% more accuracy. Trust me, a lot of customers' brain cells will be pretty happy. Not just this. Because of the amount of innovation we are able to pack into that generative AI platform, our economics at the platform layer are much more compelling. Every dollar of GPU revenue drives more than a dollar of other revenue. Not just that: for GPUs at the platform level, our gross margin is twice that of GPUs at the infrastructure level, and the payback period is one-third.
What are customers saying about this? We have not hit GA. We have not even started go-to-market. In just eight weeks, we have more than 2,000 customers who have built more than 6,000 agents, a lot of them in production. I have a number of use cases here, and I made this slide very busy because I get a lot of questions from people on what are customers doing. They are doing some really interesting production use cases, like e-commerce invoice processing, like helpdesk chat, like analyzing financial documents. To learn more about this, let's listen to SonarHome, a customer that is integrating agents inside real estate offerings.
SonarHome is a platform to provide users with free home valuation.
The long-term vision for SonarHome is to make sure that we help the homeowners and home sellers to make sure that they make the best decision selling their property, buying the property, and maintaining the property. There are three things, I think, that helped us to decide to go with DigitalOcean. First, it's relatively inexpensive, especially for the cloud solution. Secondly, it's quite straightforward. It's much easier than AWS or other solutions. Third, actually, the simplicity of the usage. In terms of the GenAI value prop that we get from DigitalOcean, I believe that AI can play a significant role in helping agents to prepare the materials, analyzing data, recommending the price, benchmarking. I think the AI agents will be helpful to the real estate agents to make this work done properly.
This is how we believe the market will evolve over time, and we want to be the leading player in this evolution in our geographies. Using DigitalOcean, GenAI was extremely easy to prove and to test. With DigitalOcean as our infrastructure partner, and now also as our partner for GenAI, I believe that SonarHome will help home sellers and homeowners meet their goals through technology.
The way we work is going to change dramatically. Now, you have a choice. DigitalOcean's generative AI platform: 50% less time to build, 80% lower TCO for inference, 10% better accuracy, 50% more content types. Where do you think customers are going to go? That is why I think DigitalOcean will play a big role in this AI TAM expansion. Let me now get to DO.Next and how we are fundamentally reinventing the customer experience.
Paddy mentioned that we are at a seminal point in technology. There are really two paradigm shifts going on. One is natural language becoming the new UX, and second, agents automating all of the work. When you have these paradigm shifts, there are winners and there are losers. The winners are the ones who can use these paradigm shifts and reinvent themselves. Losers are the ones who do not reinvent themselves. Let me show first, with a product that is now in the hands of customers, how we are reinventing the DigitalOcean experience.
Cloudways is the leading web hosting solution and was just ranked as the number one cloud web host for developers by CNET in 2025. Website hosting today is complex, and debugging is a manual process. When issues occur, troubleshooting follows a time-consuming process, which includes manual log checking, support consultations, and implementing fixes.
Total resolution time can take up to one hour. As part of DO.Next, we're embedding AI agents inside Cloudways to completely reinvent web hosting, making it more intelligent and automated. Say disk usage is critically high on a server. The user clicks the alert, and AI instantly explains the issue. If the user is curious about the fix, AI provides remediation steps, along with an even better option: let the AI agent handle it. With a single click, AI jumps into action, optimizing resources and stabilizing the server in real time. What used to take an hour now takes a minute. This feature is currently in public preview with customers, and their feedback is very encouraging. The number one hosting solution is now even more intelligent with Cloudways AI agents.
I want to emphasize this is a real product in the hands of customers that we should be launching in approximately a quarter.
I hope a few things came out of that. The developer is no longer mucking around with code trying to deal with their web hosting issues; they're just talking to their web hosting system. Second, the debugging and fixing work done by very experienced engineers is getting completely automated away by an agent. This is one of the most sophisticated uses of AI anywhere in the industry. You can now imagine how much easier, simpler, and more approachable we make the cloud. Not only does this give us a more differentiated product, it actually opens new revenue streams for us. Today, customers spend up to $200 a month fixing these system issues. That gets automated away, and we will be offering these agents as a paid offering. Not just that.
Today, DigitalOcean handles 300,000 hours of customer support annually for these issues. A lot of that gets automated away, so we get a lot more efficient as well. If you go back to the three questions that Paddy mentioned, our position in the market, revenue growth, and getting more efficient, you can see how DO.Next kicks us into a higher gear on all of those vectors: providing a highly differentiated product, opening new highly differentiated revenue streams, and making us a lot more efficient. Where are we headed? Of course, we'll embed these agents in all of our services to provide that experience to our customers. There is one other thing I want to point out: unlike the hyperscalers, our customer workloads and workflows are much better suited for this kind of innovation.
The reason is the same. When, as a hyperscaler, you're dealing with decades of legacy, with workflows that predate the cloud, it's going to be very difficult to do this kind of innovation on those customer workflows. Let me now get to the other side of the weighted rule of 40: how we remain efficient so we can return more cash to our shareholders. This year in March, we opened the India R&D Center. Going forward, a lot of our incremental hiring will be in lower-cost geographies. That significantly reduces our cost per head and allows us to invest without increasing our expenses. We are also rapidly using AI to improve our productivity. Our developers now use generative AI for coding, and we have seen that developers using generative AI are turning out 40% more code.
We are also using generative AI for our system operations, and our productivity there has improved by 37%. I expect these productivity improvements to keep increasing. We are also improving our gross margin. We are using AI, for example, to sweat assets longer, using assets for a longer period so that we get better depreciation. That should yield an annualized 240 basis points of improvement in our gross margin. We are also working on data center optimizations, things like network optimizations and data center consolidation, that should yield 100 basis points of annualized gross margin improvement. You should expect us to keep working on these cost optimizations in subsequent years. Let me wrap up with the key takeaways from my talk.
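As a quick sanity check on the basis-point figures, here is the arithmetic. The 60% baseline gross margin is a placeholder assumption for illustration; the 240 and 100 basis-point improvements are the stated numbers.

```python
baseline_gm = 0.60          # assumed baseline gross margin, for illustration
depreciation_bps = 240      # stated: longer asset life / better depreciation
datacenter_bps = 100        # stated: network and data center optimizations

# One basis point is 0.01 percentage points, i.e. 1/10,000 in absolute terms.
improved_gm = baseline_gm + (depreciation_bps + datacenter_bps) / 10_000
print(f"{improved_gm:.1%}")  # 63.4%
```

Under that assumed baseline, the two levers together add 3.4 percentage points of gross margin on an annualized basis.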
DigitalOcean provides at least a 30% better TCO than the hyperscalers, which helps us address the number one problem that cloud customers have and strengthens our position in the market. Our rapid product innovation is now accelerating our revenue growth, which we expect to reach between 18% and 20% by 2027. AI is moving to an agentic world that aligns really well with DigitalOcean's core strengths, and that enables us to provide a highly differentiated offering to our customers. I hope you saw that our AI strategy is not just about buying large farms of GPUs, but really about innovating on top of them so that we give a lot more value to customers. Finally, with DO.Next, we are reinventing the cloud experience to provide a much better, more differentiated product. All of this while we continue to improve our cost structure.
Building the product and innovating on the product is just one part of the story. The other part of the story is how we take our innovations and take them to customers. For that, I'm pleased to welcome Larry D'Angelo, our Chief Revenue Officer.
Thanks, Bratin. As a sales leader, you can imagine how excited my team and I are to sell all the recent innovations and product newness that Bratin and his team have brought. Good morning, everyone. My name is Larry D'Angelo, and I'm the Chief Revenue Officer here at DigitalOcean. I've been with the company for about eight months. Prior to that, I ran global sales for a public company called LogMeIn, where over an eight-year journey we took sales from about $100 million to $1.4 billion.
What's exciting is that journey has many parallels to the one we're on today that Paddy outlined in his presentation. LogMeIn had a great product-led growth motion, but we had to layer on a high-velocity inside sales model, account management, up-market product capabilities and motions, enterprise-like treatment, and an increased funnel through channel partners and technology partners. All of those same principles we're now applying to DigitalOcean. That's why I have such confidence that we're going to make all the innovations work and drive growth, as Bratin mentioned, of 18% to 20% over the next couple of years. What I'm excited about is the pace of recent innovation and progress that we've made as a team and as a company. We've been together for a relatively short period of time, most of us less than a year.
As I mentioned, I have a passion and a lot of experience around great high-velocity go-to-market models like the one we have here, and just really an interest in trying to simplify the lives of our customers, help them realize more value, and grow their businesses. I think all of us look forward to the great growth and opportunity that we have ahead. To start, there are three items I want you to take away from this session. First, as Paddy mentioned, we have a world-class product-led growth engine, and we're going to build new motions to complement it. We're adding a named account team or named account strategy to put touch on our highest and most important customers. We're going to scale the go-to-market organization without materially changing the financial profile of the company. We want to remain highly efficient as we scale.
Now, our full-lifecycle PLG engine (full lifecycle meaning it doesn't just bring in leads, it actually converts them) has driven most of our growth to date. Today, it drives 4 million unique visitors and 150,000 sign-ups, and that results in about 60,000 customers at any point in time who come in to try the platform or begin building their business. Historically, customers have started small, at less than $50, but many of those customers have built real businesses. We have thousands of customers still on DO today who have grown their revenue into the hundreds of thousands. With all this traffic, the PLG engine is super efficient. Our sales and marketing expense is only 7% of revenue, and our magic number of 2.2 is well ahead of any industry benchmark.
As you know, the magic number is incremental ARR divided by prior-period sales and marketing expense. It's a great way to look at sales productivity and efficiency. You have to be wondering: how do you have this great engine, all this traffic, and spend only 7% on sales and marketing? The answer is that we've immersed ourselves in the world of the developer. It starts with our great developer community, which is rich in content: thousands of articles and pieces of content that attract millions of visitors every year. We sponsor great open-source events like Hacktoberfest. I'm not sure if folks know, but DigitalOcean actually started Hacktoberfest, and we still govern it today. 65,000 developers across 171 countries come in every year and use the platform.
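The magic number described above can be written down directly. The ARR and spend inputs below are back-solved illustrations, not reported figures, chosen so the result matches the quoted 2.2.

```python
def magic_number(incremental_arr: float, prior_period_sm_expense: float) -> float:
    # Sales-efficiency metric: incremental ARR in a period divided by
    # the prior period's sales and marketing expense.
    return incremental_arr / prior_period_sm_expense

# Illustrative inputs (in $M): $22M of new ARR against $10M of prior-period
# S&M spend yields the quoted 2.2.
print(magic_number(22.0, 10.0))  # -> 2.2
```

A magic number above 1 is generally read as efficient go-to-market spend, so 2.2 is well ahead of typical SaaS benchmarks.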
Our overall voice and awareness in the marketplace continues to expand, attracting and engaging customers, whether it's at events or communities. All of this development, all of this developer engagement is what powers the funnel and feeds that left-hand side. Now, once we acquire customers or fill that left-hand side, there are key reasons why they stay with us. First, customers can continue to build their business and innovate on DigitalOcean, whether it's core compute, AI, or both. They only pay for what they use. There are no hidden fees, as Paddy mentioned. Customers do not get a bill and need a FinOps team to try and disaggregate it. Customers realize the strong total cost of ownership that Bratin mentioned. That is really important, not just for their existing work, but as they decide to grow or bring new workloads over. We have great white-glove support.
We provide a high class of touch that the hyperscalers simply can't. That not only draws customers in, it helps them stay. The proof that the funnel is delivering can be seen in our growing, globally diverse customer base, 638,000 strong. This may surprise you: we're actually the third largest cloud in terms of customer count, behind AWS and GCP. More importantly, you can see the growth in the number of our high-spend customers, those spending $600 ARR or more. You'll hear us classify these as builders, scalers, and scalers-plus. Most importantly, you can see the growth of our scaler-plus, or $100,000+ ARR, customers, growing 85% over the past few years.
Now, at our inception we focused on the developer, and DO is still a great place for developers to come in, try the product, and start to grow their business, but our focus now is really on the high-spend customers. They make up 88% of our revenue, growing at 16%. The revenue they generate is different. It's stable and it's durable. I'll get into exactly what that means, but that's important. If you take one piece away from this slide, it's that the larger the customers are, the more durable the revenue. Here's what I mean by that. They have overall higher revenue growth. They consume more products. Those products make them stickier. When they're stickier, they have a positive impact on NDR. If you look specifically at the $100,000+ ARR customers, you see 37% year-over-year revenue growth.
With all this, we still have a huge opportunity to drive expansion. We only have 5%-6% expansion penetration in our larger customers, and that's where we're going to point our resources. The net is this. We have a very strong track record of growing large customers. As Paddy mentioned, 165,000 digital natives, 500 customers paying more than $100,000 ARR. Here are just four examples of customers who have substantially grown their business on DigitalOcean: PriceLabs, Acura.ai, Blum, and Picap. Let's hear directly from Blum in their own words about their journey on DigitalOcean.
Yeah, hi. My name is Vladimir Marslyakov. I'm the CTO of Blum.io. We are building next-gen trading apps in Telegram and Web3 space. We're close to 100 million users right now.
I think DigitalOcean did a great job helping us scale our infrastructure in 2024 and continues to do so in 2025. I mean, for us, the main thing is reliability and support of our cloud provider, right? That is exactly what we get with DigitalOcean. The bigger cloud providers usually struggle with technical support. In the case of DigitalOcean, we were very surprised at how good the support was and how perfect the timing was. I think we had some requests for helping us configure load balancers that we could not do ourselves. We waited for a couple of months. We got it, and everything was fine. Speaking of DigitalOcean, if you compare even with the big cloud providers, I think it has found a niche with a very balanced approach to UI/UX, price, provisioning time, and SLA stability.
I mean, you are in the sweet spot of all these criteria. We scaled a lot in the user base last year. This year, we're scaling product-wise. Basically, I hope that most of the challenges that we have on this product we'll be able to solve with DigitalOcean.
Now, with 4 million digital-native enterprises out there, as Paddy mentioned, and fewer than 165,000 of them with us, we have a huge untapped market. That's just where we are today. Now let's shift to the new drivers that are going to help us drive expansion and growth, both new customer acquisition and growth within our customer base going forward. For 2025 and beyond, we have to improve on two dimensions. First, it starts with the product. As you heard from Bratin, we have all these great up-market capabilities that we're now delivering.
That is important because customers can now stay and grow with DigitalOcean. They do not have to leave the platform to grow. We have all these great capabilities that allow them to continue to grow their business. On the go-to-market side, we have to do two things. We have to drive new customer acquisition, and we have to expand our existing customers. To start, let's focus on the customer acquisition side, that left-hand side of the funnel. We have four new growth drivers that we are going to embark upon. The first is more of an improvement: increasing the yield of the existing funnel. Even though it is world-class, we still think we can improve conversions and yield. Second, we are going to leverage technology partners to widen the funnel, or widen the aperture, where they will actually drop customers into DigitalOcean directly.
We are layering on new dedicated motions, as Paddy mentioned. We have an outbound motion specific to AI-centric companies. Finally, we will invest in channel partners. Channel partners will do two things. One, they will just bring us deals directly. Two, they will bring in deals that are more qualified, and they will bring them in in the later stages of the pipeline. We will go into each one. To improve the already great PLG engine, we are intensifying our focus on high lifetime value customers. What does that mean? We are capturing more relevant information at sign-up so we can better qualify customers early in their journey. We can kind of create digital but bespoke onboarding processes to help them consume products faster and hopefully grow faster on the platform.
Just as we did for the cloud, we're going to do the same for AI when it comes to content. We had all this rich, thousands of pages of content for the cloud. We're doing that for AI. We continue to author rich, AI-focused content that's attracting and engaging developers. We feel that there's a lot of opportunity to grow the community through that. For in-person engagement, whether it's our global meetups or how we show up at events, we'll be more deliberate in our approach and attract and engage customers that fit more in the profile of who we're looking to serve. For technology partnerships, this is a great opportunity. We want to leverage the millions of high-value curated developers that these partners have. These partners widen the funnel. They create new front doors directly into our product.
An example is Hugging Face. They have a massive collection of AI models, over 1.5 million. We made it dead simple for developers to directly connect to some of their most popular models with a single click. Same with Laravel. Laravel is a leading PHP platform, and developers can seamlessly click and activate Laravel Forge right through DigitalOcean. So far, we have 5,000 active developers that have come through this partnership. These partnerships are great because they give us new front doors and a new top of funnel, but they also extend our technology ecosystem and give more value to customers so they can continue to build their businesses. For new logo acquisition, we're starting with AI. We spun up a very small, lean team to focus on AI, and that team remains extremely efficient and highly productive.
They target early-stage, venture-funded companies that are AI-centric early in their journey. We want to grow as they grow to further penetrate the AI market. Look at this team: they drove 160% ARR growth in Q4 alone, and we expect that to continue. Channel partners give us great leverage and really bring high-quality customers into the funnel, as I mentioned. Partners like GMI, ShadeForm, and StorageA are more on the AI side. They either provide value-added services on top of our platform, or they offer marketplaces where they resell AI-centric technologies. Partners like Persistent or Aquazeel, by contrast, are more systems integrators, SIs, who help facilitate workloads from the hyperscalers and bring new customers in as part of that relationship. Some of these partners have been in place for weeks, some for months, some a quarter or so.
We've already driven nine new scalers, $100,000+ customers, in the time that they've been associated with DigitalOcean. We have a pipeline filled with many more. To recap, we have new motions and channels to fuel customer acquisition and augment this best-in-class PLG funnel. Even though some of these started, as Paddy mentioned, in the second half of 2024, you can see the early success. 20% of customer revenue in 2024 came from these non-PLG channels. When you look at growth from new customers in their first 12 months, that's accelerating. That's what we're looking for. And this is without yet having realized the full impact of any of these drivers on either the acquisition or the expansion side.
Now let's move from the left-hand side of the funnel, the customer acquisition side, to the right-hand side and focus on expansion drivers in terms of tapping into our customer base. We are obsessing over the right-hand side of the funnel. We have a huge expansion opportunity here, and that's where we have our new resources pointed. We're assigning our biggest-spend customers and our highest-potential-spend customers, and we'll get into those details. We created a migration services team to either bring workloads from the hyperscalers directly or to work with our partners and help them facilitate workloads to continue to grow our business and drive expansion. We've assembled a team who is mining unassigned accounts, either customers in the early stage of their journey or the later stage, but accounts that are not assigned on a named-account basis.
In the first half of 2024, we began focusing on the top 450 accounts. We picked 450 and said, "Hey, let's assign what we call technical account managers." Their job is to drive nurture and adoption; think classic customer success. We wanted to provide them some mechanisms so that they can act proactively with these customers. We did three things. First, we instituted a health score that looks at usage, adoption, support tickets, and product upgrades and downgrades. The health score gives us an indication at any point in time of the temperature of the customer. Second, we created what's called the growth room. We said, "Of these 450 customers, what products can they grow into?" We worked with marketing to drive campaigns to try and push those products. Third, we created a war room.
The war room, really, think triage. You have product, engineering, sales, support, marketing. Anytime any of these customers had an issue or a hiccup, we swarmed on it, made sure we created a remedy, and pushed it out quickly. We saw the performance improve in that top 450. In the second half of the year, we expanded to 1,500. Providing that same treatment, we saw a 120 bps increase in NDR of these largest customers from the net expansion they drove. That gave us great confidence to further expand. For 2025, we increased that circle to 3,000. We have our technical account managers assigned to the top 3,000 high-spend accounts, but we also instituted a brand new growth account management team. Their job is to drive the commercials, the upsell, the cross-sell.
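The health score described above can be sketched as a simple weighted blend of signals. The inputs mirror what was named in the talk (usage, adoption, support tickets, product upgrades and downgrades); the weights and the 0–100 scale are illustrative assumptions, not DigitalOcean's actual model:

```python
def health_score(usage: float, adoption: float, open_tickets: int,
                 upgrades: int, downgrades: int) -> float:
    """Blend account signals into a 0-100 score; higher is healthier.

    usage and adoption are assumed normalized to the 0..1 range.
    All weights here are illustrative assumptions.
    """
    score = 50.0
    score += 25 * usage
    score += 15 * adoption
    score -= 5 * open_tickets   # friction signals pull the score down
    score += 5 * upgrades
    score -= 10 * downgrades    # downgrades weigh more than upgrades
    return max(0.0, min(100.0, score))

print(health_score(usage=0.8, adoption=0.6, open_tickets=1,
                   upgrades=1, downgrades=0))  # 79.0
```

The useful property is that any single score can be read at a glance as "the temperature of the customer," while drops in the score flag accounts for the war-room treatment.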
We pulled our solution architects in closer to those 3,000. We created a pod-like structure to provide a little bit more enterprise-like treatment, but without enterprise cost and complexity. In addition to that, we then looked at the next 5,000 customers, but not by spend, by potential. Through propensity modeling and analytics, we said, "Okay, what customers have early characteristics of the top 3,000?" We picked 5,000, and that is where we specifically have growth account managers assigned. Their job is to farm those accounts, create upsell, create cross-sell, and bring them up to high spend. We are confident that as we penetrate each of these rings, we now have 8,000 that are assigned. We can continue to assign accounts and drive increased expansion.
To help support these efforts for the technical account managers and the growth account managers, we had to institute a migrations team, because part of the expansion effort, or a lot of it, as Bratin spoke about, is moving workloads from the hyperscalers. The customers we engage with on migrations are very excited to come into DigitalOcean. Why is that? Because once you go through the value proposition, customers, whether it is starting with a test workload or a dev workload, are excited to try. As they go through this migration, as Bratin mentioned, they realize an immediate cost savings upfront. And as they go through the total cost of ownership, they realize how much less dependent they are on the functions and formality you need in a larger cloud.
We have a program where we shield customers from the cost and the work, meaning that if we're doing the workload move, we'll do that at no charge. If a partner is doing the workload move, we'll actually fund the partner to help do the work, and we'll try to minimize the burden on the customer as much as possible. The idea is to relieve the burden. Once customers come to DigitalOcean, they have a completely different support experience. It is white glove. It's high touch. It's something that the hyperscalers can't afford to apply to the types of customers that we're bringing in.
I kind of can go on and on about the greatness of our migrations, but I think what may be better is to hear from another customer, Picap, who went through this process, and they can talk about the benefits of moving to DigitalOcean.
We initially used another provider, but found it way too complex and costly. We were working with AWS and with that platform, Azure too. We switched to DigitalOcean because it's more straightforward in everything we do all the time, and it's more affordable in terms of pricing. We focus on motorbike-based ride-hailing, micromobility solutions, I would say, tailored specifically for Latin America at this moment. We offer also logistics and last-mile delivery services. Yeah, I think the support has been great. The thing that you can expect from DigitalOcean is that you guys are always there for the company. It doesn't matter how big or small it is. This is something that you cannot find, obviously, with one big cloud provider. It's something really great to help you shape the product, to help you shape the future also of DigitalOcean.
Also, something great that I say for myself is that actually we are a really big shark in DigitalOcean. Our voice is heard. So that's something that matters.
You can see that as customers engage with the company, it is a different experience, right? It is low cost, but we still have to have the upmarket capabilities that Bratin's team was building. When they get into the support and realize that we're there to help them grow their business, it's a completely different experience. Not only Picap; you'll hear from others in the videos, and you'll hear from our customers directly. Our final driver is really around farming the base for the next $100,000+ accounts. This is a very low-risk, high-yield effort. We created models that look at customer behavior and usage to identify those with early patterns suggesting they could grow large.
The idea was, as I mentioned before, to look at the unassigned accounts and leverage these signals to create a forced prioritization for the reps. We'll look at something like, hey, someone just attached an SSH key within a certain period of time. They had a workload on two data centers that spread to three or four. Or they're consuming more product or new features faster than the mean of their population. Any of those are indications that, hey, someone may be looking to grow. They may be looking to procure more. We take those and send them directly to the reps.
A rep opens up their dashboard in the morning and says, "I have 15, 20, or 30 active opportunities that I can exercise." The reason they're so valuable is that the customer is at some high point of influence, of doing something. We have been able to take advantage of that. When you look at the long tail of the funnel, we're very confident we can continue to staff this as we show success. We are confident that these drivers are going to drive great revenue in 2025. The early results show that we're on track. We realized an 1,800 bps increase in NDR from $100K+ customers in 2024. As I mentioned, that's where we have our focus.
The NDR of these customers is still improving and still accelerating, to the point where 37% of the ARR growth of $100K+ customers was realized in Q4 2024 alone. Just like on the acquisition side, these motions are not yet mature. Some of them just started a few months ago. Some started in the back half of last year. It's no coincidence, when you look at the products aligned to the upper right here, that the growth is also aligned with innovation. All the stuff that Bratin's team has been doing, the go-to-market team has been consuming and using to drive customer growth and revenue. If you recall why this is so important: everyone loves large customers. No one's going to say they don't want a large customer.
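Since net dollar retention comes up repeatedly in this section, here is a quick sketch of the standard calculation: a cohort's starting revenue plus expansion, minus contraction and churn, divided by starting revenue. The figures below are hypothetical:

```python
def net_dollar_retention(start_arr: float, expansion: float,
                         contraction: float, churn: float) -> float:
    """Revenue retained and expanded by an existing cohort over a period."""
    return (start_arr + expansion - contraction - churn) / start_arr

# Hypothetical cohort: $100M starting ARR, $20M expansion,
# $3M contraction, $10M fully churned.
print(f"{net_dollar_retention(100e6, 20e6, 3e6, 10e6):.0%}")  # 107%
```

NDR above 100% means existing customers grow spend faster than churn and contraction take it away, which is why the basis-point improvements cited here translate directly into revenue growth.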
The value of our customers when they're large, the revenue is more durable and stable. They stay longer. As Paddy mentioned, they grow faster and they retain better. The great news here is that we can scale without materially changing the financial profile of the company. We can do so while remaining highly efficient. Let me underscore that because that's a very important point. We just reviewed seven new growth drivers across customer acquisition and expansion. We kind of have this philosophy. We want to nail all these before we scale it. We want to invest into growth, but we don't want to get too far over the tips of our skis. At $100 million in 2025, we made modest additions to the sales team. We had a lot of roles that we actually repurposed to point towards revenue.
We operate a high-velocity inside sales model. As I mentioned, I have a lot of experience with that, and that's really what is going to power a lot of our new motions. Our change in sales and marketing spend, the change in cost, is minimal relative to what we can generate. We're also leveraging AI and a lot of propensity modeling to drive demand for the team. In closing, what I want you to leave here knowing is that we have a world-class product-led growth engine, and we're going to build new motions to complement it. We're very confident in those based on our past history and success. We're adding a named account team to focus on our most valuable and most important customers.
We're going to scale the go-to-market again in a highly efficient way that does not change the financial profile of the company. We'll continue to remain highly efficient as we do so. Thank you. We're actually going to take a 15-minute break, and then when we get back, our CFO, Matt, will kick us off.
All right, thanks everybody for coming back. I really appreciate that everybody's here on a rainy Friday during a, let's just say, a very eventful week. I'm very grateful for you attending and listening to our story because I think we've got a great one. My name is Matt Steinfort. I'm the Chief Financial Officer at DigitalOcean. I've been here two years, and I'm the longest tenured executive that you've heard from today. Thank you, thank you. I'll try to make it through the day, and we can keep that streak going. The reason that I'm excited is when I joined the company, I saw DigitalOcean as a phenomenal opportunity. Sitting or standing where I am today, I think it's an even bigger opportunity.
With the new team that we have, with the new focus that we have, the strategy, the increased pace of execution, I think this is a very different company than it was just two years ago. I think that the opportunity in front of us is even bigger than I had contemplated. Paddy and Bratin and Larry have talked a little bit to you about the strategy and the changes that we're making and the progress that we've made. I'm going to spend my time with you talking about the financial implications of this and how that translates into the medium-term and long-term financial outlook. I'm going to start, and I'm going to frame my conversation using the same kind of three takeaways that Paddy started the conversation with earlier this morning.
We have an incredibly compelling position in the market, and I'm going to share some statistics that basically give that credibility. We're accelerating our growth. Larry talked about a lot of the levers that we're pulling, and I'm going to dig into how those levers are going to translate into that higher growth rate for us over the medium term. We talked about on this path to higher growth and reaching 18% to 20% over the medium term that we would do that in a really disciplined and profitable way. I'll dig into all of the profitability drivers to give you a sense of how these will evolve over the coming years. To start, to me, the biggest evidence for the position we have in the marketplace and the durability is the scale and scope of our customers and our revenue base.
We exited last year with $820 million of ARR, so clearly we're serving someone's needs. We have over 638,000 customers, and there's no real concentration. We're distributed around the globe: 60% of our revenue comes from outside North America, and 70% comes from outside the United States. We have no real concentration from a customer-size, use-case, or vertical perspective. We've got a highly diversified customer base. We've grown this business in a very profitable way. You've seen us drive improving EBITDA margins. We generate a lot of cash, and we've done a very good job of returning value to the shareholders, increasing non-GAAP earnings per share over 400% since IPO. We've built this business on a highly durable product-led growth engine that Larry described, that Paddy introduced at the beginning of the day.
We add roughly 13 percentage points of growth from our product-led growth engine, just adding new customers to the business. We do this in perhaps one of the industry's most cost-effective models, where we only spend 7% or so of revenue on sales and marketing. It is a very, very efficient model. Once we get customers, they stick with us, and they stay with us for a long time. Our churn has been very stable over the past several years. It is in the 10%-11% range, and it just does not move. It moved a little bit at the end of COVID, and it came right back. When our customers come, they are very loyal, and they want to stay with DigitalOcean. You have seen that from some of the examples that folks have shared with you through some of the videos.
We have now added an entirely new growth vector that will enable us to grow our revenue even faster with AI. While we built this business, we have done this in a very profitable and, I'd say, disciplined manner. When Paddy talked about the weighted rule of 40 earlier and how that is a guidepost for us, that is not an academic exercise for us. It is actually how we measure the company. The entire employee base's bonus program is based on the weighted rule of 40. The executives' equity plan and performance equity is based on the weighted rule of 40. We believe that balancing growth, which is incredibly important for us right now, with profitability is the right way to run this business over the long term. We feel like we are doing a pretty good job of it so far.
We exited 2024 with a weighted rule of 40 at just under 28%. That was on the back of 13% revenue growth and 17% free cash flow. As you've heard from the other folks today, we have the ability to drive that number up and prove it meaningfully over the medium term. Before I get to the medium term or the long term, let's just focus on the near term for a second. It's April 4th, and we're a couple of days into the new quarter. As you saw in Q4, we had very strong results. We've started to show some really good momentum. That momentum carried forward into the beginning of 2025. We just want to make sure everybody understands we're reiterating our guidance for both Q1 and for the full year.
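The talk doesn't spell out the weights behind the "weighted rule of 40," but one weighting consistent with the cited figures (13% revenue growth and 17% free cash flow margin landing just under 28%) is growth counted at 1.5x and margin at 0.5x. Treat the weights below as an assumption, not the company's definition:

```python
def weighted_rule_of_40(revenue_growth_pct: float, fcf_margin_pct: float,
                        growth_weight: float = 1.5,
                        margin_weight: float = 0.5) -> float:
    """Weighted blend of revenue growth and FCF margin (weights assumed)."""
    return growth_weight * revenue_growth_pct + margin_weight * fcf_margin_pct

# The 2024 exit figures cited in the talk: 13% growth, 17% FCF margin.
print(weighted_rule_of_40(13.0, 17.0))  # 28.0
```

Weighting growth more heavily than margin matches the stated emphasis: growth is "incredibly important for us right now," so the same one-point move is worth more on the growth side than the margin side.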
While you may be disappointed that, okay, we're just reiterating, we're not giving you a lot of color, it's only four days in. We're going to be talking more about this in May when we get to our Q1 earnings. What I'll tell you is we're highly confident in our numbers and in our ability to perform versus expectations, as we've shown in our recent track record. This sets us up for a compelling medium-term outlook. We believe that we can drive weighted rule of 40 to 35%+ over the next several years. We'll do this by driving revenue growth to 18%-20%. As Larry talked about, the contributors to this will be driving more new customers, boosting the front of the funnel, and growing expansion with our existing customers. We'll do this in a very profitable way.
We'll continue to keep the margins in roughly the same ballpark they are in today on both an adjusted EBITDA and an adjusted free cash flow basis. We're very focused on balance sheet strength. We have an incredibly strong balance sheet, but we're looking to further delever as we execute over the next several years. I have talked to you about the foundation. I have given you a few statistics about why I think we have such a compelling position in the market. I have talked about the momentum we're generating, and I have talked about the near-term and medium-term outlook. Now I'm going to break this down a little bit more and describe some of the levers we're using to grow this business. It all starts with re-accelerating organic growth.
By organic growth, I mean growth that's not driven by M&A and not driven by an across-the-board price increase. It's growth from your existing customers and the addition of new customers. This is an incredibly important thing when you think about the durability of a business. You can always do M&A, and every once in a while, maybe you can do a price increase, but the real measure of a business is how effective you are at growing organically. The challenge the company has had, and Paddy alluded to this and Bratin talked about it, is that from around 2019 through when we went public, and maybe for the first couple of years after, we did not invest in product innovation and did not keep up with the pace of our customers' needs.
As a result, we saw a decline in our organic growth. This was masked a little bit because we did a price increase in 2022, and we also bought a big business, Cloudways, in 2022, so that kept our headline growth up for a bit. When you look at the actual organic growth in 2023, it had dipped to about 7%. Fortunately, with Paddy coming on, along with Bratin, Larry, and the rest of the team, we've made a tremendous amount of changes and been able to invert that, driving growth from 7% in 2023 to 13% in 2024. We feel very good about the trajectory we're on based on all the initiatives we've put underway.
To peel this back a little more, the two components of organic growth are pretty simple. The first is revenue from new customers. We have an incredibly strong product-led growth engine, as we've all talked about. When you think about that in the context of Paddy's five stages of evolution in the digital native enterprise ecosystem, someone who's got a really strong product-led growth engine can count on it all the time; it just goes up and up. That's a phenomenal base to build on. You can see that in our results from 2021 to 2023, where it just steadily crept up. We didn't spend any more money.
In fact, in some of the earlier years we probably spent more money than we should have, and it still continued to creep up. The problem is that if you're relying solely on product-led growth, the percentage growth you get from it doesn't increase with the size of your business. You start to cap out. That is what you saw in our growth rates. Our growth rates started to decline. We were adding more incremental dollars every year, but it just wasn't moving the needle as much. What you need to do is add additional growth vectors. That is what we've done. We've added three new go-to-market motions. We've also introduced a new product capability with the emerging AI capabilities.
As a result, you see in 2024 we had a 26% increase in the dollars associated with new customer growth, which brought our percentage growth back to about 14%. The key for us on a go-forward basis is to continue to invest in those new go-to-market motions, to take advantage of the AI opportunity, and to use that to continue to drive about 13 percentage points of growth from new customers. The second part of the formula is getting expansion from your existing customers. In our kind of business, a cloud business, you have to grow your existing customers. You can't have a leaky bucket where customers are shrinking on average over the period. You have to be able to drive growth. The company was very good at that in its early stages.
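The two-component decomposition described here can be sketched as a simple sum. The split of the 2023 figure into roughly 13 points from new customers and a 6-point expansion headwind is an illustrative assumption consistent with the numbers in the talk (7% organic growth in 2023, 13% in 2024, ~13 points from new customers), not a disclosed breakdown:

```python
def organic_growth(new_customer_pts: float, net_expansion_pts: float) -> float:
    """Total organic growth as the sum of its two contributions, each
    expressed in percentage points of prior-year revenue."""
    return new_customer_pts + net_expansion_pts

# 2023-style: a steady new-customer engine offset by negative expansion
# (expansion was explicitly a headwind that year; -6 is an assumed value)
growth_2023 = organic_growth(13.0, -6.0)

# 2024-style: similar new-customer contribution, expansion roughly neutral
growth_2024 = organic_growth(13.0, 0.0)

print(growth_2023, growth_2024)  # 7.0 13.0
```

The point of the sketch is that the same new-customer engine produces very different headline growth depending on whether expansion is a headwind or a tailwind, which is why the expansion turnaround matters so much.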
As I talked about on the earlier slide, between the lack of product innovation, hitting a choppy market for a little while at the end of COVID (which I'd argue was probably more our own execution, not innovating for our larger customers), and then hitting those customers with a price increase, we drove expansion the wrong way. Expansion was actually a headwind for the company in 2023. As I said before, we've largely turned that ship around, and we made a huge improvement in net expansion in 2024. On the back of the progress we've made with these large customers, we're positioned to have net expansion be a positive in 2025.
As we roll that forward, not only will we continue to improve the performance of the core cloud and its retention with our customers, we also have the benefit of the emerging AI capabilities. As we get into inference cloud and more GenAI, these are more recurring, predictable businesses where demand will grow, versus a training workload where you do not know whether you are going to have the same kind of demand from one month to the next. That will also contribute to expansion. Those last two points are incredibly critical, and I am going to pause here for a second because I think they are just so important. You might say, okay, why are you up there saying you are confident you are going to grow 18% to 20% over the next several years when you exited last year at 13%?
What I say is: come back to the conversation Paddy had earlier this morning about the five stages of a company's evolution serving digital natives. We've built a company with over $800 million of ARR, with 638,000+ customers, with 500 customers that spend more than $100,000 in ARR, and with 25 customers that spend over $1 million, all on a single growth vector: product-led growth. That's the only growth vector we really had for the majority of the company's existence. On top of that, our product strategy was almost exclusively focused on small developers, which was great when the company started, but it ignored the needs of the larger customers and created that leaky bucket where we had defection challenges and just weren't growing with our existing customers.
Fast forward to today: in just the last 12 months, we've introduced three new go-to-market vectors, which are already gaining traction. We've refocused and re-energized product innovation. You've seen all the great things Bratin and team are cranking out and how well our customers are responding. We have an entirely new growth vector with AI/ML. As we sit here as an executive team and as a company, we feel really good about our ability to accelerate growth because we're a phenomenally different company than we were even just 12 to 18 months ago. This leads us to a very clear revenue growth algorithm, and it comes back to what I've said. We need to improve the rate at which we add new customers, and we're doing that. We're doing that by adding AI.
We're doing that by widening the funnel, bringing on channel partners to bring in new, bigger customers, widening the technology partners to bring in a lot of volume into our self-serve funnel. We're doing it by also expanding the rate at which we can grow our existing customers. You've seen that in the improvements that we're generating in NDR. This positions us well for 18% to 20% growth by 2027 and also puts us on a path to get back to, hopefully, at or exceeding the market-level growth around 20%. We've talked about the growth levers, and now we need to talk about the profitability side of this. As we've said, we're definitely focused on accelerating growth, and we believe we have a real clear path there, but we need to do it in a profitable way and continue to make disciplined investments.
I'll take you through the cost side of the equation. To start, the company has a very strong track record of controlling and managing costs. I mean, I think that's pretty obvious. If you look at all of the key profitability metrics, they've all improved over the course of time since the IPO. We've driven improvements in gross profit despite the fact that in the recent years we picked up more AI. It's still in its infancy. We're still in startup mode, and the margins are clearly not as high yet as they are in the cloud, but we've been able to absorb that and still drive gross margin improvement. We've driven a lot of adjusted EBITDA improvement, over 1,000 basis points of improvement there, clearly effective at kind of dialing our costs in and getting them under control. We've driven a lot of CapEx efficiency.
You can't see it here; I'll talk to it in a later slide. If you think about the improvements we've made in core cloud, it has enabled us to increase our investment in AI without really changing the CapEx intensity of the overall business. That's a pretty big feat in today's market. We've also paid a ton of attention and been very, very focused on stock-based comp and cash flow generation so that we're returning value to shareholders. I'm going to click through each one of those and give you a little bit more detail. Bratin talked about the gross margin opportunity. We have a material opportunity over a multi-year period to improve our cost of revenue. The two biggest drivers of cost of revenue are colocation and power, and depreciation, which is basically just how the CapEx flows into the cost structure.
From a data center and power standpoint, you've heard and seen us executing on the beginnings of our data center optimization strategy with our Atlanta facility that we just brought on in this past quarter. You may not have heard, but we're also consolidating one of our London facilities. We have a very expensive footprint, and we can manage that cost down over time while also being able to provide a lot more geographic coverage and do it in a more cost-effective way. That is part of our strategy going forward. Bratin and team have also enabled us to get better utilization out of our existing fleet of infrastructure, both on the CPU side and the GPU side. We expect that utilization to drive margin improvements over time as well.
In addition to improving unit costs, we have the ability to benefit from a mix shift in our business. If you think of the unit cost of any one product that we sell, our PaaS products, our platform as a service products, have higher value. We charge more for those, and there is more value added for the customer. They are growing five times as fast as our infrastructure as a service products, which means that within our core cloud, PaaS will be 30% of our revenue in the next several years. If you look at it within AI, and Bratin talked about this, as we move to more platform and application layer products, they have higher value and higher margin. That will be a benefit to our overall margins as well.
There is also the fact that when we sell a GenAI product, it tends to pull through cloud revenue, which also has higher margins. There is a lot of mix benefit that we believe we'll get over time. In the immediate term, clearly the ramp-up of infrastructure on the AI side is a counterforce to that. We believe that with all of the benefits and optimization potential we have, we can mitigate that margin pressure. From an OpEx standpoint, we clearly have room to continue to drive operating leverage. As Bratin said, over 50% of our new roles have been hired in lower-cost, high-talent markets. We are now at a point where about 60% of our headcount is based outside of the U.S. in some of these markets.
We've also done a really good job of controlling overhead, dropping G&A as a percent of revenue from 17% to 14% over the last several years. We haven't even started to really benefit yet in a measurable way from the financial standpoint, from the automation and the AI that Bratin and the team are trying to implement within our organization to make ourselves more effective in how we operate the cloud. With all of these opportunities, we're not only looking for kind of unit cost improvements around OpEx and CapEx, we're also looking to constantly reevaluate all of the dollars that we spend to prioritize those investments towards the highest priorities we have.
You will see us constantly re-examining what we spend and reallocating resources to the highest-priority areas, which is a way of helping us invest in new parts of the business and the higher-growth opportunities without increasing overall costs. This all leads to higher EBITDA margins. As I said, we've driven over 1,000 basis points of improvement from 2021 to 2024. We believe that with the revenue growth we are driving and the investments we are going to make in the business, offset by the efficiency we are still driving, we can maintain EBITDA margins in approximately the same range over the next several years. From a CapEx standpoint, and I alluded to this earlier in my talk, we have driven a lot of improvement in capital efficiency in the core cloud. You can see that in the decline from 2021 to 2022.
It is masked a little bit, even the amount we improved in 2023, because we started to have AI spend as early as 2023. What you have seen is that we have reinvested a lot of that efficiency gain into the AI platform. If you look over the last several years, we have not fundamentally changed the cost structure of the company. We are certainly spending a bit more capital now than we were, and we will continue to spend at about this level, in the high teens to low twenties, but it is to drive the growth you are seeing in the results from our recent progress. All of this delivers solid free cash flow, and we are committed to continuing to generate healthy free cash flow. What I would leave you with is that we are very, very focused on growth.
If we see an opportunity to invest that has a good return and a compelling impact on our growth, we'll make that investment. The plan we're articulating to you right now is a perfect example. Getting from 13% growth today to 18% to 20% growth over the next three years is five to seven percentage points of additional revenue growth. If you ask, on a rule of 40 basis, what is that going to cost you? It's only going to cost us, say, three or four points of free cash flow margin. Even without weighting it, on a straight rule of 40 basis, we're gaining more revenue growth than we're giving up in free cash flow margin.
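The trade-off described here can be checked with simple arithmetic. The specific pairing below (+6 points of growth for -3 points of margin) is an illustrative choice from within the stated ranges of five to seven points gained and three to four points spent, not a disclosed plan:

```python
def rule_of_40(growth_pct: float, fcf_margin_pct: float) -> float:
    """Straight (unweighted) rule of 40: revenue growth plus FCF margin."""
    return growth_pct + fcf_margin_pct

today = rule_of_40(13, 17)         # 13% growth, 17% FCF margin -> 30
plan = rule_of_40(13 + 6, 17 - 3)  # +6 pts growth costing 3 pts margin -> 33

assert plan > today  # more growth gained than margin given up
print(today, plan)   # 30 33
```

Any pairing where the growth gained exceeds the margin spent improves the straight rule of 40, and a growth-weighted version of the metric would improve by even more.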
We will continue to make decisions like that as we execute over the next several years to maximize the opportunity for this business. We spent the majority of today, appropriately, talking about the organic growth opportunities. That is clearly our number one capital allocation priority. I am going to briefly talk about the other three priorities just so you have a full picture. Those priorities are share repurchases, M&A, and maintaining balance sheet flexibility. From an equity and dilution standpoint, I think we have a very good track record of putting shareholder interests at the forefront.
We've reduced our stock-based compensation by over 600 basis points over the last couple of years, while at the same time attracting, retaining, and building a very talented executive team, along with talent across the technical organization and the whole company, whose impact you're witnessing here today. We've also taken a lot of shares out of the market: over $1.5 billion of repurchases over the last several years, reducing the share count by 13%. We will continue to focus on these priorities going forward to offset dilution and return value to shareholders. From an M&A standpoint, M&A is certainly part of our history. We've used it to accelerate revenue, as we did when we acquired Cloudways.
We've used it to accelerate the product roadmap, as we did when we acquired Paperspace. We will continue to look for selective, accretive acquisitions to accelerate our plans. The plans we've laid out today are based on organic growth only; they're not predicated on acquiring our way into those numbers. If we were to find an opportunity that was accretive, we would certainly take a hard look. From a leverage standpoint, I love the business model we're in, coming from a very different industry before this. We're delevering right now, and I really enjoy it. We're generating cash, we're growing EBITDA, and we're already on a path to get to less than two and a half times leverage by the end of 2027.
Previously, I've stated a long-term target range of two and a half to three times. As we've evaluated our opportunities and taken a look at the market, we think we can safely drive leverage under two and a half times, and we think that's a good target for us. People ask, and I'm sure there are a bunch of folks in this room that are going to ask in just a few minutes, "Well, what about your current debt? You've got a billion and a half that's due at the end of 2026." Every time I get this question, I start with, "This is the best debt instrument I've ever seen in my entire life." We have a billion and a half of debt with a zero coupon, so we pay nothing.
In fact, we make money on the interest by keeping that cash in the bank. People are effectively paying us to borrow their money. I don't know if we'll be able to do that again, but it's a great thing to have. What I can tell you is we're very cognizant of that maturity coming up at the end of 2026, and we anticipate addressing some, most, or even all of that maturity at some point this year before it goes current. I'll close with a recap of the plan and why we're really confident in it. The strong performance we demonstrated in Q4 and the momentum we've seen heading into 2025 give us confidence in our guidance for 2025, and we've reiterated that.
As you think about the new go-to-market motions we've added and the product velocity that really started to increase maybe halfway through last year, those changes have only just started to impact our business. There's a lot of runway there. That gives us confidence in our medium-term outlook of driving 18% to 20% growth with mid-teens free cash flow margins. If you think about it from a longer-term perspective and ask, "Well, why do you think you can get back to at or above market-level growth?" I think of the TAM. It's a giant market we're in. Companies that are much bigger than us are growing faster than we are. We should be able to grow at least at the pace that they are.
I think about the large customers, where we're really just starting to gain traction. Larry talked about the concentric circles, and we're touching more of these customers. We haven't even really scratched the surface of that. I think about AI as a "holy cow, you weren't even doing this 18 months ago" opportunity. It's an entirely new growth vector. We sit here thinking we are well-positioned to grow at least at the industry level or better. That's why we're hoping that as you see the potential shareholder value that will be created as we go on this journey, you'll want to participate with us. With that, I'm going to bring Paddy back on to close out the session.
Thank you, Matt. I'm not going to talk to this slide; I'm just going to take a couple of minutes and speak from my heart. Hopefully you had a really good two and a half hours, and I want to bring us home by coming full circle to the journey I started describing to you. I was brought here to execute on steps four and five that I showed you in my first slide. Essentially, that means, number one, accelerate our growth by scaling with our largest companies, because we know we have them in our base. We have 165,000 of them. You heard from Bratin how we are starting to execute on our product development roadmap and how fast we are launching some really material features that these customers are absolutely hungry for.
You also heard from Larry how we are adding complementary go-to-market motions to really go after and service these large customers and make them successful on our platform. With that one-two punch, I hope you've seen enough to give you the conviction that we are well on our way to accelerating our growth on the back of our scaling customers that are expanding rapidly on our platform. That's number one. Number two is there comes a moment in time in every company's history where there is a big seminal opportunity in front of us. For us, that is the world of AI, where our customers are facing this existential question around how is AI going to change their world. Right now, AI is super complex and super expensive for them.
They are literally asking us to democratize AI and give them access to it in a way they can consume. To me, that is the world of inferencing. You heard from Bratin how we are thinking about developing a full-stack cloud. In fact, all of the boxes you saw today are already in the hands of customers, and we will be going GA in a few weeks. We are getting tremendous feedback and great adoption with AI. I feel very confident that over the next several quarters, we are going to be executing on the infrastructure, platform, and agentic layers of our AI stack. I hope you got conviction that we have a great team, we have an even better strategy, and, most important, we are executing. We are executing with speed and velocity like I have never experienced before.
To think about it, our executive team has been together for only two full quarters; we're entering our third quarter together. And we had two and a half hours of content to tell you about what we have accomplished in literally two quarters. With that, I hope you have enough conviction to go and buy a little bit of DigitalOcean and spread the word. We are really excited to host you. Thank you so much for coming, especially in a week like this. We really appreciate it, and we do not take it for granted. Now what we're going to do is set up a few chairs here so that we can transition to Q&A.
The way we are going to do the Q&A is we're going to have a couple of mics. If you have a question, please do raise your hands, and I will try my best to play emcee. We're going to have all the speakers up here on stage, and we'll be happy to take as many questions as we can in this compressed timeframe. As you all know, our earnings call is right around the corner. We'll be happy to report our earnings. Matt talked a little bit about our Q1, so we'll be happy to answer questions there as well. With that, let's welcome the speakers back on stage, and we'll transition over to Q&A.
Yeah.
Thanks for doing this. Yeah, Mike Cikos from Needham. Two questions here. The first, on the 5% to 7% growth that we're looking for from those existing customers, how do we think about where that's coming from across the customer base? Should we be thinking about 80%+ or 90%+ of it coming from the Scalers and Scalers+, just given the amount of dollars driven or the product adoption you guys are seeing? Or is that not necessarily the right way to be thinking about it when you're providing some of these targets?
Let me start, and then Matt, you can add in. There are two dimensions I think about, Mike. First of all, thank you for your question. When I look at the expansion, the 5% to 7% Matt talked about from our existing customers, that comes from two different dimensions. One is customer graduation, as you said, like Scalers+. As Larry talked about, we're pushing Builders to Scalers, Scalers to Scalers+, Scalers+ to $500,000+, and $500,000 to $1 million+. Right? We have a tremendous on-ramp of graduating customers from within our base and getting them into the cross-sell and up-sell of our existing product features. Bratin had a slide that talked about how the new product features are driving more incremental revenue every quarter. That graph is going up and to the right.
The interesting thing there is that not all features are inherently monetizable. Right? Some features we charge for, and some are just part of doing business on the DigitalOcean platform. That is one vector. The second vector is getting more workloads from existing customers. The third one is that we also have a fairly robust set of AI customers. Especially in inferencing, as they start scaling, we are starting to see that footprint grow from our AI customers. That is how I look at it. Matt?
Yeah. No, I think you hit it right on the head, Paddy. The Learners population, that big bulk of customers, tends to just kind of hover. Most of the growth we'll get in terms of expansion will be from the Builders, the Scalers, and Scalers+, and from the AI customers as we weave that into expansion over time.
Can I ask just one more? I think it was going back earlier in the presentation, but there was 240 basis points assigned or expected from being able to sweat your assets harder, prolong the asset life using AI. It is a two-parter here. The first is, is that in any way contemplated in this calendar 2025 guide, or is this all just on the come as we think about 2027? The second piece is, I have not heard a company be that specific as far as the expected benefit coming from AI. It reminds me of, if you go to calendar 2022, a lot of software companies were saying, "Hey, we are in a different macro, but we are being more cautious on spending. We are being more thoughtful.
We're optimizing." Nine months later, the customers of all those software vendors started to do the same thing. Right? If I think about the growth targets we have from you guys, are you in any way contemplating your customers getting smarter in how they use you because of AI as well? Does that make sense? I'm sorry, long-winded question.
There are two parts to the question. Right? One is the sweating of the assets. Maybe Bratin, you can start with that one, and then I can take the second one.
Yeah. I think we are being very deliberate and methodical in how we are using AI. You saw all of those numbers, the productivity improvements. That is allowing us to sweat assets for longer, and that is what is leading to the 240 basis points of improvement we talked about. You asked about others not having been that specific. Our motivation, our operating modus operandi, has been that we have got to instrument everything we do, and we have got to measure everything we do. Otherwise, you do not know what you are driving. Right? As Paddy, Larry, and Matt mentioned, it is just a new operating discipline we have now, where we are really measuring everything we are doing and making sure we are not just investing for the sake of investing.
On the second part of your question, Mike: we anticipate, expect, and are enabling our customers to be smarter about how they use our platform. The demo that Bratin showed of the Cloudways Copilot is exactly that. As I was explaining to some folks over the break, one of the biggest drains of time for us, as well as our customers, is when there is an incident, because stuff happens on the cloud. When there is an incident, people generally spend hundreds of hours trying to figure out what is happening and where things are going wrong, tracing it back to development check-ins and things like that. That is what we are attacking first, asking, "Okay, where is the most human effort going?" That is what we are impacting.
We are actually helping our customers to use us better and stop wasting time. We are going to be a big catalyst and an enabler for our customers to use us more efficiently, as well as build tools to reduce wastage and be more productive in their own lines of businesses.
Oh, great. Good morning. Thank you all for having us. We really appreciate all the detail in the presentation. Maybe for Paddy and Bratin, I want to ask you about the durability of growth.
The what?
The durability of growth. So many of the dynamics that you're talking about, the value prop you have for your customer base, were true five years ago. Help us understand what's different this time in being able to translate that value prop into growth. On the AI piece specifically, I know you stripped that out of your NRR because you tend to see variability in experimentation. It's growing 160% year-over-year; help us understand the durability of the AI piece in particular. Thank you.
Okay. Great. I'll take the first question, and then Bratin can answer the second one. The first one, what is different this time? This is different. All joking aside, what is different? That is why in my recap, Gabriela, I said that what really matters is execution. On the execution piece, given that all of us are very deep technologists, we feel we understand the needs of our customers at a very detailed level. Not only us; we have assembled a team of real world-class, tier-one technologists, and many of them are here in the audience today. One, I'm very confident that we understand what our customers are going through. Number two, from an execution point of view, I think in two quarters we have shown what we are capable of doing.
In fact, Bratin is accelerating his product delivery. That's number two. Number three is we've never had a real go-to-market motion outside of our product-led growth. In the short time Larry has been here... I always say a high-velocity inside sales motion is really, really hard to nail. There are very few people who have nailed it and scaled it, and that is what we are trying to do. We have a unique luxury that most companies do not have, which is our incredible customer base. There are two things I had to take out of my presentation. Number one is the share of wallet we have with our existing, even our successful, customers. We have a lot of room to grow with them. As we are bringing out new capabilities, like the multi-cloud support we released this week, we can now connect to on-premises environments.
We can connect across different clouds. These are really meaningful capabilities we never had, and this is what will enable us to get a bigger footprint with our existing customers. As Larry mentioned, we are also doing things in a very efficient, low-risk, high-yield manner, both farming our base and bringing in channel partners. He talked about how we have added nine new Scalers, $100,000+ accounts, from just three partners in, what, 90 days? Yeah. I think these are all the reasons why I believe this time is different. The previous management team did incredibly great things in different dimensions, but in terms of accelerating our growth by scaling with existing customers, I think we are absolutely on the right track, and we are picking up a lot of momentum.
The number two thing is something that Bratin can answer from an inferencing point of view.
As we were saying, we are also focused a lot more on inference. That's a production application, and it's much harder. You wouldn't usually want to take a production application out and walk away. We have had some really interesting situations along these lines. Going back to the previous question, there's actually a customer we are talking to now: they've seen us use AI for a particular thing we're doing, and they want to adopt it inside their own company as well. This is what happens as we move more toward "let's not just build AI, let's use AI." The use is about getting business value. Once you've gotten business value, it's production, and once it's production, it doesn't get taken away.
That is really the reason why we did not chase after those big training workloads, as Matt was talking about, but really more focused on the inference side of the house.
Thank you.
Hey, Raimo.
Hey, Raimo Lenschow from Barclays. Two questions. First, if you think about your move upmarket, and obviously then you have more complexity with the customers, how do you decide how far up you want to go? What's the competitive landscape then? Is it just taking the low end of AWS, et cetera? There are also other clouds that are playing in there. And I had one follow-up.
Yeah. I do not think of this as low-end and high-end. What our customers want, we will deliver, regardless of how high-end they are. I feel like we have the self-confidence that we can measure up to any cloud provider. We are winning cloud workloads from the hyperscalers all day long. You saw Bratin's presentation where we are not going to be shy about landing a few punches. I feel we will scale up and down the right side of digital natives. The other beautiful thing about digital natives is that sophistication is not tied to the size of the customer. You can be a very small company; you have four customers of ours outside in the customer showcase, and their spend and sophistication on the cloud is the same as any large enterprise brick-and-mortar company that you will encounter.
It is really about understanding the needs of those customers and being customer-obsessed. For example, we are working with a customer whose CEO called me and Bratin over the weekend; that CEO is texting with us directly. They are not even that big of a customer. Right? Given how scrappy and hungry we are to get these workloads with these large companies, I do not think there is any limit in terms of going up and down the stack in terms of complexity. We also feel like we have the right inside sales motion to be able to do that without having to bring a bunch of new skills into the company.
I'll just add one other thing, just from having dealt with a lot of cloud customers: if you're giving them lower cost and a similar SLA and a similar experience, they'll come. I haven't yet met a customer who said, "Hey, you know what? I'm paying too little. Let me..." The other thing, which Paddy mentioned in his answer, is that just this week we enabled multi-cloud support, where a customer can now securely run a workload on a hyperscaler or on us. What that means is it makes Larry's migration motion a lot easier, because you no longer need to do a lift and shift, lock, stock, and barrel. You can start by moving some of your workloads, the newer workloads. For us, it's really about listening to the customer, iterating as quickly as we can, and bringing value to them.
You lead straight into my follow-up. You mentioned earlier, and that was the discussion we had with the previous team as well, that you kind of need more product. You're a tech company. You need more. How much do you need to lean in on marketing and on showing the customer base again that you are a different beast? As you said earlier, over the last few years you fell down a little bit there. How's the perception in the market? What can you do there? Thank you.
Oh, great. Great question. Thank you. I was hoping someone would ask this question, because we have such an incredible developer mindshare, not to mention all the stuff that Bratin is building. We will have a chance to talk to Wade in a second. My challenge to our marketing team was, "Hey, how do we translate that?" Two things. One is we have to be able to tell the story of all the great features we are building. We are now averaging at least one deep-dive technology webinar a week. Along with that, we publish three or four articles every day. You can go look at our blog. It is just an incredible repository of technology information. The second thing is customer case studies. My challenge to the team was, "Hey, I want one customer case study every week." The team is over-delivering on that.
You can go look it up, and it is a rich repository of different use cases, different workloads, different things. We are waving our flag very, very aggressively with our existing customers. The last thing I leave you with is we have just started measuring this thing called share of voice. It is measured by a third party. As I think was shown in one of the presentations, I think Larry's, we are punching above our weight class pretty consistently. Most weeks, or at least many weeks, we are in the top three in cloud, alongside and even above many hyperscalers. We are in the top three in terms of share of voice in both cloud and AI.
I feel we are right with the big boys out there in terms of controlling and changing and dictating the narrative when it comes to cloud and AI adoption.
Great. Thank you. Jaiden Patel from JP Morgan, thanks for taking the question. To follow up on a previous question, in that NDR of 105%-107%, what's the assumption for the cohorts below scalers? And then I have a follow-up.
Matt, do you want to answer that?
Yeah. Could you repeat the question?
Yeah. The question was about the 105% to 107%. First of all, it's not a 105% to 107% NDR. It is 5 to 7 points from expansion, which includes NDR and also expansion from customers that are not part of NDR, like the AI workloads and things like that. Now, the question was the makeup of the expansion number.
Yeah. And Paddy, you hit the main points. Clearly, we're working aggressively to get the NDR for the whole business above 100, and we got very close to that. At the end of last year, we talked about how the core cloud, the traditional cloud business, was above 100 in Q4. That number needs to be in the low hundreds for you to get five to seven points. The balance of that is going to come from the AI expansion. As for the AI customers, if you said today, "Why don't you include them?" We've answered this a number of times. One, a lot of those customers aren't even a year old, so they wouldn't be in the NDR cohort anyway. Two, the majority of our early customers were training workload-oriented, and that's just more sporadic usage.
Between those two things, getting core cloud NDR into the low hundreds and getting the benefit from the more recurring revenue from inferencing and our GenAI solutions, that's how you get to the 5% to 7%.
Great. Thank you. As a follow-up, how do you think of the guide and the long-term model given the current developments in economic policies? What are you baking in around that?
Go ahead, Matt.
I think everybody in this room is probably doing a similar assessment of, well, what does this mean? I mean, it's been a couple of days. The landscape is shifting. You don't know what the implication is going to be. What I can tell you, as we've thought about it, is the majority of the focus on tariffs appears to have been in more hard goods and manufacturing. We're a digital native business ourselves. Our customers are digital native businesses. There's a lot of software. There's a lot of kind of technology, but there's not a lot of physical goods. When you think about it, they say, "Well, that's your first order and maybe second order effect, but what about components? What about servers? What about data center?" We're still going through that kind of assessment.
We're in active conversations with our leading suppliers, many of whom have manufacturing facilities in the markets in which we operate. For our U.S. data centers, we're buying gear that's primarily built in the U.S. We can do similar with a lot of the companies we buy from; they're global companies, and we can procure from in-region locations. I'd say it's way too early for us to conjecture. What I can say is that if you look at the customer base we have and you say, "Well, what happened the last time there was a disruption?" You'd say, "Well, you guys went backwards in terms of expansion." I'm like, "Well, a lot of that was us. We weren't innovating. We weren't delivering for our bigger customers. We did a price increase." Right at the same time, everybody was optimizing.
That was a lot of self-inflicted wounds. If you peeled the onion a little bit and looked at the core customers, you'd say, "You still added customers at pretty much the same rate you were doing before." The core NDR outside of the really big customers was pretty consistent. Given the diversity of our customer base and the fact that we're not concentrated in any one region, we're hopeful that it's not a huge impact on us. I can tell you, we're still evaluating, and we'll keep everybody updated as we learn more about what the potential impact would be.
Okay. Yes. Go ahead, please.
Thanks for the question. Tom Blakey with Cantor. Great presentation, by the way. Maybe double-clicking on the large cohort: you guys seem to have done a great job of funneling that down, even to the 8,000 named accounts and whatnot. I know you've only been together for a short period of time, but one of the unique parts about this story is that you could be mining an existing customer, keeping that customer from churning, and getting the double benefit of having them expand on your platform. What does that look like when you go from 630,000 to 165,000 to 8,000? Is it 100 customers that could spend $10 million? Is it 1,000? Would love to just double-click on that.
Yeah. Maybe I can start, and then, Larry, you can talk about how we selected those customers. I do not think we need to put a cap on how many customers can spend how much. I'll just give you a number: our share of wallet still has a lot of opportunity in it. Even with our larger customers, there are so many more workloads that we can be migrating to our base. Larry, within the 8,000, we have 3,000 that are our top-spending accounts. Do you want to talk about how we selected the other 5,000?
Yeah. Basically, we looked through hundreds of thousands of customers and said, "Of those customers, let's look at what attributes drive growth." You have the top 3,000, and you say, "Okay. They have certain attributes. They use so many data centers. They use these types of products. They grow at this rate or velocity. They're in these types of markets, have these types of domains." You take that, then you kind of create an anatomy, and you push that through the hundreds of thousands of customers. The ones that rise to the top have the characteristics most like those 3,000. We just said, "Hey, let's start with 5,000 and kind of back to our nail it before we scale it." As we continue to prosecute those accounts and drive expansion, we can continue to open that aperture.
We picked the 5,000 because, based on the account managers we had, we felt it was a pretty good coverage ratio. As Paddy mentioned, as we start to realize growth and expansion, we can continue to expand that.
Just to be clear, when you look at this cohort, and that's a pretty large cohort of customers, could they spend seven figures on your platform with the current existing product?
Absolutely. Yes. Yeah. We had a disclosure on that today. Yeah. Absolutely.
Maybe just double-clicking on the free cash flow margin: the longer-term target is 20% growth, and obviously you are investing to get there with the mid-teens free cash flow margin. Maybe rank-order or talk about the top drags, in terms of the investments needed, that would pull the free cash flow margin back to hit 20%-plus growth. And maybe specifically, does the free cash flow margin framework that you've set include the refinancing of the convert? Thank you.
Yeah. That's a great question. All of the numbers that we showed here were unlevered. The margin targets we're talking about, the rule of 40, are on unlevered free cash flow; it doesn't include the interest. If you think about the interest, though, when you take the leverage target that we're talking about and the growth rate at which we're performing, that'll be in the low single digits over this period of time. If you assume we're not refinancing the full $1.5 billion, then we're going to bring that down, and we're going to continue to drive leverage down. The first part of your question was around what's driving the decline from where you are today to mid-teens. It's giving ourselves the ability to increase capital for AI.
It's giving ourselves the ability to invest in R&D to accelerate the product roadmap. It's giving ourselves a little bit of opportunity to bolster the sales and marketing investments if we see demand. Basically, what we said is we think we can deliver the 18% to 20% revenue growth and only consume about that much incremental free cash flow. Now, can we tell you, well, did it come out of gross margin, or did it come out of OpEx, or did it come out of CapEx? It's a little bit early for that. We're still doing a lot of nailing before we scale. We just want to make sure we set expectations that it is going to cost us a bit, we think. As I said in my presentation, this is a plan that we think delivers 18% to 20% growth.
If we saw an opportunity to accelerate that or to get there faster and it had a good return and it had good economics, we'd make that investment. We'd clearly communicate it to folks. The plan that we're on right now, we believe we can get to 18% to 20% with only going down to mid-teens free cash flow.
Hi. Josh Baer with Morgan Stanley. Great presentation and detail. Wanted to ask one on total cost of ownership. Obviously, a very compelling piece of the value proposition, but was hoping you could unpack that. How much of that is a structural cost advantage versus a pricing decision to take lower margin? The follow-up would be, why is 30% the right level of benefit versus the hyperscalers? Why not 40% or 20%?
Yeah. I can start, and then you can talk about the actual structural things. 30% is typically what we see from our customers. There are several customers that have realized significantly more cost savings; depending on the nature of the workloads and how they use certain aspects of our platform, it can vary by workload. I think the most important thing is that Bratin had a slide which talked about the fact that there are cost savings, pure savings based on our pricing model and our packaging. The second thing is cost avoidance, based on certain things that you do not have to do with us compared to a different, more complicated cloud.
It is about half and half. The lower cost of the infrastructure is about 15 points of that 30%; the remaining part is the cost avoidance. We have many customers who get way more than 30%, like Picap, which Larry talked about; they get around 70% cost savings. That 30% is kind of a low watermark. We have lots of customers that get lots more.
I think the other part of your question, Josh: we just want to be explicit that we have not changed our pricing, right? We have changed some packaging here and there, which every company does. In terms of pricing, after the last price increase, it's not that we have changed the pricing or lowered the pricing to give more discounts to our customers. That's not how we have accomplished this. This is truly, when you look at the bundled offering of a Droplet, where you get a little bit of compute, network, and storage, there are some fundamental differences in how we package our cloud versus some of the other cloud providers.
Thanks. I just wanted to clarify the 240 basis points of sweating the assets and 110 from other optimization. What time frame is that? Is that looking through 2027?
Do you want to take that, or do you want me to take that?
I think that's a combination of the impact that we've already had in terms of increasing the useful life of the gear. It's also a view kind of into the future over the next couple of years. What I can tell you is in the 2027 guide, we haven't fully baked in the cost efficiencies that we think we can drive in gross margin. We haven't baked in the full data center optimization initiatives, and we haven't baked in a lot more in terms of the utilization improvements.
Yeah. Those numbers are, in some ways, the tip of the iceberg of what we can get.
We have time for one.
One last question, please. Yeah. Go ahead.
Great. Kingsley Crane from Cantor. On products, you have 50-60 meaningful releases per quarter. Velocity has increased 5x over the past year. It's amazing. It's clearly helping some of your largest customers expand. If we think two to three years out, the platform could look a lot different. I'm curious how you balance that innovation with ensuring that the product can stay simple.
Yeah. Great.
Great question. We have a very good process established to make sure that we pay close attention, during design, during conception, and during shipping, to certain fundamental product tenets we have about what the user experience should be. How quickly can you set something up? What does it take to set your access control rights? What does it take to allocate all of the resources you need? That is just part of the process of how we develop the products, how we review the products, and how we continue iterating on them based on customer feedback. A good validation of that is our generative AI platform: it takes half the time to get things done.
That shows that as we launch more features, we are going to continue to stay true to that tenet.
Great. Thank you, Bratin. Unfortunately, that's all the time we have for Q&A. We are going to be around. Let me just wrap this up by saying thank you, panelists. I'm going to welcome Wade Wegner, our Chief Ecosystem and Growth Officer, to give us a little bit of an idea of how the customer showcase is going to work.
Great.
Thank you, Paddy. Hello, everyone. My name is Wade Wegner, Chief Ecosystem and Growth Officer. Today, I am your ambassador to our customer showcase, which is going to take place right outside here. We are very fortunate to have four customers who are going to spend time with all of you, answering questions, talking about their business, and talking about how they are growing on DigitalOcean. First, we have Autonoma, which offers an all-in-one solution to digitize machines, processes, and the customer experience. We have NoBid, which connects publishers to advertisers through an advanced auction optimization platform. We have Picap, the leading rideshare company in Latin America, which also provides logistics. And Scribe uses AI to make process documentation effortless. Here is how it is going to work.
If you all take a look at your badge, you should have a number on your badge on either side of it. That number is going to correspond to a customer, and that customer is all set up out there in a booth. When we get started, you're going to hear a bell ring, and please proceed to that booth where you're going to get a short presentation from the customer, and you'll have an opportunity to ask some questions as well. When the bell rings again, we're going to shift clockwise to the next booth, and pretty steadily, you'll be able to hear from all of our customers. It should be a lot of fun. We really hope you enjoy talking to these customers. We actually now also have a 10-minute break before we get started. We have some food and light refreshments.
Please avail yourself of that, and we'll get started soon. Thank you so much.