DigitalOcean Holdings, Inc. (DOCN)

Goldman Sachs Communacopia + Technology Conference 2024

Sep 11, 2024

Gabriela Borges
Analyst, Goldman Sachs

All right, I think we can kick it off. Good morning, and thanks for joining us at the DigitalOcean session today at the Goldman Sachs conference. I'm Gabriela Borges. I cover the stock here at Goldman, and I'm delighted to have on stage with me the DigitalOcean management team. We have Paddy Srinivasan, CEO, and Matt Steinfort, CFO. Thanks for your time today.

Paddy Srinivasan
CEO, DigitalOcean

Thank you, Gabriela. It's a pleasure to be here. Good morning.

Gabriela Borges
Analyst, Goldman Sachs

So, Paddy, I wanted to start with your impressions over the last six, seven months or so since becoming CEO. You've done a number of investor meetings, and you've also done a number of internal DigitalOcean meetings to better understand the company. I'd love if you could share with us your observations on the perception versus reality.

Paddy Srinivasan
CEO, DigitalOcean

Mm-hmm.

Gabriela Borges
Analyst, Goldman Sachs

What are some of the things that you've noticed from the outside that investors keep asking about that perhaps don't truly reflect the reality of what's going on at DigitalOcean?

Paddy Srinivasan
CEO, DigitalOcean

Yeah. Thank you. Excellent question to get us started. Good morning, everyone. Thank you for being here. The first thing I would say is a positive perception that I didn't appreciate until I got in, which is the power of our community. We have a community of professional developers that are so passionate about DigitalOcean. I knew it before I walked in, but I didn't know the extent to which they were passionate about DigitalOcean and everything we stand for. So I just wanted to give a nod to that incredible phenomenon, because not many companies can say that. I think in terms of perceptions versus reality, I want to talk about two different things.

One is this notion that DigitalOcean is a cloud that is meant only for individual developers or very small SMBs. I think that's how the company got started. Incredible foundation, and to date, our incredibly efficient self-service funnel is a testament to the brand equity that DigitalOcean has and the following it has in the community, as I was saying. While all that is true, we are in a different stage of our evolution and our lifecycle as a company, and not many people realize that we have 18,000 of what we call Scalers, which are larger customers in our portfolio, and these customers spend on average $25,000 per year with us.

So by any measure, that is a different makeup of customers than what people generally associate DigitalOcean with. And these companies are actually running very mission-critical software on us. And as I have tried to explain, a vast majority of these 18,000 are cloud-native customers in the sense that they were born on the cloud, and they are offering cloud software to their customers. So we have tons of companies that run gaming systems, multiplayer games, children's games. I was talking to a South American games provider that's exclusively on DigitalOcean and has more than twenty million monthly active users on our platform. These kinds of stories are not getting told as much as I would like. So that's one example.

We have tons of VPN providers running globally on our platform. We even have adtech providers, which are notoriously latency sensitive. So we have a lot of these mission-critical applications running on our platform. So that's myth number one. Misconception or myth number two is that we are super heavy on the small and medium business, and rightfully, analysts and investors associate us with, "Oh, the market has to improve, the macro has to improve for us to drive further growth." While that is true to a certain degree, we feel, just even looking at the 18,000 Scalers we have, which drive 57% of our revenue and are growing at a significantly faster rate than our general growth rate, these customers are also multi-cloud.

When we talk about multi-cloud, we think about Goldman Sachs or Procter & Gamble or the big, large enterprises, because they have policies and compliance and things like that that drive them towards multi-cloud. But the Scalers that we deal with are also multi-cloud, and we have a tremendous opportunity to expand our share of wallet and bring some of the workloads that they might not already be running on DigitalOcean to us, on the virtue of our products and the performance of our stack. So, will the macro help? Yes, absolutely, it'll help us drive up our growth rate, but I think there is an opportunity for us to also expand our wallet share with the existing customers we have.

So those are the two big ones that I would say really have been big positive lessons learned in the seven months I've been here.

Gabriela Borges
Analyst, Goldman Sachs

Absolutely, and during that time, you've announced several high-profile additions to your team with impressive backgrounds. So I'm thinking of Larry D'Angelo, Chief Revenue Officer, Bratin Saha, Chief Product and Technology Officer, Wade Wegner, Chief Ecosystem and Growth Officer. What was your approach to building out the bench, and what growth areas are you collectively excited about that are perhaps unlocked by some of this new talent?

Paddy Srinivasan
CEO, DigitalOcean

Sure. So I would say the executive team has been very deliberately built on the foundation of two main pillars. One is a deep understanding of who we are and what we need, and the second one is where we are going. I'll start with the product side, because the fundamental thing that we offer as a tech company is our technology. So we really wanted someone who has operated a public cloud at a massive scale.

We wanted someone who can also take us into the future with AI and really look at the unique value that DigitalOcean can provide to our customers and democratize the access to AI, which is now only for the rich and the privileged, so to speak, because of the skills that are needed and the horsepower from a GPU perspective. So, Bratin fits that bill perfectly on both dimensions. He has operated some of the largest platform businesses for other providers, and then he's also an expert in the emerging world of AI. So that was really important for us to nail.

The second thing is, once we have the product velocity, which we have picked up over the last few quarters, it is important for us to reconnect with our ecosystem, because as you know, we thrive on and take a lot of pride in having, I believe, one of the world's best product-led growth machines. So our top of funnel is entirely fed by self-service and product-led growth.

So making that investment in ecosystem acquisition and growth, being very technical in our developer marketing, and making sure that we invest the right skills and bandwidth in developer relations and developer advocacy is a very, very important part of our strategy. That's why we got Wade, who has built two of the most vibrant developer ecosystems, one for Microsoft Azure and another for Heroku. So we were very lucky to have him come take that over. And finally, with Larry D'Angelo, we have someone who is very skilled at adding sales-led growth to product-led growth motions. This is very, very important.

I mean, there are great CROs out there, but it is very rare to have someone who has very deep experience in taking a well-oiled freemium type of product-led growth motion and scaling that motion using sales-led growth, whether it is direct or indirect, using channel partnerships, but also looking at driving expansion within the family of customers we have. So that was the rationale behind it, and I'm super excited along all three dimensions. As I talked about on the earnings call, we have picked up product velocity. You will start seeing us more and more in the developer ecosystem, even more than you're used to seeing us. And number three, the sales-led growth is going to take a little while to flesh out.

We won't be rushing into finding solutions, but doing it in a way that complements our existing strength in PLG is going to be really important.

Gabriela Borges
Analyst, Goldman Sachs

Absolutely. So I want to spend a little bit more time first on the product side and then on the go-to-market side. So with product, DigitalOcean has talked about your current GPU-as-a-Service offering with Paperspace. I think the even more interesting part of the discussion is what you're doing now with fractional GPUs and your longer-term roadmap for LLM-as-a-Service. So starting with GPU-as-a-Service that you have today, how do you think about your differentiation versus some of the specialized GPU providers and some of the endpoint and inferencing specialist providers? And then maybe we can talk about roadmap in a little bit, too.

Paddy Srinivasan
CEO, DigitalOcean

Sure. Yeah, it's a great question. So from a pure GPU-as-a-Service perspective, we have multiple offerings emerging from our portfolio. The first one is bare metal access to GPUs, like many other providers offer. A couple of weeks ago, we also announced an orchestrated DOKS service, which is DigitalOcean Kubernetes for GPU-as-a-Service. That's a mouthful, but essentially what it means is that not every company has the ability to take GPUs and manage their overarching life cycle. It's not for the faint of heart. There's a lot that goes into it, and most companies are familiar with managing environments using Kubernetes.

So we said, "Okay, we will provide the same ease of use and ease of managing the life cycle using Kubernetes for accessing GPUs." So that was that offering. We also announced GPU Droplets, which is another layer of abstraction that allows companies to come and expect the same simplicity we brought to CPUs, but on GPUs. So what does that mean? It means when you instantiate a GPU instance using the Droplets concept, you not only get access to the GPUs in a very efficient manner, you also get some of the other things that you expect in the environment, like a PyTorch environment and other machine learning tools. You also get the ability to manage the whole life cycle.

All these device drivers for the GPUs are at a stage where they are kind of flaky and not very stable. So you have to manage the state, take snapshots, and there's a lot of grunt work that goes into just managing the life cycle of these machines. So customers can come in and just say, "Yeah, just give me a Kubernetes abstraction, and I'm fine with it." Or they can say, "Hey, I need a little bit more hand-holding, so I'll go with the Droplets way of doing things." And we are first in the market. I'm sure there'll be a lot of companies that will offer not just fractional access to GPUs, but also on-demand access to GPUs.

So I think for us, that combination, the one, two, three punch of fractional, on-demand, and abstracted, or virtualized access to GPUs, that's what our customers wanted. We are not doing this because we want to beat our competition. We are doing it because this is what our customers tell us that they want to make their lives simpler. So that's the one, two, three punch we believe will democratize the access to GPUs.
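
The Kubernetes tier described here can be made concrete. The sketch below is purely illustrative (the pod name, image, and field values are hypothetical, not DigitalOcean's actual product surface): on a DOKS-style cluster with a GPU node pool, orchestrated GPU access ultimately reduces to scheduling pods that request the standard `nvidia.com/gpu` device-plugin resource, so the cluster, rather than the customer, handles placement and life cycle.

```python
# Illustrative sketch: the Kubernetes-orchestrated tier boils down to pods
# that request GPUs via the standard device-plugin resource name, so a
# cluster with a GPU node pool schedules and manages them like any workload.

def gpu_pod_spec(name: str, image: str, gpus: int = 1) -> dict:
    """Build a minimal Kubernetes pod manifest requesting `gpus` NVIDIA GPUs.

    `nvidia.com/gpu` is the resource name exposed by the NVIDIA device
    plugin on GPU nodes; the scheduler only places the pod on a node with
    that many GPUs free.
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [
                {
                    "name": name,
                    "image": image,
                    "resources": {"limits": {"nvidia.com/gpu": gpus}},
                }
            ]
        },
    }

# Hypothetical fine-tuning job asking for two GPUs on one node:
spec = gpu_pod_spec("finetune-job", "pytorch/pytorch:latest", gpus=2)
```

Serialized to YAML, a manifest like this is what a customer would apply to the cluster instead of provisioning and tracking GPU machines by hand.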

Gabriela Borges
Analyst, Goldman Sachs

When you think about it, you mentioned you're targeting what customers want. At any given customer, are they aiming for a balance across the one, two, three, or is it specialized, where they will do mostly one out of the three?

Paddy Srinivasan
CEO, DigitalOcean

So I would say it depends on the customer, but our customers want all three, to be honest, because...

Gabriela Borges
Analyst, Goldman Sachs

Interesting. Okay.

Paddy Srinivasan
CEO, DigitalOcean

So this class of customers that needs access to bare metal or the raw horsepower of GPUs is typically taking a model and fine-tuning it. So they're focused on enhancing the model weights and things like that. And these customers are not necessarily looking to spend a bunch of their time on grunt work. Like, there is a famous saying in machine learning that 75%-80% of the time is spent on managing the data pipeline, right?

Gabriela Borges
Analyst, Goldman Sachs

Sure, yeah.

Paddy Srinivasan
CEO, DigitalOcean

So we've all heard that, and we don't want a repeat of that in model fine-tuning, right? So that's one class of customers. If you look at a different class of customers who say, "Yeah, I think I can go with Llama 3.1. It's great. That's good enough for me. I just need an LLM endpoint," we have another set of services that we are working on, which provide another layer of abstraction, moving up the stack to what we call a "Platform-as-a-Service" type of offering for companies that are not looking at fine-tuning, but are just taking a model and want to use it and extend it using their custom data, right?

So what those customers need is not just LLM endpoints, which they can get from many inferencing and other clouds. Once they have access to an LLM endpoint, they want to compare different models for accuracy and for total cost of ownership. Once they have that, they want to be able to inject their own data and create custom data pipelines, and then, once the LLM gets the customized data, they want to change the answers using RAG. So, building RAG capabilities on top of these LLMs. And once you have all of that, customers are not satisfied just getting an answer from the LLM. It's like, "Oh, so you're giving me an answer, why don't you just take action?

Like, book me the ticket, or take action in rebooting that machine that is about to fail, or free up memory." All of these things. So you go from just answering the question to taking action, right? So you need to be able to take the LLM and point it to an API and pass the parameters and start taking action. Once you start taking action, you go into the world of agents, right? Or even before that, you want to also put in guardrails and toxicity filters and things like that, which are very unique to your environment. Like, a bank's guardrails might be different from a retailer's guardrails when the bot is talking directly to consumers.

And once you have actions being performed by your LLM, you want that state to persist, and you go into agent workflows and things like that. So there's a ton of value our customers need, and we are working furiously to build that abstraction for our customers. So as you can imagine, we are moving up the stack to help our customers who are not as deep-pocketed in terms of machine learning skills. Like, if you have 20 machine learning engineers, you can probably do all of this. But if you only have two, how can we democratize access to these complex technologies for them? That's how we are looking at building the various offerings.
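
The progression described above, from an LLM endpoint to custom data to RAG, can be sketched in miniature. This is a toy illustration, not DigitalOcean's service: word overlap stands in for embedding similarity so the example runs with no external services, and the actual model call is omitted. All names and documents below are hypothetical, but the shape (retrieve the customer's own documents, then prepend them to the prompt) is the RAG pattern being described.

```python
# Toy sketch of the RAG pattern: retrieve the documents most relevant to a
# question from the customer's own data, then build an augmented prompt for
# an LLM endpoint. Word overlap replaces embeddings/vector search here so
# the example is self-contained.

def _words(text: str) -> set[str]:
    """Lowercase, strip basic punctuation, and split into a set of words."""
    for ch in "?.,:;!":
        text = text.replace(ch, " ")
    return set(text.lower().split())

def retrieve(question: str, documents: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the question; return the top k."""
    ranked = sorted(documents,
                    key=lambda d: len(_words(question) & _words(d)),
                    reverse=True)
    return ranked[:k]

def build_prompt(question: str, documents: list[str]) -> str:
    """Prepend the retrieved context to the question before calling a model."""
    context = "\n".join(retrieve(question, documents))
    return f"Context:\n{context}\n\nQuestion: {question}"

# Hypothetical customer documents:
docs = [
    "Droplets are resizable virtual machines with attached block storage.",
    "The refund policy allows cancellation within 30 days of purchase.",
]
prompt = build_prompt("What is your refund policy?", docs)
```

A production pipeline would swap the overlap scorer for an embedding model and a vector store, and send the augmented prompt to the LLM endpoint; guardrails and action-taking (agents) layer on after this step.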

Gabriela Borges
Analyst, Goldman Sachs

Absolutely. There's some good detail there. So, Matt, I wanted to ask you a couple of follow-on questions on this topic. So the first is, the customers that you have that are using some of the AI products that Paddy was just talking about, what does that cohort expansion look like? I know it's early, but you have many years of data on what typical customer expansions look like with DigitalOcean. How do the early AI cohorts compare to traditional app builders?

Matt Steinfort
CFO, DigitalOcean

So I'd say that typically the profile of the customer that's on our AI platform today is a little bit further along in the development of their own business than an analogous one in our core business might have been. They tend to be seed- or Series A-funded; they tend to have more funding than our typical customers, a lot of whom are bootstrapped. Again, 70% of them come from outside the U.S., so the core customers a lot of times, you know, start as individual developers, they start with a smaller business, and they grow on our platform.

The AI customers tend to come in bigger, and they spend quite a bit more, given that it's still expensive relative to just coming in and buying handfuls of Droplets. So they tend to be larger and more well-funded, and we are seeing them grow; the customers that have come in have continued to seek more capacity. So a lot of the growth that we've described in the last twelve months, even six months, is coming from existing AI customers that came in, got a small toehold, and then wanted more, with longer commitments and larger capacity.

Gabriela Borges
Analyst, Goldman Sachs

And talk to us a little bit about the associated CapEx decisions that you're making on the back end. You've talked about an incremental $30 million, essentially in CapEx this year for both AI services and then R&D for AI services. And then you've announced the Atlanta data center in Q1 2025. So how are you making these incremental CapEx decisions, and how are you balancing your footprint across perhaps some of the more leading-edge GPUs versus a broader footprint of silicon?

Matt Steinfort
CFO, DigitalOcean

Yeah. The nice thing about the strategy that Paddy and Bratin have articulated and that we're pursuing is that it's not one predicated on buying large quantities of GPUs and building massive, you know, GPU farms, where you have to make commitments on massive data center space and order, you know, hundreds of millions or billions of dollars of GPU capacity well in advance. Ours is, I'd say, much more capital efficient, and we're able to see the demand, react to the customer requirements, and order on a more regular basis rather than super lumpy. We did take down the Atlanta data center you talked about; that'll come online early next year.

That's not only to serve the AI capacity; it's also part of our long-term data center optimization strategy. So we'll move workloads from more expensive markets into that Atlanta data center as well. But we have the ability to look, you know, even just five, six months ahead and still be able to order. Like, we could decide today we wanted to add incremental GPU capacity, and probably six months from now, we'd be in a position to turn that up and make it available, with half of that being time to receive the gear and half being time to turn it up and get it configured.

Gabriela Borges
Analyst, Goldman Sachs

The other part of the conversation that's relevant here, I think, is the industry moving from training use cases to inferencing use cases. Most of your customer base today, as I understand it, they're not training models on the DigitalOcean infrastructure, but they are leveraging inferencing use cases and some of the other solutions that you provide. How do you think about the performance of the DigitalOcean network as inferencing use cases pick up? Where does your network fit versus something like the idea of on-device inferencing? And you made a really interesting comment in the last couple of weeks about how, well, actually, the latency that's already in the DigitalOcean network is incredibly low because of the need to be able to run the app seamlessly. So talk about how all of these concepts intersect as the industry moves towards more inferencing use cases.

Paddy Srinivasan
CEO, DigitalOcean

Yeah, that's a really good question, 'cause we think about it a lot. A lot of our customers are doing fine-tuning and a lot of inferencing on our platform already. And as I was saying, we have some use cases; I picked a couple of workloads to highlight and bring home the point that we already have customers on our traditional workloads that are extraordinarily latency sensitive. So gaming, obviously, doesn't need me to explain why latency is important. AdTech, again, notoriously latency sensitive. Same thing with VPN providers and so forth. So we have a cloud that gives us a lot of confidence that we'll be able to handle the latency requirements of inferencing, and then some.

The other thing that I want everyone to take away is that inferencing is not just offering an LLM endpoint. Like, a lot of the things that I mentioned, which are around building data pipelines, building knowledge bases, getting RAG right for your inferencing solution, and also many of the cloud primitives, like, basic compute, storage, network, of course, all of these things are also really important for our customers who are looking to build AI into their core cloud software. 'Cause you need all of that, plus you need an LLM endpoint. So, I think the decisions are going to be made in favor of infrastructures that offer them all of the above, including the LLM endpoint.

To me, the LLM endpoint is only one part of a bigger set of needs our customers have for serving up LLM-powered AI features in their applications. So, for example, if it is a supply chain management solution that is using an LLM-based demand forecasting module, that is only one part of the application. You still have to do all of the other things the application requires. Their customers are buying a supply chain solution; they're not buying an LLM solution.

Gabriela Borges
Analyst, Goldman Sachs

Matt, there's a pricing and gross margin component to this, too. You've actually been very forthcoming in talking about the three-year payback period on gross profit and some of the numbers that get you to that math. How do you think about the swing factors there? You've got the supply part of the equation, and then you've got the pricing part of the equation, and you've talked about how there's more tightness in the supply than there is on the cost. How do you solve for a price to get to the three-year payback? What's the biggest risk to being able to achieve that?

Matt Steinfort
CFO, DigitalOcean

Yeah, I think you hit the nail on the head. Think about it on both fronts. So on the cost side, you know, we're in an odd market condition right now, where there's massive demand and yet there's a single supplier, and pricing, and cost to us, is certainly higher than we think it will be over the long run. As you see other providers come in with comparable solutions, and as you advance through the generations of the current technology, we expect that cost curve to come down. But I think the bigger picture is the second part that you mentioned, which is pricing.

If all you're selling is straight, you know, GPU rental at a price per hour, with all those supply dynamics I described, you're certainly worried about what happens to pricing over time and whether that becomes commoditized. How quickly does the price per hour of an H100 go down when the H200s come out, and the Blackwells come out, and AMD comes in? There are a lot of concerns. But for us, as Paddy was describing, with the layers of software and the abstractions we build on top of the GPUs, we're not just leasing GPUs on a per-hour basis to our customers, which gives us the ability to do value-based pricing.

That's where we're most focused, and we believe the economics for us will be different. We'll get better margins because we'll be providing, you know, bundles of value and software layers and abstraction, and we should be able to command more than the commodity, you know, cost per hour on just a piece of the infrastructure.
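
For reference, the three-year payback on gross profit that Gabriela cites is simple arithmetic: upfront hardware spend divided by the annual gross profit it generates. A back-of-envelope sketch, with purely hypothetical figures (the $30M, $20M, and 50% below are made up for illustration, not DigitalOcean's actuals):

```python
# Back-of-envelope payback math: how many years of cumulative gross profit
# it takes to recover an upfront GPU purchase. All figures are hypothetical.

def payback_years(capex: float, annual_revenue: float, gross_margin: float) -> float:
    """Years until cumulative gross profit covers the upfront capex."""
    return capex / (annual_revenue * gross_margin)

# e.g. $30M of GPUs generating $20M/year of revenue at a 50% gross margin:
years = payback_years(capex=30e6, annual_revenue=20e6, gross_margin=0.50)
```

The two swing factors in the question map directly onto the denominator: supply tightness sets the achievable price (revenue), and value-based pricing on the software layers lifts the margin.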

Paddy Srinivasan
CEO, DigitalOcean

And just to add to it, what we mean is that at the infrastructure layer, it's dollars per hour, but at the platform layer, it's token-based pricing and other types of pricing models that will emerge. And at the application layer, no one has started doing it yet, but I would imagine it's definitely not dollars per hour. It could be price per request and things like that. So the business model differs by layer, and that will give us a couple of layers of abstraction between how our customers pay us for the value that they're getting from us, versus just looking at consuming raw GPUs.
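
The three pricing layers can be made concrete with a small illustration. Every rate below is hypothetical, chosen only to show how the billing unit changes as you move up the stack; none of these are DigitalOcean prices.

```python
# Illustrative only: the same monthly AI workload priced at each layer of
# abstraction. Infrastructure bills per GPU-hour, the platform layer per
# token, and the application layer per request. All rates are hypothetical.

GPU_HOUR_RATE = 2.50               # $/GPU-hour (hypothetical)
TOKEN_RATE = 0.50 / 1_000_000      # $/token (hypothetical: $0.50 per 1M tokens)
REQUEST_RATE = 0.002               # $/request (hypothetical)

def infra_cost(gpu_hours: float) -> float:
    return gpu_hours * GPU_HOUR_RATE

def platform_cost(tokens: int) -> float:
    return tokens * TOKEN_RATE

def app_cost(requests: int) -> float:
    return requests * REQUEST_RATE

month = {
    "infra": infra_cost(720),               # one GPU rented for a full month
    "platform": platform_cost(500_000_000), # 500M tokens served
    "app": app_cost(1_000_000),             # 1M application requests
}
```

The point of the comparison is not the totals but the unit: per-hour revenue tracks commodity hardware pricing, while per-token and per-request revenue track the value delivered by the software layers above it.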

Gabriela Borges
Analyst, Goldman Sachs

Absolutely. I want to switch gears and spend a little bit of time on the core business as well. So Matt, based on some of the disclosures you've given on your expected contributions from Paperspace and Cloudways, you can back into the core business growing, call it mid-single digits. How do you think about what normalized looks like for the core business? And maybe as part of that, you can level set us on the macro and what you're seeing in terms of willingness of customers to spend with DigitalOcean.

Matt Steinfort
CFO, DigitalOcean

Yeah. The core business is clearly not growing as fast as it had historically. The primary headwind we have right now is NDR, which is less than 100%. Historically, before COVID and the kind of software surge, it was in the, call it, low hundreds. It spiked to about 118% during that period, and then it's come down to just barely sub 100%, about 97%. The good news is the self-serve funnel that Paddy described: every day, right this minute, somebody somewhere is signing up for our service and putting in their credit card, and we didn't have to talk to them or do anything. That's an incredibly efficient model.

That'll contribute around 8% growth this year. And so that headwind of, you know, a couple percentage points of growth out of the core cohort is what's putting us in that range. But we're very optimistic that with the work that we're doing to increase product velocity and to improve the engagement with our customers, we can get that NDR back above 100%. And so that'll flip from a couple-point headwind to a couple-point tailwind, and that puts you at a, you know, low-teens growth rate in the core business. Then you couple that with Cloudways; our managed hosting business continues to grow faster than the core, so that'll add some incremental growth.

The AI business is clearly adding, you know, three-plus points of growth. If you think about the core of the business, we think there's a tremendous opportunity for us to reignite the growth. If you think of the overall cloud market growing in the low twenties, we feel like that's a great aspiration for us. We're not even close to that, so we won't talk about that. We'll talk about getting into the low teens, the mid-teens, and see, you know, what we can do to accelerate from there.
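
The moving pieces Matt describes can be stitched into a back-of-envelope model. The additive decomposition below is a simplification for illustration, not company guidance; the roughly 8-point self-serve contribution and the 97% NDR are the figures from his remarks, while the 103% scenario is hypothetical.

```python
# Illustrative decomposition of core growth: new self-serve signups contribute
# a fixed number of points, and the existing cohort adds (NDR - 100%) points.
# NDR below 100% is a headwind; above 100% it flips to a tailwind. This
# additive model is a simplification, not guidance.

def core_growth_pts(self_serve_pts: float, ndr: float) -> float:
    """Approximate core revenue growth in percentage points."""
    return self_serve_pts + (ndr - 1.0) * 100

today = core_growth_pts(self_serve_pts=8.0, ndr=0.97)     # NDR headwind
improved = core_growth_pts(self_serve_pts=8.0, ndr=1.03)  # hypothetical tailwind
```

With NDR at 97%, roughly 8 points of self-serve growth nets out to mid-single digits; flip NDR to a couple points above 100% and the same funnel yields low teens, which is the mechanic behind the commentary.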

Gabriela Borges
Analyst, Goldman Sachs

Just to be clear, none of those numbers that you just outlined assume macro getting better?

Matt Steinfort
CFO, DigitalOcean

No. No.

Gabriela Borges
Analyst, Goldman Sachs

Okay.

Matt Steinfort
CFO, DigitalOcean

On the macro, as Paddy talked about earlier, you know, people assume that the macro has to improve for us to improve. We're focused on what we can control.

Gabriela Borges
Analyst, Goldman Sachs

Yeah.

Matt Steinfort
CFO, DigitalOcean

And that's increasing adoption by offering features and capabilities that really meet the needs of our larger customers. One of the products that Paddy and the team have cranked out in even just the last couple of months has 40% adoption in our top 100 customers within, like, the first couple weeks of deployment. And that's because it's a scaling feature that they all need. So we're really excited about the impact that will have. Some of those aren't immediate revenue generators; they're not a SKU with a price associated with it. But it's a capability that enables a customer to scale and keeps them from contracting, and so you'll start to see that impact NDR, you know, as we get into next year.

Gabriela Borges
Analyst, Goldman Sachs

So we've talked about all of the factors you can control. You also have some really interesting data and data science teams that focus on the health of the existing customer base. Is there anything you could share with us on the health of the SMB and developer ecosystem, and how that's trending quarter-to-date?

Matt Steinfort
CFO, DigitalOcean

Yeah, I think it's been very consistent for us. So the churn rate, where we would see customers going away, which would indicate either, you know, them going out of business or being dissatisfied with us: that's been very, very stable. Contraction, which you could point to as an indication of our customers, you know, optimizing or seeing their own businesses go backwards, has also been very, very stable. It's still a little bit elevated, so I'd say there's still some extra pressure in the market, but it's steadily returning back to the mean, back to the long term.

I think the biggest indication for us, and this has been very consistent for the last eighteen months, is that our customers just aren't growing as fast as they were previously. The customers that grew used to grow in the mid- to high thirties, and that's in the low twenties now, and it's been very stubborn. The good news is it's very stable in the low twenties, but it just hasn't really picked up. And that is what we view as most indicative of what's going on in the market: the customers are hanging in there. A lot of our customers, as Paddy said, are digital natives.

They need our platform to provide their service, so we're not discretionary per se, but their businesses just aren't growing as fast as the collective has historically. And it's very consistent. There's not a specific region, not a specific vertical, not a specific industry that we've seen be very dissimilar to those macro trends.

Gabriela Borges
Analyst, Goldman Sachs

You mentioned there the points of growth contribution from self-service, and Paddy had talked earlier about Larry D'Angelo building out the direct sales team alongside that. What lessons have you learned from DigitalOcean's efforts to build out direct sales in the past? Certainly, that was coming at a different time in the macro, in 2022, when there was, like, a downturn in demand. But how do you think about direct sales complementing what has historically been a product-led growth motion?

Paddy Srinivasan
CEO, DigitalOcean

Yeah, I think we look at it as a tremendous upside opportunity. So it's interesting you framed the question the way you did. We are looking at every past attempt at DigitalOcean; some of it worked, some of it didn't, but we are inspecting all of those things to make sure that we are taking the right lessons forward. So I look at it as three different pillars. One is what can we do from a sales-led growth point of view to augment the new business acquisition engine? That's Pillar One. Pillar Two is what can we do from a customer success/account management perspective to expand the relationships with the Scalers that we currently have, and we have 18,000 of them.

So we are not just gonna throw bodies at it. What kind of technologies should we invest in, so that we get the right signals, so that we can contact the customer at the right time to help them expand their footprint on our platform? And the third thing is this whole notion of we've never been able to figure out the channel partnerships for DigitalOcean. So that, to me, is a net new thing. I mean, obviously, we will be smart about doing these three things. Getting the right order of operations is important. So for me, pillar number two, which is really helping our customers expand their footprint with us, is priority number one, and the other two pillars will be a close second behind it.

We are looking at investing the right type of technology, plus humans in the loop, to make sure that we are able to advance all three. Sequencing is really important, and we are taking a very mindful and thoughtful approach to making sure that we are able to articulate the unit economics of scaling this before we invest a ton of human resources. Also, as we've all seen in the last few days, there's a lot of debate in the market around the leveraging of AI for understanding customer behavior and things like that.

So it's a really good time to be building a brand-new go-to-market motion, and we have the luxury of standing on top of our incredible PLG motion.

Gabriela Borges
Analyst, Goldman Sachs

Why do you think channel hasn't worked or hasn't been a part of go-to-market in the past?

Paddy Srinivasan
CEO, DigitalOcean

I don't know if it has ever been a big focus area for us, given how efficient our self-service acquisition channel has been. So that's something that we are looking into very seriously now.

Gabriela Borges
Analyst, Goldman Sachs

Excellent. Any questions from the audience? Matt, we'll end with your favorite topic: convertible debt. The buyback has become more measured. The CapEx has gone up; we talked about that already. How are you thinking about the 0.25% convertible notes due in 2026 as that date approaches?

Matt Steinfort
CFO, DigitalOcean

I'll say again that I think we're in an incredibly strong position from a balance sheet standpoint. We dialed the buyback back to give Paddy and Bratin the opportunity to not be constrained in terms of the strategy that we employed. And fortunately, the strategy that they've articulated and we're pursuing is less capital intensive than other strategies you see in the market. That means we're sitting on a lot of cash, we're generating a lot of cash, and we can still invest in growth and maintain decent, you know, free cash flow margins. So that gives us a lot of flexibility over the next, call it, 15 months before the convert becomes current, to sort out exactly what we're gonna do.

I can tell you we're actively thinking about it and talking about it, and we have the luxury of time, so we have the ability to watch what happens to rates in the coming months and quarters and be opportunistic about when we take advantage of it. But I feel really good. You know, our target long-term leverage remains 2.5x-3x on a net levered basis, and we'll have no shortage of opportunities to use either additional converts or term loans or other traditional debt to refinance that. And we also have, you know, flexibility on the quantum we refinance; we've got some flexibility around how much we need to refinance.

Gabriela Borges
Analyst, Goldman Sachs

Very good. Please join me in thanking Paddy and Matt for their time. Thank you, gentlemen.

Paddy Srinivasan
CEO, DigitalOcean

Thank you.

Matt Steinfort
CFO, DigitalOcean

Thank you.
