Hey, everyone. Good morning. I'm Pinjalim Bora, a software analyst at J.P. Morgan who covers SMID cap. Delighted to have here with us Paddy Srinivasan, who's the CEO of DigitalOcean. Paddy, welcome to the conference.
Thank you, Pinjal. Nice to be here.
Let's start with maybe a brief introduction about yourself, and maybe say something about DigitalOcean as well. I'm sure there are some people that don't know the story properly.
Great. Thank you. It's wonderful to see everyone here bright and early. As Pinjalim introduced, I'm Paddy Srinivasan, CEO of DigitalOcean, and I've been on the job for about 90 days. Previously, I was the CEO at GoTo, also known as LogMeIn, and I've spent most of my career working at tech companies, including Microsoft, Oracle, and Amazon. So, I'm super happy to be at DigitalOcean. For those of you who are not familiar with us, we are a cloud computing platform. So much like the hyperscalers, we cater to the developer. Essentially, we call ourselves the developer cloud. So we focus on attracting and helping scale entrepreneurs and startups, essentially, going from people who want to explore, learn new technologies, and then scale to real-world applications.
So that's who we are, and happy to talk about now what we see in the future.
Yeah, great. So, since you're new in the role, in DigitalOcean, maybe talk about what attracted you to DigitalOcean in the first place. And then, since you're here now, what are the positive surprises? Did it kind of fall into the expectations that you had? And maybe talk about some of the areas that you think can improve within the company.
Sure. Sure. So what attracted me to DigitalOcean? As I said, I'm a developer at heart. I've been doing this for 30+ years. So when I looked at this opportunity, there were two things: one, where do we play, and two, how do we play? Where do we play is an area that all of you are familiar with: the public cloud market. I had a startup in the public cloud market in the early days, starting in 2008, so I've followed this really closely. It is probably the biggest technology market ever, right? And it is growing at a very robust pace.
So obviously, you have the hyperscalers, but we play in what is typically called the SMB segment of the public cloud market, which is about one third of the $350 billion+ total, according to IDC. And even in that, we have a small sliver. So it's a massive, massive market, growing very, very robustly. That, to me, was very attractive: a place where we can carve out a niche. So, talking about the second dimension, which is: how do we play? What types of competitive moats do we have? What makes us unique, and why do developers come to us? There are a few angles that make us very, very unique. Number one is our durable competitive advantage of making things super, super simple for developers.
So take any type of new workload, which, typically in the world of technology, starts out being very complex. There's an overwhelming number of choices, and it is super complicated to get started. We make that journey very simple: first, to learn; second, to develop and deploy applications; and third, to scale and operate those applications. We make it super simple. Number two is, as part of all of this, the ROI, the price to value, as well as the transparency of that price to value. As an example, even this morning, we announced a new refresh to our App Platform, which is our platform as a service.
Even there, some of the early customer feedback we got was how appreciative they are that we took something as simple as auto-scaling and made it extremely transparent in terms of cost. With any cloud provider you go to these days, the bill is almost like a telco bill from the eighties or nineties, and it's really hard to interpret those line items. We make it super simple and super transparent, and we put the power, the control, back in the hands of developers. So that's number two: simplicity, transparency, and a compelling price to value. And the third thing, which I think is our biggest weapon, is the power of our community. We have a very large, thriving community of developers that help each other.
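As a concrete illustration of the auto-scaling cost transparency described above, an App Platform app spec lets you pin the scaling bounds that drive the bill. The snippet below is a hedged sketch only: the app and service names and the thresholds are invented, and the field names follow the App Platform app-spec format as best understood, so verify them against DigitalOcean's documentation before use.

```yaml
# Hedged sketch of a DigitalOcean App Platform app spec with autoscaling.
# Names and numbers are illustrative; confirm field names against the docs.
name: sample-app
services:
  - name: web
    instance_size_slug: apps-s-1vcpu-1gb
    autoscaling:
      min_instance_count: 1   # floor: baseline bill stays at one instance's price
      max_instance_count: 3   # ceiling: cost can never exceed 3x the per-instance price
      metrics:
        cpu:
          percent: 80         # scale out when average CPU utilization crosses 80%
```

With explicit min/max instance counts and a published per-instance price, the worst-case monthly cost is a simple multiplication rather than a surprise on the bill.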
So when someone is trying to learn a new technology, or trying to deploy a complex piece of software, or figuring out, "Should I go with this version of this open source library or that version?", even before we can jump in and help the developer, our community gets in and really helps. So I think these are three very differentiating angles for DigitalOcean, and also very durable ones. It's really hard to make simple software for complex technologies. It takes a certain type of commitment to price to value and transparency for every feature we enable. And building trust with developers and scaling it to millions of developers, that is a really hard thing to do. So I think those are the things that make us unique, and that was very attractive to me.
As I step out of my first 90 days in the role, the positive surprise has been the love customers have for our platform. They just absolutely love us, and they love the simplicity. They love the fact that there is a tremendous amount of peace of mind, and one consistent piece of feedback is how easy it still is to explore our platform, get started, and start scaling their application. So that is really important. And number two is the community. Again, that has been a very positive surprise. I always knew about it, since that's how I got to know DigitalOcean, but it has been a really positive surprise how much passion there is in our community.
Yeah. So taking a step back: whenever I talk to investors, especially somebody who's new to this story, they're trying to understand, how do I characterize the growth story of DigitalOcean? Is it a new customer acquisition story, or is it more of an expansion story? Because you have landed a lot of customers at this point. So maybe talk about that. How do you characterize the growth levers going forward? Or maybe it's a mixture of both.
Yeah, new customers or expansion. I'm going to surprise you by saying both. It's really exciting to be at DigitalOcean these days, because when you look at the new customer base, there are about 25 million to 26 million professional developers in the world, and about 1 million developers are added to this pool every year. That is a massive opportunity for us, because we have this concept of learners, builders, and scalers, where most of our paid customers start by paying some really small amount just to learn and be hands-on with our platform. So that's a great opportunity for us.
Number two: a lot of our customers, especially the scalers, are tech companies that run their software application on us, and that application is the way they make money. In other words, they're independent software vendors. And for independent software vendors, even the small ones, the concept of multi-cloud is not just for the Capital Ones and the Procter & Gambles of the world. Five or eight years ago, multi-cloud was a big buzzword for CIOs, but now even the smallest of customers want to have optionality. They don't want to get boxed into a corner with a hyperscaler. So the concept of multi-cloud has started expanding. And the third thing is, I can't believe I've gone 15 minutes without talking about AI.
That is a new emerging opportunity for a platform provider like us. I think it's a really compelling way to get new customers that look like our existing customers, but who are trying our platform by walking through the front door of AI. So new customer logo acquisition is very alive and vibrant for us. When I talk about expansion, there are two ways I look at it. One is just getting our existing customers to consume more, and for that, we are...
If you have paid close attention to our product release velocity over just the last six weeks or so, it has picked up considerable steam, and that's because of the push the team is putting on, both on our platform as a service and our infrastructure as a service, to make our platform a little bit broader for our, quote, unquote, scalers, which are about 17,000 in number. They're our biggest customers, and as I said, most of them are tech companies. So as they look to expand their footprints and try new workloads, we want DigitalOcean to be the number one choice for them. And as they look at moving workloads off of a big hyperscaler, for whatever reason, we want DigitalOcean to be the preferred secondary cloud option for them.
So for all these reasons, I feel good about new customer acquisition. And from an expansion point of view, the AI customers that are walking through the front door using our platform and infrastructure services for AI quickly realize that after they get past the model training and fine-tuning stage, they have to deploy it. They need the same cloud primitives: they need compute, they need storage, they need networking, they need all of the stuff that we already provide. So there's a latent and very nascent opportunity for us to cross-sell cloud primitives to AI customers. That's why I think it's both new customers and expansion opportunities.
So, pulling that thread on expansion: scalers, for the benefit of the audience, are customers who are spending more than $500-
Mm-hmm
... with you per month, I believe. But you have something like 3% of your customer base as scalers, contributing the majority of your ARR at this point. We understand expanding product breadth is probably one way to get those people from learners to builders to scalers, the graduation of customers up the chain, right? But is there anything from a process point of view, a go-to-market point of view, that you can change, where you think there could be some improvement to drive that graduation cadence?
Absolutely. I think there's opportunity on both sides, and I think you hit both the main points. Number one, what made us great is our focus and our ability to take the 80% that most tech companies need from a public cloud platform and do it really, really well. Now, that 80% can become 85% or 90%, so we are expanding. Especially as new workloads emerge, we are expanding the breadth of our platform to make sure we are able to cater to their needs. So there's a share-of-wallet expansion possibility with scalers.
And also, as we try to graduate the builders to scalers, how do we make some of these higher-level abstractions, like platform as a service and database as a service, much, much easier for customers to consume? So I think those are two really important opportunities for us. And outside of R&D, shipping, and increasing the breadth of our offerings, as you said, there's our go-to-market motion. For the new investors that are not as familiar with DigitalOcean as others: we are incredibly product-led and growth-oriented. We have a very efficient customer acquisition engine, and I would say it's super early innings from a go-to-market point of view. What really works is a product-led growth engine. So we drive a lot of top of funnel.
Most of our top of funnel is driven by organic search. With the power of the community, we have millions of technology articles, and you won't believe how many customers have actually told me, "I started my development journey learning how to code Node.js using your tutorials." We sometimes have better tutorials and guides than the actual technology providers themselves. So that's how we start the top of the funnel. Now, from a go-to-market point of view, we have opportunities up and down the stack, so we can get a lot better at identifying. We have literally tens of thousands of developers that sign up, and even pay us, at the top of the funnel every month.
We can do a much better job of identifying the ones with the most propensity to expand on our platform and then following them through the journey of month 1 to month 3, and then month 4+, optimizing the conversion of those cohorts. And then, of course, we have a customer success program, which is fairly new, and we are working super hard to add more sophistication to it, to drive more technology and machine learning-based propensity models to identify those customers. Because we have the law of large numbers, we cannot do it manually.
We have to use technology to be able to identify and communicate with them. Our customers are developers; they don't want to get a phone call from us. So we have to be smart about using different types of technologies, both inside the product and outside the product, and through the community, to tell them about the latest and greatest of our platform and to help get them to try new things using very tasteful product-led triggers, so that they start consuming more of our platform. So there's a lot of opportunity at the intersection of product and go-to-market, which we have yet to tap.
Where can that scaler percentage go, though? I mean, there has to be some ceiling. It can't go from 3% to 100%, right? There are probably a lot of learners and builders that will never scale up to scalers, right?
Mm.
How to think about that kind of-
Yeah. Well, we have increased our scalers by 50% in just the last 24 months, without a lot of the stuff that I talked about. So I think there is a lot of potential for scalers to become a bigger part of our customer composition. But I also look at share of wallet, because I feel there is a tremendous opportunity within the scalers, where we can make it super compelling for them to use us more as their primary cloud as their workloads expand. So I look at it from both dimensions: increasing the sheer number of scalers, but also increasing the depth of penetration we have in their workloads.
Yeah. Let's switch gears a bit to macro. Looking at Q1 prints across the board on the SMB side, the demand environment is not that great overall. But for DigitalOcean, it seems like trends are stabilizing, maybe slightly improving in some of the NDR metrics. Talk about your sense of the macro environment at this point from talking to customers.
Yeah, from talking to our customers... So, do our customers look like SMBs? Yes, they do, if you just look at the number of employees: fewer than 500. But there is a big difference: a lot of our customers are tech companies. So even if they only have 80 employees, a vast majority of those employees are either building their software product or supporting their software product. So, even though a small company is a small company, and they have all of the same headwinds in terms of access to capital, liquidity, wage inflation, and all of that stuff...
Their reliance on technology is a lot more existential than that of most other small and medium businesses, which might be using technology to make a phone call or to do digital marketing. For our customers, technology is how they make their money; that technology is their business. So there is a little bit of resiliency there, but I think our customers are facing many of the same secular headwinds that other small businesses face.
Yep, understood. Let's talk about AI/ML, because I feel like that's one of the most exciting parts right now, with the Paperspace acquisition. I understand it's still very small, but it is pretty exciting. There is a little bit of skepticism when I talk to investors, though, in terms of: why does an SMB need access to raw GPUs? Why can't they use an already prepackaged application that has AI in it? So talk about that opportunity from your point of view, and maybe throw in some customer examples that would help.
Yeah. That's a great question, and I think this is where my previous nuance is really important. A lot of our scalers are tech companies, right? Are they small? Yes, but they're tech companies, and no tech company can be immune to what is happening in AI. I'll give you some examples. A lot of our customers who are tech companies do things like online gaming or ad tech. Let me use the ad tech example. I was talking to a customer three weeks ago, an ad tech company, and for them, not only do they need to embrace AI; AI can potentially just completely disrupt them. The way they operate is they have a library of 100,000+ assets.
They take an audio ad and create a video version of it. That's their... And then they do micro-targeting. With generative LLMs exploding now, the fact that they have 100,000 video assets is almost meaningless. They need to adapt and change their business, or change the architecture of their application, to do this automatically using LLMs and generative models. So it is the same business, but the way they use technology has completely changed. That's one example. We have many, many other examples of companies that do online gaming, for example: creating images and video on the fly, doing fraud detection. So when we talk about AI, most of the headlines are hogged by LLMs.
But a lot of our customers are telling us that LLMs are only the tip of the spear in terms of their AI needs and AI adoption. Most of our customers, our traditional DigitalOcean customers, are going to be AI extenders and AI consumers. So they're not going to be building the latest and greatest model, but they have a lot of appetite to go discover open source models for fraud detection or demand forecasting or natural language processing and things like that, take them, inject their own custom data to fine-tune the open source model, and run it in a cost-efficient manner, which is where DigitalOcean comes in.
So I think it is false to say that small companies won't need AI or won't be AI-hungry. Will they all be in the business of building models? No. They'll be in the business of having to fine-tune models, extend models, and leverage the output from those models. And with the Paperspace acquisition, and especially with our GPU infrastructure as a service that we announced in January, we also have companies, small startups, that are model builders, that are GPU-hungry, and that are walking through the front door of our GPU infrastructure as a service. So we do have both.
We have our traditional customers that are going to be AI extenders and consumers, and then we have the next generation of startups that are coming to us because it is just so easy to get started building new LLMs, or taking an LLM and building a new, verticalized version of it. So we have a company that is one of the leading coding assistants, not the one named Copilot from GitHub, the other one, and they are running on our platform. They're taking a couple of existing open source LLMs and creating a verticalized coding-assistant LLM that other developers can use. So I think it's very compelling for small companies.
I think, you know, I would imagine that they are going to be super aggressive adopters, both on the extension and consumption side of AI, and also in very nuanced horizontal or vertical applications using LLMs.
That is interesting, a coding assistant. I guess I have to find out the name now. But on MLOps: the thing that made me interested in the Paperspace acquisition, when I first saw it, was the Gradient platform, which I feel nobody talks about, at least in the investor community. Maybe explain what this MLOps platform offers. And I don't know if you have any customer examples at this point, but how are people using it?
Yeah. We have 15,000 customers using the Gradient platform. So let me take a step back. In January, we announced our GPU-based infrastructure-as-a-service platform. There are two variations of that, but it is basically for model builders and model trainers to get access to raw GPU power, either on a bare metal basis or through infrastructure abstractions using hypervisors, depending on the appetite and the speed and scale they want to operate at. Gradient is what I call platform-as-a-service; it's the platform we acquired from a company called Paperspace last year. The way it's different is that it provides full lifecycle support for any software developer that is building a machine learning, AI-enabled software application.
So we go from discovery to build, test, deploy, and operate: the whole lifecycle. What do they have to discover? Let's say we are a supply chain application and we want a demand forecasting module, which needs some predictive forecasting. We have hooks to model zoos like Hugging Face and others, where you can go discover the latest and greatest open source model, import it into the Gradient canvas, and start building a data pipeline using your data to modify it. AI is only as smart as the data you feed it, especially in these types of applications. So we enable a data pipeline into it.
So data prep, cleansing, all of those steps to be able to fine-tune and modify that model. And then once you do that, how do you invite other developers in your network, build collaborative notebooks, and things like that? And then you start deploying it. You look at the accuracy, you tune some parameters, you change the weights, you do all of that, and then you finally publish it and operate it. So Gradient provides a platform-as-a-service abstraction to be able to do all of that in a nice, elegant manner. That's the power of Gradient. Those customers are typically not trying to invent a new model.
They are taking a model and extending it, fine-tuning it, consuming those models, and operating those models on DigitalOcean's public cloud. So that's the difference between the two. As I said, we have 15,000 customers on Gradient, and we are adding a lot of new capabilities to it. Think of it as our WYSIWYG model-building and fine-tuning service. And as we go into the future, as these models get deployed, we will support a lot of inferencing using Gradient. So you have training and then inferencing; inferencing is when you deploy a model into production and use its outputs to feed into your application.
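The fine-tuning idea described above (start from a model trained elsewhere, inject your own data, and nudge the weights until it fits your use case) can be sketched in a few lines. This is not Gradient's actual API, just a conceptual illustration using a toy linear model and plain gradient descent; all names and numbers below are invented.

```python
import numpy as np

# Conceptual sketch of "fine-tuning": start from pretrained weights and
# adjust them with a small amount of custom data. Toy linear model only;
# this is NOT Gradient's API.
rng = np.random.default_rng(0)

# "Pretrained" model: weights learned elsewhere on generic data.
pretrained_w = np.array([1.0, -2.0, 0.5])

# The customer's own data follows a slightly different relationship.
true_w = np.array([1.2, -1.8, 0.7])
X = rng.normal(size=(200, 3))
y = X @ true_w + rng.normal(scale=0.05, size=200)

def mse(w):
    """Mean squared error of the linear model on the custom data."""
    return float(np.mean((X @ w - y) ** 2))

# Fine-tune: a few gradient-descent steps starting from pretrained weights,
# rather than training a model from scratch.
w = pretrained_w.copy()
loss_before = mse(w)
lr = 0.1
for _ in range(100):
    grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of MSE w.r.t. weights
    w -= lr * grad
loss_after = mse(w)

print(f"loss before fine-tuning: {loss_before:.4f}")
print(f"loss after fine-tuning:  {loss_after:.4f}")
```

The same shape of workflow (load pretrained weights, feed in a prepared data pipeline, run a short optimization, then deploy the adjusted model for inference) is what an MLOps platform automates at real scale.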
Yeah. Is that a big differentiator for you versus, like, the Lambda Labs and the CoreWeaves of the world?
See, the Lambda Labs and CoreWeaves of the world provide infrastructure as a service, right? They even have super bare metal offerings, and their customers are the big model builders. They need raw horsepower, and they need bare metal performance to move very large volumes of data between different parts of the GPU fabric. We also have that: we have a bare metal offering, which many of our model-building customers take advantage of. But Gradient is totally different, right? It is not for someone who needs raw access to GPUs.
It is for someone who is early in their journey, which is the majority of our 630,000 customers: they want to take an already existing model, understand it, collaboratively build on it, inject their own data, and do certain things at an application abstraction layer, versus getting raw access to GPUs.
Yep, understood. So you are seeing good traction on the AI side as well, right? It's small, I understand, but I think the stat you gave was 67% month-over-month growth in GPU hours. But you're not increasing the CapEx needs at this point. Two or three quarters ago, you said $50 million, and you're still holding to that $50 million for AI/ML CapEx. Why not go all in if you are seeing a good amount of demand?
Yeah, we are seeing good green shoots, and we want to take a very calibrated approach to this. We don't want to go after someone else's strategy. We want to come up with our own strategy, which stays true to our ICP, our target customer: the developer at a tech company that is trying to make money selling software. So we will stay very true to our core developer, and we are starting to see green shoots in multiple areas. And we want to carefully calibrate our investments to pursue the right growth, in line with our core target customer. There are three layers in which we are looking at the green shoots. Number one, what does the customer look like for AI?
Do they look like the customers that we are used to seeing? Can we predictably acquire them using the product-led growth motion that we have? How do the unit economics look for that customer? Number two is, what are the needs of that customer, both on a platform side as well as on the infrastructure side? And number three is, can we start accelerating our development of Gradient and also our infrastructure solutions to satisfy those needs? And we are looking for green shoots on all those three pillars. And as we start seeing more durable green shoots on those, we will surely look at ways by which we can accelerate our growth story while staying true to our strategy to serve the customers that we think are very sticky and very loyal to our platform.
Yeah. We have less than five minutes left. I want to see if anybody has questions.
Yeah, go ahead.
I think there's a mic.
Yes.
Can you describe your GPU infrastructure? How much do you have? Is it mostly A100s, or are you updated with H100s? And what are your CapEx plans to build, you know, Paperspace if GPUs are so important as part of the platform as a service offering? Thanks.
Yeah. The question was around our footprint in terms of the hardware, as well as our future plans. So yes, we have all of the above. We have H100s, we have A100s, and the mix changes almost on a daily basis. We are still fleshing out our capacity, and we have a good diversity of these hardware platforms. We are looking at the changing landscape, we are looking at all of the different vendors and the needs of our customers, and we will keep matching them. So I don't want to go into the exact numbers of one SKU versus another, because they're constantly evolving. But we are very competitive in what we are offering to our existing customers.
There's also a difference between model training and model inferencing in terms of whether the customer needs an H100 or an A100 or even some of the other series. You should expect us to follow the trends of the industry and also keep our eyes and ears wide open to some of the emerging architectures and emerging hardware vendors in this space as well.
You know, obviously, you have potentially large customers that might move to hyperscalers, while at the same time, some of those hyperscaler customers might move to you as a multi-cloud solution. Can you talk about how that yin and yang has played out up to now and how that might change over time?
Yeah, great question. So the question is around the movement of customers from us to hyperscalers, and from hyperscalers to us, and how I foresee that playing out in the future. I have seldom seen a customer move everything from one cloud to another. Right now, almost every tech company in the world, whether big or small, is in a multi-cloud environment, and I think your cloud footprint is only going to get more and more dispersed. I think it is going to be super democratized, and we see a lot of our customers adopt similar strategies.
So someone might be 80%-90% on a hyperscaler today and 10% on other clouds, and in a few years, that'll be 60% on the hyperscaler and 40% outside of the hyperscalers. And I think that is for a number of reasons, including transparency, including technology lock-in, and many other competitive concerns the companies have. So we think we are just in a fantastic position to take advantage of this because most of our customers like us for our simplicity and the community support we provide them. And we are quickly emerging as the alternate cloud because we have the size and the scale and the foundation.
We have 630,000 customers, so it's not a platform where you have to worry about the durability and the performance of the company or the platform. So we feel very optimistic. Going back to Pinjalim's original question around share of wallet, we feel very confident that we can ride this secular dispersion of cloud workloads across the various clouds to our advantage.
Anybody else? Yes.
Quick one. Within scalers, is the rate of churn pretty homogeneous among the heavy users in the scalers and the lighter users in the scalers?
The question is around the churn rate of these companies within the scalers. I can't comment on their exact churn rate, but when we look at our customers that are coming in, they don't typically churn 100% from a hyperscaler, and the same is true for companies that might leave us or one of the other clouds to start adopting hyperscalers. It is the migration of new workloads that go from one cloud to another.
So it is hard to say if there are any strong attributes of churn one way or the other, but I feel multi-cloud is becoming a thing even in smaller companies as they look for elbow room and optionality.
Okay, I think we are out of time. Thank you so much, Paddy.
Thank you, Pinjalim. Yeah.