All right. Good morning. I'm Sanjit Singh. I run the infrastructure software practice at Morgan Stanley. We're super excited to have the management team at Datadog. Datadog's joined the conference every year since it became public, I think even one year when you were private. So thank you once again. We have Olivier Pomel, CEO and Co-founder, and Chief Financial Officer David Obstler. Ollie, David, thank you again for joining us at the TMT Conference.
Thank you.
Thank you.
Awesome. So let's get through some disclosures and we'll dive into the story. For important disclosures, please see the Morgan Stanley Research Disclosure website at www.morganstanley.com/researchdisclosures. If you have any questions, please reach out to your Morgan Stanley sales representative.
So let's start, sort of level set, where we are in the business. Datadog had another solid year, growing the business 26%. You're at $2.7 billion in revenue, 25% operating margins, a great financial profile, and you have 30,000 customers. Ollie, I think what stood out to me about this year was this wasn't a year of easy comps per se, and yet the stability of growth really came through. What were the factors that enabled that stability of growth? Where did you think the team executed well, and where do you think there's areas of improvement?
Yes. Thank you. So, I mean, look, in general, it was a year where most of what made us successful today remained true. You know, so in particular, cloud migration is still happening. It's still happening at a good clip. You know, it never really stopped. There were some ebb and flows, during COVID, after COVID, et cetera, et cetera, but it's still there, and it's still very, very early.
We're still getting an outsized share of existing and new workloads. You know, we're getting share pretty much from everybody else in there. We're the leaders in observability, and yet we only have about 10% of that market, so there's still quite a bit of it to be had, you know, which should allow for, you know, very healthy growth rates in the future.
Another thing we did well, I think, is we kept executing well on the multi-product part of our business and consolidation, so many customers have been consolidating on us. They've been buying more products from us. We've been building more products successfully and shipping them. So we mentioned in the last earnings call, you know, we have about $3 billion in ARR: $1.25 billion in infrastructure, $750 million in logs, $750 million in APM, and $200 million in some of our other products that are also growing very, very fast, so we're very happy with that motion.
Now, in terms of what we could have done better, I think if I look back at the whole year, I think we did a pretty good job at growing the sales and the engineering teams in the second half of the year, but we got started a little bit late at the beginning of the year to scale that up, you know. And so when I look at where we are today, I wish we had more sales capacity online, and I wish we also had more engineers to build all the things we need to build.
That's great. That kind of dovetails into my question for David. When we think about the guidance, you guided Q1 to about 21%, full year 18%-19%. Margins come down a little bit versus last year. Can you give us the underlying guidance assumptions? Beyond discounting the net consumption trends on the revenue guide, are there any factors that stand out? And then on the investment side of the house.
Yeah.
What parts of the business is poised to get incremental investment in 2025?
Yeah. On the revenue side, it's gonna be a bit of a boring answer. We did what we've done every year since we've been public to start the year: we've taken the drivers, which are net retention and new logo accumulation, and discounted them, and we took the same approach this year as we did in previous years.
One thing that we've been asked about is that we've pointed out we have a small but rapidly growing AI tools customer base, and we've talked a little bit about that, and we essentially didn't assume that was going to continue to grow at the same rate as the other sectors. We took that down. We don't know what's gonna happen, but we put some risk management on that. And in terms of the investment side, I think this has been going on, as Ollie mentioned, for some time.
We said sort of towards the end of 2023 that given the long-term opportunity and the fact we had pulled back on the back end of COVID and some of the optimization, that we were gonna invest more both in go-to-market and in R&D, and we started successfully doing that in the second half of 2024 and are gonna continue that this year.
In the R&D, as I'm sure Ollie will talk more about, this is about capacity to invest in the platform and the new product areas, and in go-to-market, it is really about coverage. There's a lot of international places we don't have optimal coverage, as well as investment in channels in some specific areas like Fed, security, as well as in marketing in order to cover that around the world.
Yeah. That makes total sense. Sticking with David, I actually would love to get both of your perspectives. Sort of taking a step back, you've been sort of three years into a higher rate, more cost-conscious budget environment. What's been the storyline in terms of how has the competitive environment sort of changed? And as the sort of dust has settled, in this sort of a tighter budget environment, what would you point to for investors that Datadog is coming out of this cycle in a stronger competitive position?
I'll go first. I mean, to me, the competitive environment is not actually very different from what it's always been. We've always had some incumbents on the higher end, on the enterprise end, that tended to be more on-premise than we were, and a number of what I would call a rotating cast of new companies on the low end, you know, in between the do-it-yourself and the very low end of the market.
That's not different today. I think we're generally the leader, still in some areas behind some of the incumbents, you know. So, for example, we're still smaller than Splunk on the log side, which I think is a big opportunity for us. You know, overall, I would call the competitive environment very similar to what it's been in the past.
As I mentioned earlier, we have about 10% of the market. We're the leader. Plenty of opportunity, and that market's growing by 20% year-over-year. So we can see ourselves keep taking share over everybody else and have very substantial growth for the foreseeable future just on our core market.
Awesome.
I think, just to add, we've said it all along. The platform sale, the consolidation opportunity, the gaining of market share, it's only come into more of a bright light as you have the natural utility and productivity of having the platform with also the opportunity as you consolidate, and about half of our deals on the larger side are consolidation. You can, you know, save money and cost manage at the same time you're getting a better product.
As we think about where we are in the state of the business, $3 billion in scale, for investors who haven't or are not as familiar with the Datadog story, maybe they've heard of Datadog, looking at, as an investment opportunity maybe for the first time or revisiting the story. Ollie, what gets you excited? You're at $3 billion in scale. What are the opportunities that you're really excited about that can essentially move the needle in terms of growth at this stage of the company?
Right. Well, if you haven't invested yet, now's the right time to invest. So there's a number of things to be excited about. I would classify them in two halves. There's AI and there's the rest. The rest is probably what's going to move the needle first, you know, which is everything we're doing on top of our existing observability business and on the edges of it.
We have so much to do. Like, when we look at what we're pushing for this year, we have a lot of opportunities with our logging product, as I mentioned earlier. We're not the incumbent there. There's plenty of market to be had. We have a product called Flex Logs that is doing extremely well in the market, and that we expect to scale quite a bit.
On top of this Flex Logs product, we are starting to see an inflection from our SIEM product, our Cloud SIEM product, which we expect can also make a significant dent this year, and we think it's a great product that we're putting a lot of effort behind. We have a ton of opportunities in APM. Like, there's a long fuse in instrumenting and observing every single application out there, and we've seen steady growth there over time, and we keep investing to deliver that.
There's a real opportunity with new products in service management, and in particular with our On-Call product, for which we had extremely strong reception even before it was released in GA, and so we also expect that product to get strong uptake fairly quickly from our customers.
And we see great expansion possible in analytics and, going beyond core observability, on top of our Real User Monitoring product, you know, which is growing very fast off of a large base at more than $100 million in ARR. And again, we think that's a product we can really lean into. So a lot of directions to go into on our existing products and, obviously, in our core. And now if you switch to AI, I would break down AI into, let's say, also two halves.
The first half is helping monitor applications built around AI, and most of that is core observability. Most of that is logs, APM, metrics, infrastructure, Real User Monitoring, that sort of stuff. I would say the other half is interesting. There are opportunities in helping companies understand how the models are performing.
So we have a product for that called LLM Observability, which helps customers understand whether the LLMs are working properly and what they're delivering for the business. And we're starting to see real traction for it, which is exciting. And the other part of it, the last part, is how we can automate the whole process for our customers, and be in the business of not just observing and giving tooling for engineers, but also in giving them agents that can help them perform some of the tasks automatically, you know.
So the dream there is, instead of waking you up at night telling you to investigate and fix something, you get a text the following morning that says there was an issue, it was fixed, and you should look into it. That's a lot more attractive.
We're not there yet, but we can clearly see the path there, you know, from the technology. And that's also an exciting development for the future. And from our positioning, from the vast amounts of data we have, the fact that we sit right into the production infrastructure and that we're in the right workflows for investigating, detecting, and solving issues, we think this opportunity is ours to lose. So a few ways to get excited. That's at least what excites me.
I loved how you framed it between, like, you know, the core opportunity, which still has a lot of runway, as well as how AI can drive growth. Some of my industry conversations last couple of months, there's this lot of framing of sort of observability 1.0 versus 2.0. It's framed in different ways. But if we sort of just take 1.0 as, you know, the application that we've been monitoring for the last several years, you know, applications running in the cloud or applications built on cloud-native infrastructures, what does the next era of observability look like from the Datadog perspective?
Yeah. So the first thing I'll say is observability 1.0 is already monitoring 3.0, you know. So it's been there for a while. When we started Datadog, we were in monitoring 2.0, so we've already gone up one major version.
Version 3.0.
That's great. From where we started, you know, when I look at where the market's going, where it is today, it's interesting because it went in many of the directions we thought it would go, and we shaped the company to support that. The first thing that happened is that what used to be many different categories has really consolidated into one larger category, which is observability, and we've built towards that, and we think that's great.
The second thing that happened is that we've seen a lot of democratization of observability. It used to be something that a few system admins would do in a corner on their own, and now observability is a core part of every developer's job today, and there's broad use within companies, pretty much across workloads and across teams and across individuals, which is very exciting.
There's a third trend we've seen with observability, one that was always present but is getting even more important, which is that the value is shifting from writing the code to understanding it, running it, operating it, securing it, and understanding whether or not it's doing the right thing for the business. That trend has been there in the past because it was getting easier and easier to write code you didn't understand.
You know, you had more libraries. You could use open source, SaaS, cloud, the internet with Stack Overflow, and you name it, you know. So all of that was there, and that helped developers gain a lot more productivity over time. Today, we see that happen with the coding models. You can actually write code very quickly today, using AI assistance.
You know, it used to be that if you wanted to write a new application using a specific API, you'd spend the day reading about the API. You'd watch a few YouTube videos to learn about it. Then you'd do some trial and error, and you'd get it to work in the end. Now, instead of doing that, you can ask the model to do it, and in five minutes, you'll have a version of it that works. It's great. You get to production a lot faster.
On the other hand, that information doesn't go through your brain, and so you just have no idea of what's going on. And so the value, again, has changed from writing the code to testing it, understanding it, figuring out whether it works as you thought it would for the business, and doing everything that happens next, which is what we do.
Yeah. Is there a theme there in terms of shifting left in some sense, you know, as the value sort of moves away from code generation to operations? Any thoughts, or maybe an example, of how Datadog is making that shift?
Yeah. I think it's not so much about us shifting left. It's more about the value being more on the right side than on the left side.
Right.
So it doesn't mean we have to write the code. You know, we still think that writing the code and what developers use on a day-to-day basis is still going to be a fairly fragmented ecosystem. There's going to be lots of tooling, lots of personal preferences that come into play. Whereas the systems that power observability and management in general tend to be more platformy, you know, in that, in a company, maybe developers will use 15 different IDEs, but there will be one platform for observability. I think that's where we live. We think more value is going to be delivered that way as opposed to what happens on the coding side.
Yeah. One of the interesting things I find covering this space is just, you know, all of the different technology debates, because it's a category that's kind of defined by a fast pace of innovation. And so when you think about some of the emerging technology trends, GraphQL, for example, to query data, advances in database technology to store and process data, AI/ML approaches to analyze data, I guess the question, Ollie, is how is Datadog ensuring it remains at the forefront of the technology curve as you guys continue to move more and more upmarket?
Yeah. So it's a great question. I think there are different parts to that. The first part is just cold math, you know, which is we reinvest about 30% of our top line in R&D, and we can do that because we built a super efficient business and a very efficient go-to-market, which means we have high gross margins.
We have room for healthy margin that we can return to investors, and then we can pay for go-to-market and still reinvest 30% in R&D, which is significantly more than all of our peers. I think, you know, we invest two or three times more than our two publicly traded competitors combined in R&D. So that's one part. The second part, and that's structural to the business, is that we have a very, very broad customer base.
We serve 30,000 customers, as you said, and we also have many, many free customers, free users, on top of that, and the bottom half of those 30,000 customers only represents about 1% of our ARR. So we don't have those customers because we make a lot of money off of them.
We have those customers because they are small companies, new companies, and they take us into the future in terms of the technologies they use and how they build and what they do with it. We've mentioned a few times over the past few quarters our cohort of AI-native customers. And I know there was, you know, quite a bit of focus recently, maybe on one or two of those customers that have grown faster than the others.
But even if you look at the rest of that cohort, like, these customers are the who's who of the companies building the future of AI applications, and they're building companies in a different way, and they're building software in a different way. They use different components, and they pull us into that direction with them as they build. So that's a key part of our business.
Yeah. That's a fantastic feedback loop into the product engineering organization. One more question for you on the technology side of the house. This relates to OpenTelemetry, and for those in the audience who don't know, OpenTelemetry is an open-source standard for collecting telemetry data. It seems OpenTelemetry is getting more and more popular, and you guys have supported OpenTelemetry quite extensively.
You're a top five contributor to the project. But in terms of, you know, as OpenTelemetry gets more capable, is there a concern that it'll make it easier for customers to insource their observability capabilities? And to what degree is OpenTelemetry becoming a threat to the growth equation?
Yeah. So we've always been big believers in the data collection being open source. From day one, our agents and everything that lives on the customer infrastructure has been open sourced with a very permissive license that lets them basically do whatever they want with it. And so if there was ever a threat of customers just taking that and running it themselves instead, that's always been there.
We're super happy with the evolution of OpenTelemetry because it works really well. Customers are adopting it, and it's making it a lot easier to instrument workloads. And in the end, you know, if the end goal is to have more instrumentation in more places, more penetration of APM and all those things, I think it helps towards that goal.
I think where we had quite a bit of work to do was to make sure that the path was as easy and as straightforward for all customers, whether they were using core Datadog instrumentation from day one or they started with OpenTelemetry or they mixed the two of them. So we spent quite a bit of time working on that, but I think we got to a very good place there where we can offer everybody the best experience, whether they are 100% on our own instrumentation, 100% on OpenTelemetry, or half and half.
You mentioned in one of your previous answers, in framing out the opportunities, being underpenetrated in the log opportunity, and in our recent conversations there seems to be a re-emphasis on getting logs right. Can you talk about Datadog's evolution in terms of its log management product and your recent acquisition of Quickwit? To what extent do you think buying criteria are shifting more and more towards how to effectively manage, process, and analyze logs?
Yes. So in logs in particular, we see two drivers for customers right now to upgrade and maybe move from an incumbent or a homegrown solution to a platform like ours. The first driver is being cloudy and modern and well-integrated with everything else they do. And the second one is being cost-efficient, just because the fundamental theorem of observability in general and logs in particular is that, you know, whatever limit you set on the amount of data, any application can generate more data than that, and, you know, there's nothing you can do about it.
Mm-hmm.
So it's very important to have the right cost-effectiveness in those products. With our Flex Logs product, we're hitting those two things. Obviously, we're very cloudy and very modern, and, you know, our platform in general is very well-integrated, and customers feel good about that. But also, with Flex Logs, we give them economics that help them reduce their spend from whatever they were using before, but also scale effectively into the future.
And that's something that is working very well. We've also, as you mentioned, made an acquisition more recently of a company called Quickwit, and the goal there is a little bit different. That company had built a data store and search engine that can be deployed on-premise for extremely efficient and simple access to log data.
And our focus there is to try and bring that into the Datadog platform so we can serve customers with specific needs in regulated industries or in countries, you know, where data just can't get out and things like that, so that they can also use our platform. It hasn't been a need that has been very present, I would say, over the past five years. But we're looking at the way the world is evolving from a geopolitical perspective.
Right.
And also in the way more and more primary data might end up in log files. You know, say, for example, if you interact a lot with an LLM and you want to log all the interactions, like, those interactions will actually contain some personal information, some highly sensitive information. We think it's likely that some of the data might be held on-premises, even though the heart of an application and even a lot of the compute might be held on the cloud.
Yeah. Let's talk about prosecuting the enterprise opportunity. The enterprise business has been one of the leading growth segments in the business for the past year and a half. In terms of the enterprise go-to-market sales motion, Ollie, when you think about 2025, what's the magnitude of change? What plans do you have for the enterprise go-to-market motion? How does that compare to last year? And to what extent are partners becoming a more important piece of the puzzle?
We're investing, and David, you can speak to that if you want. Like, I know you love the topic of investment, so.
Yeah.
And yeah, I mean, I think we essentially made it clear that enterprises are underpenetrated. They're earlier in their cloud journey. They expanded less than cloud natives in the COVID period. And when you look at our penetration, yes, we have a lot of customers, but we were not really fully penetrated.
So what we're doing is, one, we're trying to expand our quota capacity in enterprise selling. And that includes all geographies, but there are many areas of the world in APJ, Latin America. A good example is we had nobody two years ago on the ground in India. We were covering it from Singapore. And now we have, you know, 30, 40, 50 people in all the functions in India. So there's a lot of, you know, coverage to be had. And then you're right.
On the channel side, in some of these markets, some of the first go-to-market motions are channel-led, in Japan and in some of the areas where currency is an issue, like Brazil. And, you know, we didn't have as much coverage in the channel. And, you know, I think in terms of both sales engineering and support and marketing, we know that we need the whole ecosystem.
So, starting at the beginning of last year, but really taking effect in the very end of last year, we began to accumulate quota capacity, and are going to continue ramping that. Now, it takes a year or two for that to become productive, but, you know, this is a sign of optimism that we think there's a big opportunity and we're investing behind it.
Yeah. To pick up on that question, one of the questions that we're getting is that as Datadog, you know, moves upmarket and prosecutes that enterprise opportunity, do the unit economics of this great, highly efficient business model that you've built over the last decade sustain going forward as you move further and further upmarket? Do we see that same level of efficiency? Granted, you're making some investments near term, but does the model prove out when you move further upmarket?
We were most likely too efficient, right? In other words, when you look at the return from enterprise investment, given our retention rates, you know, we had really, really high returns. And so we think we can grow this and, you know, have a payback in roughly a year and a half or so. And because of the land and expand, you know, it's a great return. So we do expect that sales and marketing as a percentage of our revenues will go up purposefully, but the return profile will be retained because it's so high return.
We feel good about the investment and the ROI. I mean, we were investing, and we're driving up the investment because we know from past results that we will do well with that. I will say we're not afraid of degradation of economics. I would even say that, you know, if you look at the past year or two, we've had better sales economics on the enterprise side than on the SMB side. Now, granted, that might be relative to SMB being more affected by the macro and some of the, I would say, free growth not being as free anymore. But still, we feel very good about the level of economics.
But we're pretty disciplined people. You know that, and we always watch this. So we are going to layer in enterprise investments as long as we think they're high return, and they are.
That's where cloud's most underpenetrated.
Yeah.
So that's where the growth dollars are, which makes a ton of sense. Let's spend the last five or so minutes talking about the AI opportunity and some of the debates around it. One of the debates that we hear on Datadog is that as we go into a new compute cycle, and this category has been highly sensitive to changes in compute cycles, will there need to be a new, AI-native approach to observability, given the way applications are built? And is that in any way disruptive to how Datadog is built today? How do you respond to that?
I think so. Again, the first half of it is very, very traditional observability, you know, in that when you build an AI application, it's still going to have a database and a web server, and it's still going to run on some infrastructure. And, you know, for that, you need observability to tell you how the whole application is working. Some part of it is going to be new in that, you know, a lot of the application is going to be non-deterministic and is going to be powered by an LLM model.
So, you know, if I were to give you two archetypes, like two extremes. One extreme is the code is still all owned by developers and engineers, but the machine writes it, and then it runs on a distributed system, et cetera, et cetera, and it needs to be operated.
The second archetype is the code is written by no developers. It's basically a model that is operated; the model is the app itself, and it's more standalone. I would say on those two ends, we have a good idea of what the observability picture looks like, and we think the whole market is going to be on a continuum between those two ends. On the end where the machine writes the code and the developers manage it, we think it looks a lot like traditional observability, and as I mentioned earlier, the value moves from writing the code to operating it, securing it, and running it.
On the end where the application is largely non-deterministic and is a model, not necessarily written by a developer, the job becomes understanding how that model is behaving and whether or not it is performing for the business. That's also what we do. We do that through our analytics and RUM products. We also do that with our LLM Observability product. By the way, we're starting to see some interesting uptake for some of those products. I mentioned RUM growing fast, at more than $100 million in ARR, and it's really a good leg of future growth for us. We're also starting to see real usage and real customers using our LLM Observability product.
To give you a sense of what that represents, we have customers now that are spending more than six figures on LLM Observability every year, and these are customers that are non-AI-native companies and that have a total observability bill in the mid to high six figures. That gives you a sense of the importance it's taking on and the place it can already find with customers.
I can't let you go without asking an agentic architecture question. Whenever I've talked about technology trends in the context of Datadog, you always frame it in terms of the level of complexity. When we talk about agentic architectures and agents proliferating in customers' environments, what is the implication for Datadog's business as customers build out those architectures?
We'll need to understand what the agents are doing, you know, which is a form of observability. I would say it's still a little bit early to understand exactly what the plan is going to be for most companies because the agents are still mostly not there. I mean, they're a little bit there, but not quite. I think we still have to see what's going to happen, but that's definitely a big opportunity. By the way, as a CEO, if I don't hype the agents, I think I'm going to be fired next week. You know, we really are big believers in agents. I just think the market's still a little bit early for the production observability of most of them.
With LLM Observability, we do see some customers starting to move from building and observing chatbots to more agentic workflows. But that is still largely in test more than in production. What we see more in production today is still the chatbots.
Maybe we could wrap up the conversation by getting your perspective, Ollie, on where we are in the cycle in the framework of training versus inference. Are you seeing your customers get consequential applications into production that are meaningfully impacting the business? In a crawl, walk, run framework, or whatever framework you want to use, where do you think we are in the AI application build-out cycle?
I think we're still early. So I mentioned we have some customers that are making real usage of our production-minded products, and spending real money on those. So that is happening. You know, a year ago, it was not happening. Now it is happening. So it's good. We see that. But it's still a relatively small fraction of those customers, and I think the bulk of the adoption is still ahead of us for that.
In terms of training versus inference, what's interesting now is that, in terms of where the world is going, there's less of a gulf between training and inference. In fact, it looks like a lot of the training might happen continuously, and many more companies might be doing it than just a handful of hyperscale model trainers.
And that's in great part after what we've seen from DeepSeek and the renewed attention on reinforcement learning. So I think there's definitely a future where there's not as big of a difference, and more of the workloads become production workloads, meaning ongoing concerns for companies, where we can add a lot of value.
Maybe just quickly to end: could you describe the world where Datadog's business starts to benefit from the cycle getting more mature? Is that a proliferation of enterprise applications? Just describe that world, where we are in the cycle when Datadog becomes more of a beneficiary.
I mean, look, more applications, more agents, more value created through software. That's what creates value for us, you know. We're confident that is exactly where the world is going and in many ways in a shape that is similar to what we have today, but in some ways in a shape that's going to be different. That's why we're working hard to build that up.
Awesome. With that, thank you so much, David and Ollie, for joining us at the conference again this year.
Thank you.
Awesome.
Thank you.