Okay, good afternoon, everyone. I am Mark Murphy, Head of Software Research with J.P. Morgan. Great pleasure to be here today with Olivier Pomel, CEO and co-founder of Datadog. Olivier, I just want to say thank you for taking the time to be with us again, and it's great to have you here at the conference.
Thank you. Great to be here in Boston.
It's interesting when you run through and look at the stats. Datadog is one of only four enterprise software companies with at least $3 billion in revenue and growing mid-20s plus. The other three are Palantir, CrowdStrike we were just talking about, and Snowflake. Clearly something very rare is happening at Datadog. Can you explain in layman's terms what is the problem that you're out there solving for customers, and what do you think is the phenomenon that's fueling that level of growth and scale?
Yeah. As a reminder, we do observability and security for cloud environments, which means we sell primarily to engineering teams, and we help them understand whether their applications, their systems, their AI, all of that is working properly, is performing, is doing what it's supposed to do for the business, and is secure. The mega trends that make us successful are digital transformation, cloud migration, and now AI transformation. Those are the specific trends. If you zoom out, the problem we solve, and how we keep delivering more value to our customers, is that we help them make sense of all of the complexity of their systems and their applications. If you replay what happened over the past 50 or 60 years in technology, or in software in particular, there's been a dramatic increase in productivity.
It's been easier and easier to write applications. There have been more and more off-the-shelf components you could combine, whether that's software libraries, cloud components, SaaS, advanced languages, and now AI models. What you see happening pretty much everywhere is an escalation of complexity, a complete explosion of complexity, and the humans are having trouble keeping up with it. That's what we help them with.
Olivier, you have over 30,000 customers. They are using Datadog to observe what is happening all across their own hyperscaler environments and basically to keep it running. It gives you quite the vantage point to comment on and really try to understand the trends. Looking back, interestingly, in Q4, while the stock market was rising, it was a very disappointing quarter across all four of the top hyperscalers. All four of them either missed or guided below. After that, you had the threat of a trade war coming in, and the market was sliding. Now in Q1, the hyperscaler results actually felt a little more stable. When you look specifically at Azure, it accelerated and surprised quite positively. How are you reading the tea leaves on cloud activity for 2025 amidst a potential trade war?
Certainly, the headlines are changing every day.
Yeah. You know, we try not to look too closely at what happens quarter to quarter to the major cloud providers. One reason for that, at least in relation to what we do at Datadog, is that a lot of those fluctuations in revenue are tied to the supply of GPUs, for example, and things that are, I would say, somewhat decorrelated from market demand and growth trends. You will see all of that even out in the longer term, but quarter to quarter, it is quite hard to make sense of it. From our vantage point, though, what we see is that cloud migration is alive and well. It can accelerate a little bit at times, slow down a little bit at times, but it remains around a certain trend line, and we expect it to continue for the very long term.
We also expect it to accelerate and have a longer runway thanks to the AI transition that is also happening now. Digital transformation and cloud migrations are prerequisites to AI transformation. I think all of our customers, and we can talk more about that, but all of our customers, whether they're AI natives or traditional enterprises, all of them realize that.
Okay. Core trends: pretty healthy and resilient. You've been consistent on that on stage here, I think, over the course of several years. Thinking back to March and April, we did hear some feedback that companies in the EU, and actually in Canada, were becoming a little hesitant to put more data into a US cloud provider: Amazon, Azure, Google, et cetera. There was this comment that they were realizing they need an Airbus of the cloud, a local cloud provider, to maintain independence. Do you see anything tangible happening there? Do you think the bark is louder than the bite?
I mean, we do hear that feedback. The last few times I've been in Europe, I've definitely heard the intent of owning more of the data, hosting more of the data locally, giving more business to local players as opposed to relying too much on global players, and U.S. players in particular. That being said, we do not actually see much of an impact at this point, because there are no viable options outside of the major cloud providers. If you go one step down in technology, it is quite a bit less performant and quite a bit more expensive. Even if you have the intent of giving more business locally, I do not think it makes sense to go for the alternatives.
Now, if you look at the way the world can evolve over the next few years, I do think there's going to be more of a motion to at least host more of the data locally and have more local governance. As far as we're concerned, I mean, look, first of all, we'll go where customers go. If they want to run data a certain way, we will be there for them. I also think it creates an opportunity for a company like ours. Hosting data in many different geographies, jurisdictions, and having a lot of different residency laws to comply with is a huge headache, it's a big problem. I think most companies will need help, and they'll need our help to manage that. I think long term, it's an opportunity.
It's more talk than action. You do think there could be a trend line there, multi-year, but net-net opportunity for Datadog.
Yeah. I think the real battlefield, so to speak, in the short term is going to be AI and the ownership and the control over the AI models. I think that's going to drive the decisions that are being made for the rest of the data centers. To put it another way, if what you really want is to be able to build and host an AI model, and if for doing that, the only available option you have is the existing hyperscalers, you'll go with the existing hyperscalers. You will not wait 10 years to do it so that you can build your own local equivalent before you can train AI models.
Okay. Very helpful perspective. I want to think back for a moment, Olivier, because the mood out there was very different three or four weeks ago. Coming into this earnings season, we were saying that investors were too pessimistic, and we were emphasizing that the hard data was not changing as much as the soft data. The mood had changed, but the activity had not really changed. For Datadog, though, the Q1 results were very solid. What stood out was the bookings health. Backlog growth, or cRPO, actually accelerated noticeably, to 30%. Then eight-figure deals: you had done one a year ago in Q1, and you did 11 this time.
What do you think drove strength during Q1, and why did you have so many customers booking that much business given the kind of environment we've been in?
Yeah. I mean, look, I think there are really two reasons. The first one is, again, we're still early in cloud migration, and cloud migration is going well. One of the stats we disclosed was that only 45% of the Fortune 500 are customers today, and the median ARR among those Fortune 500 customers is less than $500,000 a year, which tells you that there's tons of growth to be had with all those customers. They're still early in their migration in general. For them as companies, their cloud spend is a relatively small part not only of their IT budget, but also of their top line, meaning that's the part they're going to invest in. That's the part that's transformative. That's not the part that is at risk if they face issues in the short term.
That's number one, a very healthy market. The second reason we did well is that we've invested over the past year, especially in the second half of last year, in building up our sales capacity. That sales capacity is coming online, and we see great ROI and great productivity there. Even though we're a leader in observability, I think there were updated Gartner numbers this week on the market share. We're also taking share. We're growing faster than the rest of the industry.
Okay. Cloud migration's doing well. The sales capacity kicked in. You're structurally gaining share. We had shown data, again, coming into this earnings season, Olivier, that the current environment was actually looking less severe than what we saw during the COVID lockdowns, and less severe than the onset of what we call the software recession, which was the second half of 2022. We thought investors were expecting this environment to be worse than those prior cycles. Can you compare and contrast what you're seeing now? Because you see the day-to-day consumption, but you also see the pipeline looking forward.
Yeah. I mean, if you look at what happened during COVID, at least seen from our business, it was actually fine, because that was the explosion of online, et cetera. What was trickier for us was the end of COVID and the flattening of the demand from all of the cloud-native companies, basically, which were the ones that were spending big, that were done with their cloud migration, that were fully at scale in the cloud, and that tried to save as much as they could in as short a time as possible. That was fairly painful, and you saw that in all of the numbers we released at the time. Today, though, if you look at the current situation, the companies that are growing fastest, the ones that have taken the place of the cloud natives, are the AI natives, and they're accelerating, and they're still fairly early in their runway.
The bulk of our business, the bulk of the demand we see is these larger enterprises that are still fairly early and that, as I said before, only spend a small fraction of their OpEx and an even smaller fraction of their top line on the cloud, and that's really what they're investing in. From where we stand, we clearly do not see the same kind of pressure. Obviously, if things take a turn for the much worse, like everybody is going to try and save money, it is going to be more difficult for everyone at every single level. Today, in the numbers we have and what we see in consumption or in what we see in the booking size or the willingness of customers to do deals, we do not see any impact.
Have you tried to look at that specifically in the industries that are most heavily tariff-impacted? Because, to be fair, we had a wave of pre-announcements from airlines. There were a bunch of retailers that had problems. There were automakers having problems. Is any of that looking like the canary in the coal mine right now in the data?
We do not, but again, their spend with us is fairly small at this point. We actually had a great quarter in Q1 with traditional companies, and Q4 was great for them as well. We mentioned on an earnings call that one of our best new logo deals was a car manufacturer. We actually signed two car manufacturers that same day, I remember. We've signed a few airlines over the last few quarters, and those are growing nicely. Again, the spend we see on us, the spend that is growing, is part of the transformational investment. It is not part of the much larger carrying costs they have for their supply chain, their factories, their operations, their aircraft, which I think is where they are trying to save money.
Okay. So it's more of an insulated pocket for you. I want to talk for a moment about the AI-native trend. Olivier, Datadog has clearly stood out for developing one of the strongest AI tailwinds across the entire software landscape. The AI natives reached 8.5% of your ARR in Q1. Actually, Microsoft and Datadog are the two companies quantifying a really substantial tailwind at this point. Can you help us understand who those companies driving AI for you are, and why you think Datadog is so linked to it?
Yeah. I mean, it's pretty much the new incarnation of the cloud natives. I think if you started a company in the past two, three years, you're very likely an AI native. If AI is not part of your pitch, you probably cannot raise money, is my guess. That's mostly newer companies. We do have revenue concentration in that cohort of customers. We have one customer that's now our largest customer that is meaningfully larger than the others in that cohort. We also see a diversity of emerging winners in that cohort. We now have more than 10 companies that are AI native that are over $1 million in ARR for us. Those are growing both as businesses and also in terms of their consumption and their consumption of the cloud in general.
When you look at the makeup of those companies and what it is they do, they cover the gamut of what you need to build in AI. There are infrastructure companies for AI. There are model builders. There are agent companies, whether they are coding agents or legal agents or other kinds of business agents. There are also various applications that do not necessarily build models themselves, but that are built on top of these models and that generate value based on that for consumers, for example.
If we drill down into that and look at how they're using it, I think Datadog isn't really getting involved so much on the training side, right, where they're building the models. You are involved in the inferencing stage. In other words, the product reaches commercialization, and then you're getting involved. Inferencing feels like it's at a much earlier stage to us, and it's probably going to have more durable, more explosive growth, because you just look around and see how many models are still being built. Would you agree with that?
Yes. I mean, to level set, a lot of the training today is still more of a research activity, and it's largely one-off and homegrown. I would say a smaller fraction of companies are doing that at scale. Inferencing is where the action is. That's where you actually have to serve customers, scale with the demand, and provide value. Typically, even when you are a model builder and you ship a model and you have customers connect to it, that model just doesn't live on its own. There are other layers on top of it. There are databases behind it, authentication systems, firewalls, everything you would find in a traditional application that needs to be monitored with it. Also, as these models and AI companies become more and more sophisticated, the models don't operate in a vacuum anymore.
Many of these models get smarter and smarter by becoming agents and by using various tools. Those tools need to be run as well. If I take the example of what we built at Datadog, we're building agents to automate a lot of the work that SREs and engineers are doing. As part of that, there are models, but then those models run queries. They ask for data. They run automation scripts. All of these different things are applications that need to run. What we see is as more and more AI gets adopted and these applications grow, there's a bigger diversity of components that need to be monitored, and that's exactly what we do.
Okay. It has been an interesting journey, Olivier. For the last year or two, when I would ask institutional investors what they thought of various AI products or copilots, a lot of people would shrug their shoulders and seem a little unimpressed with what was out there. We started commenting recently that the killer AI product for the buy side has finally arrived. It is here. That product is called Deep Research. The response we get is very different when we ask about that. People say that it is amazing. It will take a project, divide it into subtasks, and write Python code for you. Basically, it saves people a ton of time. What do you think the advent of these reasoning models is going to do for growth of inferencing?
I mean, we think we'll see acceleration of the growth there. I mean, you're right that the reasoning models and the improvements of models and their ability to use tools really helps deliver more value. We've seen that internally. We see that with our customers that are using those products, and we see that with our customers that are producing those products. We think there's going to be, as I said, many more diverse applications to monitor with these kinds of reasoning models.
They have a different level of complexity. They have a different level of compute intensiveness. Sometimes there are eight GPUs clocking at once. Does that make it more important, or more of a challenge, to try to observe and monitor those environments?
I would say it depends. If you're a model builder, maybe you've built a lot of that technology already. You build the model, so you understand exactly how it runs, and maybe you build some other technology to understand what happens within the model. But even if you're a model builder, if your product is, say, an agent that crawls the web and uses a browser to simulate your actions and book flights for you (you've seen those products and agents out there, and they're getting better every day), you will need to run infrastructure that is crawling the web. You will need to run infrastructure that is running these agents, or these browsers, in sandboxes, recording them, and storing the images.
You're going to run a very, very sophisticated and diverse software stack. At the end of the day, maybe 5%, 10%, 20% of your compute is going to be the model itself, but the rest is going to be the applications that are there to help the model, to feed it with information, and to help it do its job.
One of the big hurdles that these LLM providers will have is dealing with the bias in the models, the hallucination in the models. They're trying to deal with the drift in the models. It has come to our attention that Datadog can help them with that. Can you help us understand what are the mechanics there? What does that value proposition look like?
Yes. We have a product for that: LLM Observability. The questions that product answers start with the basic stuff: Is my model up? Is it working? Is it fast? How much is it costing me? Those are the very basics. Beyond that, you can ask: Is my model correct? Is my model safe? Is it leaking data? Is it saying what it is not supposed to say? Is it saying what I am expecting it to say? The last step, which is even harder, and which I think we are still building, is: Is my model doing what it is supposed to do for the business?
In other words, if I have end users that interact with a bot, for example, or with a feature that involves some open-ended thinking, what are they doing after that? Are they buying more? Are they staying longer? Basically, are they doing what I want them to do with the system? We are doing all of that with the current iteration of those models, and we expect those models to evolve a lot. I mean, we already see a move from our customers using these models in chatbots to customers using them inside agents that are always running and do not necessarily require a human to prompt them. We are seeing an evolution there already.
If you zoom out even further and look at what might happen in the future, if more and more of the application is not coded but is emergent and stochastic, this model that is somewhat unpredictable in some ways, we see a ton of value to be provided by observability, because the value moves from initially packaging it into a model to understanding what it's actually doing every day, in real situations, with real users, and how it is changing over time.
Okay. That sounds like it would play into the AI tailwind that you have had, which you disclose as coming from AI natives. I think it is the big model builders, et cetera. One of the big questions is, when do we think enterprise adoption of AI is going to start kicking in? How do we want to try to forecast it? In other words, when do you think a big bank, a big retailer, a big pharmaceutical company is going to be done training a model and is going to actually be deploying it and then therefore driving revenue for Datadog?
I think the best way to think about it is to look at what the cloud natives, or rather the AI natives, are doing, and see that as the future of what the rest of the market and the big enterprises are going to do. To caricature it, you can think of three steps in AI maturity. The first step is you test applications with third-party models. The second step is you scale those applications with third-party models. The last step is you keep scaling these applications, but now with some homegrown models. If you look at the AI-native companies, many of them are between step two and step three now.
We see a lot of companies that started by building on top of third-party models that are reaching some market fit, that are growing very fast, and that are trying to or starting to augment these third-party models with homegrown models and maybe even replace some of those models with homegrown models in the end. When you look at larger enterprises, we are between step one and two right now. They are between testing and, in some cases, they are starting to scale some of those applications. That is where we are. When you look at the incredible growth of our AI native cohort, we see that really as a sign of the future demand we are going to see from those enterprise customers.
What's happening with AI natives will inevitably trickle its way out into the broader landscape of enterprise.
Yes. It's very similar. We've seen that movie before with cloud migration.
For sure.
When we started the company, there was absolutely no interest from enterprises in the cloud. I remember the first time I pitched a bank you might know. I was, I think, a lot faster than you.
I was going to say it sounds familiar.
That tune changed pretty quickly. It came to be understood as not only viable for the larger enterprises, but also a big competitive advantage and a true way to transform. With AI, I think everybody understood much faster that it was going to be a competitive advantage. There are still questions about the safety of it and how fast the transformation can happen, but to us, there is no doubt that the larger enterprises are going to follow in the footsteps of the AI natives.
Okay. AI, Olivier, is also impacting code writing itself very rapidly. You may have seen the CEO of Anthropic recently said that in 12 to 18 months, 100% of all code could be written by AI. I'm sure there's a bit of hyperbole in this, as always. Can you speak to how that trend might play out for Datadog? I think in theory, more code being written more rapidly, more applications being deployed, there's just more out there that needs monitoring.
Yeah. I mean, that's right. I would say the opportunity is even bigger, because when you think of the whole continuum of delivering value and delivering applications, right now most of the time is still spent coding and conceiving the applications. Less time is spent bringing them to production and making sure they work right. As more and more code can be written faster, and without necessarily the intervention of humans, we have these situations where the humans have all these suggestions, all these lists of things the machines have produced, and they're the ones who need to validate it and make sure it actually works, is secure, and everything else. We think we can do that, and we think it will become even more valuable.
What is truly valuable in the end is understanding whether that code actually does what it's supposed to do. Is it helping the business? Is it safe? Is it running? How is it changing over time? What happens when other components that interact with it change over time? How does it behave in production environments? We think it's a huge opportunity for us.
Is AI helping Datadog itself? Is it helping you write code faster? Are there any other AI efficiencies? Is AI handling support tickets for you?
I'll give you just a quick example on that, to show the acceleration. When we first started adopting copilots, for coding specifically, it took us more than a year to get the whole team to adopt them. The reason is that they were fairly helpful in a number of cases, but also fairly disappointing in a number of others. As a software company that builds a lot of low-level software, databases, optimization systems, and things that are, I would call, hard engineering problems, we have engineers who are quick to dismiss output that is okay but not great. The attitude was, yes, it gives me something, but it's not great; I'm not going to use it.
Now, fast forward a year later, when we started adopting coding agents: it took us just a couple of months to have pretty much the whole team adopt them. The reason is that they're that much better. Everybody from the enthusiasts to the skeptics sees the value and started adopting them much faster. As part of that, we see more and more of the code being written, or at least influenced, by AI. We think that progression is not going to stop.
How about the progression of DeepSeek? When DeepSeek kind of dropped onto the landscape, which was back in January, we had hosted a large investor call. We had a contact saying it's going to reduce the cost of inferencing by 90%. I think one of the questions is, does that cause a flywheel? If the cost of building an AI model comes down, then are you just going to have a lot more AI models coming out into the marketplace? Do you see any lasting impact from it?
I mean, we see much more enthusiasm for the models in general. I think there are two main impacts. The first one is, yes, if the cost is down by 90%, it means you're going to do 10 times more of it. Some things were impractical before, because the AI model was too expensive to run, and if you wanted a good chance of being right, you needed to run it 100 times. Now you can do that; before, you could not. That's one of the effects. The second effect we've seen is that it really was a wake-up call for many companies that had started to believe that you needed $10 billion in investment and 200 researchers to build differentiated AI models. It turns out that you don't.
It turns out that many companies in their domain can have an impact, can innovate, can build state-of-the-art models. We have seen many, many more companies start investing and start building those models. I think as a result, we probably are looking at a future where it is less likely that we will see the AI innovation concentrated in one or two players and that there will be a more robust ecosystem. There will still be leaders. There will still be dominating companies that will capture large parts of markets where massive investments can be brought to bear against a variety of use cases. I think we will also see a lot more specialized vendors and local vendors that will be able to innovate there.
Olivier, in the remaining several minutes that we have, I do want to ask you about how you view investments philosophically, especially into headcount. We publish statistics every quarter where we try to look at hiring trends across the software landscape. Basically, for the last two or three years, it's been very sluggish. After I got off the stage with Microsoft today, Microsoft announced a 3% layoff. It feels a little different when you have growth companies that are very healthy and very strong trimming out some of the headcount. Datadog, more than any other company, feels like it's investing to win. You took headcount up 25% last year. What are you seeing differently with respect to this overall investment cadence?
I mean, we see that we're early, and there's so much white space, whether that's on the product side or on the demand side and the market coverage, that we keep building the engineering teams and we keep building the sales capacity on the go-to-market side. We're constrained by capacity in both situations. I would say the only limit we've given ourselves is that we invest around 30% of our top line in engineering, and we keep that going. If we get more efficiency with AI, we'll probably keep investing at that level, because we'll be able to produce more and we'll be less constrained on capacity on that side. I think that's the equation.
In the last minute that we've got here: we've always felt that the best software companies out there, the ones that you want to align with, are going to consolidate point products and put them on a single platform. The key thing to look for is that they do it organically, right? They're going to be builders rather than acquirers. We see that very clearly. One of our contacts said there's growth in every Datadog contract that I see. Culturally, how do you think about preserving this level of organic innovation and avoiding the pitfall of becoming the big slow-moving company?
Yeah. I mean, look, again, we're building a lot. We have a very humble attitude in customer conversations: whenever we talk to customers, we're there to listen to them tell us what the problems are, what works, and what does not. We're not there to tell them how the world should work. That gives us a lot of insight into what we need to build. The other thing we do, as a company, is be fairly disciplined about understanding what's valuable and what's not. We do that by having fairly transparent pricing in terms of what we charge for and what we do not. We do not do heavy bundling. We do not keep adding features to the same stuff, whether they work or not. We try to have very clean signals to keep ourselves honest, so we know what's valuable and what's not.
We want to keep that going as long as we can.
Great to see the discipline. Thank you for keeping the internet running for all of us. Thank you so much for joining us again, Olivier.
All right. Thank you.
I appreciate it.