Datadog, Inc. (DDOG)

Investor Day 2026

Feb 12, 2026

Yuka Broderick
Head of Investor Relations, Datadog

Good afternoon, everyone. My name is Yuka Broderick, and I lead Investor Relations here at Datadog. Welcome to all of you here in the theater and everyone joining us online for our 2026 Investor Day. Before we dive in, just a few reminders. During this presentation, we will make forward-looking statements, including statements related to our strategy, product development, market opportunity, and financial goals. These statements reflect our views today and are subject to a variety of risks and uncertainties that could cause actual results to differ materially. For a discussion of material risks, please refer to our Form 10-Q for the quarter ended September 30, 2025, and other filings that we may make with the SEC, including our Form 10-K for fiscal 2025.

We will also discuss non-GAAP financial measures, which are reconciled to their most directly comparable GAAP financial measures in the appendix to this presentation, which is available at investors.datadoghq.com. All right, let me briefly run through our agenda for today. In the first half, we will focus on our strategy, platform, and product innovation. You will hear from our co-founders, CEO Olivier Pomel and CTO Alexis Lê-Quôc, followed by product leaders Yrieix Garnier, Tim Knudsen, Michael Whetten, and our Chief Product Officer, Yanbing Li. We will follow that with a Q&A with that group. In the second half, we will discuss our go-to-market, how we deliver value to customers, and our financial performance. Our presenters will be CRO Sean Walter, COO Adam Blitzer, and CFO David Obstler. That group will be joined by Olivier for another Q&A session.

With that, let me turn it over to Olivier to kick things off.

Olivier Pomel
CEO, Datadog

Thank you, Yuka. All right. Hi, everyone. My name is Olivier Pomel. I'm the Co-founder and CEO here at Datadog, and welcome to the Investor Day. You're going to hear from several of our leaders today, and I know they're quite excited to present. As we discuss what we're doing here at Datadog, our goal is to show you why, even after 16 years of building the company, we believe that we're still only just getting started. We're barely scratching the surface of the opportunity. So I'll kick us off with a quick recap of the problem we solve today, how our platform has expanded over time, what we're up to with AI, and I'll give you a sense of the broad direction we're taking over the long term.

So let's start with cloud migration and digital transformation. You have here the usual beautiful chart with Gartner data, which shows a sustained rate of migration over the past few years that is expected to continue for the foreseeable future. It is worth noting that Gartner expects spend on public cloud to exceed $1 trillion by 2027. Even with that, it would still add up to only 16% of global tech spend. Why is this happening? Because as a company, any kind of company, you absolutely have to. You have to interact with your customers online, you have to differentiate from the competition through innovation, and you have to run in the cloud to get agility, short time to value, and efficiency.

To be honest, you also have to lean into the best tech so you can hire and retain great engineers. In the end, this modernization leads to better business outcomes. All of this was true over the past decade, and we expect it to be even more pronounced in the age of AI, as you cannot adopt AI if you are not digital and in the cloud. Our customers want to build their applications in the cloud as fast as possible using the latest technologies. This is a slide I've been using for a while now, and it is still true: it illustrates the explosion of complexity we've all witnessed with tech innovation over the past 20 years.

So I won't spend too much time on it because we've gone through it before, but you can note that we added a few things in there, such as on the chart on the top right, which is the scale in compute units. You see that now we've added the GPU fleets with many teraflops being deployed all the time. And the way you read those charts is you multiply each of them. And so what you end up with is truly an explosion of complexity. Now, AI is a large, rapidly growing, and exciting new area of spending, and I think we all know this, but to put things in perspective, we have a few Gartner numbers, so the market opportunity really is very large.

And for us, it is the next big thing that's going to bring both better business value to customers and additional leverage internally in our business. But it's also going to compound the complexity our customers face to take advantage of it. And so we added a couple more charts in there on the right side. You'll see one on the top right that illustrates the number and also the scale of models that are available in Hugging Face. And on the bottom right, we have a chart that shows the increase of developer productivity. And we've all seen the shocking ascent of coding agents over the past few months, but really, this is only the continuation of a much broader trend that has been going on for decades.

You know, when we started way back when, you had to write all the code yourself in low-level languages. Then you could use higher-level languages, so you became 10 times as productive. Then the internet came along, and you could learn about everything so much faster, so you gained another order of magnitude in productivity. Then we saw the ascent of open source software and the cloud, so you could reuse fully built components or full end-to-end services, and you gained another order of magnitude of productivity. And today, with the rise of coding agents, I think we're staring at the next one or two orders of magnitude right now. So this is the problem we solve.

You know, to put it simply, Datadog exists to solve this enormous problem of complexity for our customers. We connect to all of their software components, we scale with all the infrastructure compute units they deploy, and we scale with the services they create and deliver. We understand the way infrastructure and applications are changing, and we connect separate teams to each other across different functions. Now, what's interesting is that none of these problems are going away in the age of AI. In fact, the complexity is even greater because AI allows things to be built faster. There's a lot more of everything, and the stakes are much higher as agents start to act on their own and not just assist humans.

Our response to win this race against complexity is to invest, to invest in innovation, and to invest on behalf of our customers. Over the years, we've invested about 30% of revenues into R&D. In 2025, we invested over $1 billion in R&D and ended the year with about 4,000 engineers. We believe that we are investing several times more than our largest peers. We are also adopting AI in R&D and dogfooding our own products, which gives us unparalleled leverage and velocity in our space. That investment has led to an expansion of our platform over time and our successful entry into new categories to solve more problems for our customers. We deliver this all on a unified platform that breaks down silos among what used to be disconnected teams and data sets.

Now, all those categories you saw on the previous slide are part of the critical user flows that make up the activities of our customers' businesses. What we see here on this slide is one way you can model, on the customer side, the continuum of problem areas that take customers from tech innovation all the way to realizing business value. You have writing code all the way to the left and understanding where the value is and running the business all the way on the right. We started right in the middle, in observability. Of course, we didn't have all the functionality listed here from day one. We added a lot of it over time as we kept going deeper and kept covering all the new technologies our customers were adopting.

This includes new innovations to help our customers observe their AI stack, as you can see in the highlights there. More recently, we've added capabilities around the data layer, bringing observability across the entire data life cycle, from ingestion to transformation, and all the way to downstream usage in reports, applications, and AI models. And the market for data observability is picking up quite meaningfully as data is critical to developing and adopting AI. Data quality monitoring, in particular, resonates with customers. Looking to the right, we have been building a great business in digital experience. We are now expanding further into user and product analytics, where we are seeing rapid customer adoption, and we are improving value for customers with our Synthetics and RUM products, which can scale very cost-effectively to extremely large consumer user bases.

What we found when getting all of this into the hands of customers is that by broadening our scope and removing painful integration points, we see the value of our platform go up dramatically in surrounding areas. Now, looking left on the developer side, the developer landscape is evolving very quickly with code gen tools and rapid deployments. And so the market for tools that help developers build, deploy, debug, and iterate is expanding very meaningfully. So we've been bringing more value to developers with capabilities such as Feature Flags, the Datadog MCP Server, and the Bits AI Dev Agent, and we have much more to do in this software delivery space. You will hear later about this topic from Michael. Security is a concern that spans from end to end across the spectrum of development, production, and user interaction, and our suite of products delivers against that need.

This includes helping find vulnerabilities in development, identifying and eliminating threats in production, and securing sensitive data around live user activity. We are building AI capabilities to move faster and preempt security problems. You'll hear more from Tim on this area. We've been building our cloud service management products as well. What this involves is going beyond helping our customers understand and secure their systems, and into helping them coordinate people and teams to manage, communicate, organize, take action, and more and more, automate response. We are building on our momentum in this space, including the successful launches in 2025 of our OnCall product and of Bits AI SRE Agent, and Yanbing will share more about that.

So if we look at the platform altogether, we are delivering an end-to-end suite of capabilities that help our customers build faster, deploy confidently, fix problems rapidly, and deliver better business outcomes. And we are breaking down silos across operations and DevOps teams, data engineers, product designers, developers, security teams, incident responders, FinOps teams, and business users. We have much more to do in each of these areas, but we've made meaningful progress over the years and are seeing broad customer adoption, which shows that we are delivering value. As proof of the value we're delivering across multiple categories, as I discussed on the earnings call a couple of days ago, we now have $1.6 billion of ARR in infrastructure monitoring and over $1 billion of ARR each in both log management and the end-to-end APM and DEM suite.

The fact that we have real balance across the three pillars of observability, as well as meaningful scale in each one, shows that Datadog is unique within the industry in establishing true platform value for customers. Even though you've heard a lot of competitors say they have three-pillar capabilities, their business typically remains driven by just one of these pillars, which dramatically reduces the value they're able to deliver against customers' explosion of complexity. All right, so I want to talk a little bit now about our AI build-out and break it down into two buckets. First, we're building AI for Datadog. We're embedding AI across Datadog, so every type of engineer can move faster. Our AI agents now surface context, identify problems, and recommend fixes faster than any human can.

They can be proactive in getting in front of issues, and users can interact with them in plain English. The second category is to cover the AI applications or agents our customers are building or running themselves, and we call it Datadog for AI. If you're putting LLMs or agents into production applications, those systems do need observability. They need to be monitored like any other critical app. In fact, as I mentioned earlier, the stakes are higher as AI agents can now take action. So we're building a full stack of products so users can understand and improve their AI solutions. So here's that AI for Datadog bucket, but laid out across our platform. As you can see, there are capabilities across every single layer of our platform now, and we're building more. And here are the capabilities in the Datadog for AI bucket. Same thing here.
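
To make the monitoring need described here concrete, here is a minimal, illustrative sketch of instrumenting an LLM call: record latency, token counts, and errors as spans. This is not Datadog's actual SDK; every name here (`LLMSpan`, `observe_llm_call`, `fake_llm`) is a hypothetical stand-in, and tokens are approximated by whitespace splitting.

```python
import time
from dataclasses import dataclass

@dataclass
class LLMSpan:
    model: str
    latency_s: float
    prompt_tokens: int
    completion_tokens: int
    error: str = ""

spans: list[LLMSpan] = []  # in a real system, spans would be exported to a backend

def observe_llm_call(model, prompt, call_fn):
    """Wrap a single LLM call and record an observability span, even on failure."""
    start = time.perf_counter()
    try:
        reply = call_fn(prompt)
        spans.append(LLMSpan(model, time.perf_counter() - start,
                             len(prompt.split()), len(reply.split())))
        return reply
    except Exception as exc:
        spans.append(LLMSpan(model, time.perf_counter() - start,
                             len(prompt.split()), 0, error=str(exc)))
        raise

# Stub standing in for a real LLM endpoint.
def fake_llm(prompt):
    return "pong " * 3

reply = observe_llm_call("demo-model", "ping ping", fake_llm)
print(len(spans), spans[0].completion_tokens)
```

The same wrapper shape generalizes to agent steps and tool calls, which is the kind of coverage described for production AI systems.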

AI-specific instrumentation permeates every single layer of our platform, and we believe we are uniquely positioned to win market share in each of these areas. The last time we had an investor day, two years ago, we told you this: We wanted to make our customers one order of magnitude more productive for every single step, going from code to business value from end to end, and we call this closing the loop. We are well on our way to deliver this vision on behalf of our customers. We do this across 30,000 customers, from the most tech-forward AI-native companies to the largest Fortune 500 companies in every industry, in every geography around the world. Here's the loop that we've been helping our customers close.

For their production systems, it's an endless cycle of making or incurring changes, figuring out how to understand the affected systems, and fixing problems as they arise. It's complicated, it's frustrating, and it's expensive, and today, we save our customers time and money by breaking down silos across teams and data sets, and we close these loops tighter, faster, and cheaper. But there's a second key workflow that is becoming increasingly relevant, thanks to the rise of AI coding, and that's the loop that moves software from development to production. The time spent and value created have, to this point, been heavily concentrated on the build part. But with the rise of AI coding, software engineers will work much faster.

Already today, we have, at Datadog, experienced engineers who are building features 10 times or 100 times faster than they could before, and I think I've seen some of them in the room today. So please, don't hire them away anywhere else. So the value is instead shifting quite rapidly from coding to being able to evaluate changes and deliver business value. This is the space where code meets production environments, and that is exactly where Datadog is. Bringing 100 times more code to production will not be easy. The tough part is making it work in the real world. It will come into contact with all the other code and components in the environment. It will need to be reliable and scalable while being cost-efficient.

It will need to maintain the security of the business and user data, and in the end, it will have to deliver great business outcomes. As you'll hear today, we think our investment in innovation, deep domain expertise, large and diverse customer base, and massive corpus of data are all factors that will help us be a critically important part of the solution to this problem. Our place in the world is right where code meets production environments, other applications, other agents, end users, and broadly speaking, the real world. And that is what we think is the most impactful place in AI development. At Datadog, this is what we've been building towards for the past 16 years. We've been around long enough to be part of the transition from monitoring to observability and to drive the adoption of DevOps at the time when cloud was just emerging.

With code development accelerating potentially by orders of magnitude, the problem we solve is expanding to be an even more important, pervasive, and valuable one going forward. Solving this problem will take us from observability to the edge of autonomy. Enabling autonomy will mean that we validate our customer systems, apps, and agents and that we do so as they are increasingly AI-coded or rely on probabilistic AI models that are by definition harder to test or predict. It will mean that we help maintain their security and safety, that we keep our customers' agents aligned with their intent and constraints, that we give our customers the right control mechanisms and automate their feedback loops, and that we verify that every change generates the expected business outcomes. That's where we are headed.

We think we have a unique opportunity to enable autonomy for our customers across development, operations, and security, to support our customers in their goal to rapidly deliver business outcomes accelerated by AI, to bring everything together end-to-end, and to give our customers the ability to harness complex new technologies and deploy with confidence. This is the latest and by far the biggest area of opportunity for us. As you'll hear later, our market, just observability, is very large and growing quickly. Even though we've successfully grown our business over time, our market share is still only in the mid-teens. As I showed you, we have made significant headway in building capabilities in other areas, and those do significantly expand our addressable market.

Furthermore, we are building both AI for Datadog and Datadog for AI, and we're embedding both broadly and deeply into the Datadog platform across every single product or use of AI. We're already delivering AI capabilities that our customers use and love. Meanwhile, our investments will deliver a quickly accelerating pace of innovation as our engineers themselves build with AI and dogfood our technology. We are very focused on deploying those investments to achieve this vision of enabling autonomy for our customers. If we can get this right, the sky's the limit. This is why we feel that we are still just getting started and barely scratching the surface. Now, we'll turn it over to Alexis, who will talk about our data-driven advantage and how we apply that to Next Gen AI. Thank you.

Alexis Lê-Quôc
Co-Founder and CTO, Datadog

Thanks, Olivier. Hi, everyone. My name is Alexis Lê-Quôc. I'm the CTO and one of the co-founders. I think Olivier gave you a clear picture of the dizzying complexity our customers operate at and presented very clearly the opportunity ahead of us, given the platform we've been building for almost 16 years. What I wanna do now is explain further why we at Datadog are uniquely positioned to use AI to deliver on this vision. So Datadog is unique in how much data we get and have, but also in how much we know about the infrastructure, applications, and systems running out there. We ingest data at a significant scale: trillions of data points, billions of traces, exabytes of logs.

We also have a diversity of data that users are sending to us about their systems. Besides metrics, this could be traces, logs, user sessions, data jobs, lineage, LLM and agent traces, team structure, service names, and many, many other pieces of information. They come from our SDKs, our agents, our crawlers, and our integrations. That data is, of course, what powers the Datadog of today and gives our customers their current observability. But it is also the foundation for the AI needed to deliver fully autonomous operations. A few years ago, we started an AI research lab because we were convinced that, given the amount of data we have and given our R&D capabilities, we could be leaders in building AI specifically for observability. How did we start?

So we wanted to prove that having lots of domain-specific data gives us an edge, and thus was born our first foundational time series model, named Toto. Now, if you look for the largest public data set of time series data, you get to about 300 billion data points, and it covers domains that you'd expect: finance, healthcare, energy, transportation, some web traffic, and so on. But we trained Toto on three times that amount, and the vast majority of the data we used for pre-training is completely unique to Datadog, all related to applications, infrastructure, and software systems. As a result, when we compared Toto to other time series models on their ability to forecast, it performed far better than other models out there. So we reached state-of-the-art.
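
As a hedged illustration of how forecasting comparisons like this are typically scored (this is not Toto's actual benchmark code; the series and "models" below are toy stand-ins), one common approach is to hold out the tail of each series and compare mean absolute error against simple baselines:

```python
def mae(forecast, actual):
    """Mean absolute error between a forecast and the held-out values."""
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)

def naive_forecast(history, horizon):
    # Repeat the last observed value: the classic naive baseline.
    return [history[-1]] * horizon

def mean_forecast(history, horizon):
    # A second toy "model": predict the historical mean.
    m = sum(history) / len(history)
    return [m] * horizon

series = [10, 12, 11, 13, 15, 14, 16, 18]   # toy metric, e.g. requests/s
history, actual = series[:-3], series[-3:]  # hold out the last 3 points

for name, model in [("naive", naive_forecast), ("mean", mean_forecast)]:
    err = mae(model(history, len(actual)), actual)
    print(f"{name}: MAE={err:.2f}")
```

A real evaluation would sweep thousands of held-out series across domains and aggregate scaled errors, but the hold-out-and-score structure is the same.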

And from there, what we did is we released it as an open-weights model on Hugging Face. I think that was last May, and we've seen a significant uptake since, about 9 million downloads. Now, you may wonder why release it as open-weights. What's the point? Well, for three reasons. One, and maybe most importantly, we want to contribute to the field. This is a nascent field that I think can use all the help it can get. Two, it was important for us to establish our credibility as an AI lab. And three, because it's actually a way to gauge how important these models are by seeing how many downloads they get. But I'd say the most important difference we noticed is the difference in cost compared to the AI models you use every day.

We spent about $375,000 to train this model in 2025, which was, at the time, three to four orders of magnitude cheaper than a frontier model of the same vintage. Now, sure, frontier models can do a lot more than what Toto can do, right? They can speak hundreds of human languages; they can review and amend legal contracts or analyze medical imagery. But none of that matters in the context of observability. And with Toto, we showed that we can get good results with lots of proprietary data and small models. But we'll get back to that. Let me talk a little bit about training.

So really, when you're creating a model from scratch, you spend a lot of time and money on the pre-training stage. Both pre-training and training are essential to produce models and agents that are useful in real-life situations. So let us see how we trained the Bits AI SRE agent. If you're not familiar with Bits AI SRE, it's what's in the name: it's effectively a site reliability engineer. It is tasked with finding and building a plausible causal chain, starting from a symptom on a software system somewhere and building the chain from there.

So when an alert signals an issue in an application, for instance, Bits AI SRE develops hypotheses about where the problem is coming from and analyzes all the data available. The goal is to identify the root cause that can then be addressed, so that the symptom goes away. As you can imagine, and you'll hear more about Bits AI SRE, it's very popular because there are always problems out there on software stacks around the world. Now, in order to train an agent to correctly identify issues in production, we need a solid baseline of past incidents and correct root causes. Here's how we've done it. Like any complex system, our own platform is constantly evolving and constantly being maintained. Think of it like the city around us. You know, the streets need to be plowed.

The city needs constant attention and effort to fight entropy so that life can keep going. So in the course of any day, our engineers investigate and fix issues. And what they do, too, is record their findings, as well as the entire set of observability data that was needed to reach a conclusion, and they turn that package into an evaluation, or an eval. Because that eval comes from the analysis of a human expert, and it was used successfully to troubleshoot and fix an issue, we know we can trust it. And every time we make a change to a model or a model instruction, we run through that growing body of evals and see if the model gets better or worse. This is the chart behind me showing the number of evals our engineers have recorded over time.

It is not something they can outsource to non-experts, and it is important to cover as many use cases as possible; for that, you need a large infrastructure, and we have that infrastructure. Going back to the city analogy, keeping New York City up and running takes a lot more variety and scale of effort than keeping up a small village. And so as we've recorded more evals, we've seen the accuracy of Bits AI SRE go up, not in a straight trajectory, it's up and down, but the trend is clear. And that's how we've improved the quality of Bits AI SRE.
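
The eval loop described here can be sketched as a tiny regression harness (an illustrative stand-in, not Datadog's actual system): each recorded incident pairs observability evidence with the root cause a human expert confirmed, and every candidate model is scored against the whole set before it ships.

```python
from dataclasses import dataclass

@dataclass
class Eval:
    evidence: dict   # observability data the engineer used
    root_cause: str  # conclusion a human expert verified

def accuracy(agent, evals):
    """Score a candidate agent against every recorded eval."""
    hits = sum(agent(e.evidence) == e.root_cause for e in evals)
    return hits / len(evals)

# Toy agent: blames whichever component shows the highest error rate.
def candidate_agent(evidence):
    return max(evidence, key=evidence.get)

evals = [
    Eval({"db": 0.9, "cache": 0.1}, "db"),
    Eval({"db": 0.2, "queue": 0.7}, "queue"),
    Eval({"api": 0.4, "db": 0.3}, "network"),  # cause outside the usual suspects
]

baseline = 1 / 3  # hypothetical accuracy of the previous model version
score = accuracy(candidate_agent, evals)
print(f"accuracy: {score:.2f}")
if score >= baseline:
    print("candidate accepted")
```

The growing eval set plays the role of a test suite: a model change only ships if accuracy over the full set does not regress.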

We're also going beyond this human-curated set of evals by building harnesses to generate synthetic data, to reach an even larger scale, which we think is necessary. And as our customers have begun to use Bits AI SRE, they've sent us feedback: "This was useful, this was not useful, this worked, this didn't, and here's why." We've, of course, used that as evals too, and that's great because it continues to enrich the diversity of environments and problems that our agent faces. And the reality is there's no shortcut in pre-training and training to get this kind of high-quality result.

You just need lots of data and expertise, and that, we feel, is a strong differentiator for us. So let me sum up our advantage. One, we have continuous access to lots of clean and rich data that we can use for pre-training and training of foundational models and agents. Two, we are building our own models. Three, it's not only the sheer amount of data that matters, it's also the diversity; it has to come from a broad context, and that's very helpful. Four, we bring to bear our domain expertise to improve our model and agent performance. So we think we're unique in our ability to deliver the best in observability, powered by AI, bar none.

Now, you may wonder, "Okay, that's great, but why not just take the data and throw it at a frontier model and see how it does?" In other words, replace Datadog with just a bunch of data and a frontier model. We've done that internally, and our customers have probably tried it too. A frontier model is great at summarizing data: give it the right context, and you'll get some good results. The main problem is that it's extremely expensive. So here's the idea. Here I'm plotting cost and accuracy, if you will.

First of all, training a frontier model starts at around $1 billion, and it's unclear how the accuracy slope trends as you go orders of magnitude past that. Whereas our research tells us a different approach works and is, we think, better. We've built much smaller models on the exabytes of data we have. And what we've proven is that these small models have orders of magnitude better accuracy per dollar, simply because we're not paying for capabilities that frontier models have but that we know won't be useful in the context of observability and security. And even if you hire lots of engineers to fine-tune and do RLHF and so on, you're not going to be able to match that accuracy at the same cost.

Because really, when you use a frontier model, you're paying for the amortized cost of pre-training and training, and on top of that, for the vast quantity of hardware needed to run inference on a frontier model. So in summary: one, we believe that autonomous operations need very good models at a low cost; and two, our approach, with small, dedicated models, lots of real-time data, and ongoing evals, is, we think, the way to go. So I've shown you why we build our own models and how we improve them based on real-life observation. Now, you may wonder, how do we tie all this together to get to fully autonomous operations? Because autonomy requires validation, safety, security, alignment, and control. In other words, we have to go beyond only observing a system.

We have to understand its behavior and maintain control through verified changes. So in our lab, we've built, as I mentioned, a bunch of models, but we've come to understand that what we really need for this domain is a world model. So how are we going to do that? We take all the data we have, and the model needs to encode and represent the state of distributed systems. Now, we are already in a position to observe these systems, because we're plugged in everywhere; we get these exabytes and trillions of data points every single day. So what we're working on right now is an optimal representation of code, system structure, and system behavior. And we think it is essential to predict the future behavior of software with high enough accuracy.

For that, you need to understand past behavior and have knowledge of how such a system was built. Without it, it's not going to work. Without high accuracy, there won't be autonomy, so we have to get to high accuracy. When we think about steps towards autonomy, we think in terms of stages, represented here on the right-hand side: starting with customized and adaptive observability, then proactive learning and alerting, then automated remediation, then predictive, proactive, and preemptive operation, and finally autonomous operation. And with Bits AI, we've tackled the first stage, automating away the more manual steps of observability. So the need, for instance, to build dashboards by hand is gradually going away.

We've built proactive alerting and automated remediation, with the goal of taking care of an increasing number of cases where currently people are still in the loop to fix things. There, we think our trajectory is going to be like that of coding agents: more narrow and limited in scope in the early days, but increasing as time passes as we refine our models. And as that happens, as accuracy and coverage continue to increase, we'll be able to further predict and then preempt and prevent issues before they occur. When we reach that, we have a self-healing, self-managing, self-optimizing system, able to operate customers' infrastructure and applications with fairly limited human intervention, and that is our goal, as Olivier said. Now, there's obviously a healthy amount of work to get there, but we think we're uniquely positioned to make it happen.

That is where we're going, because that is the only worthwhile goal. But for now, let us hear about what we're delivering for customers today, and for that, I'll hand it over to Yrieix.

Yrieix Garnier
Product Leader, Datadog

Thanks, Alexis. Hi, everyone. My name is Yrieix Garnier, and I'm super excited to be here today for my second Investor Day. Olivier and Alexis talked about how we're moving toward autonomous operations. In this section, I want to take it back to its foundation, the Datadog platform. The platform is where Datadog started, and over the years, it has really helped us break down silos among teams and data. To do that, we've built a robust set of elements: thousands of integrations, a common UI, data services, and we have about half of our 4,000 engineers working on the platform. That investment allows us to evolve it very quickly and to seamlessly integrate AI to better serve our customers. Bits AI, which is what we call our AI capabilities, is present throughout the platform. Let me give you a few examples.

It can analyze and correlate all the different data types we're getting from our customers to detect and investigate issues, remediate them with code fixes, or let users interact with Datadog through natural language. And we're just getting started, with more AI capabilities to come. But the platform is also critical for a rapid pace of innovation. By leveraging the platform and all these building blocks, our engineering teams can stay very lean but also move very quickly to deliver new products or enhance existing features. This is really the flywheel between platform and products. For instance, it allowed a handful of engineers to deliver an end-to-end Kubernetes Autoscaling solution. Or, after the Metaplane acquisition, we were able to take data observability to market in record time.

So today, we have dozens of products, and we continue to advance our platform to accelerate this flywheel effect and build even more products faster. But let's look at the data ingestion side. We've also added a number of data sources, including eval sets and LLM inferences, and we can ingest extremely large amounts of data to provide analysis and correlation in the context of our products. Our end goal is really to deliver value to our diverse users, and also to the rising number of agents. They all need the data in one place to produce the best outcome. As Olivier said, our customers are facing a rising level of complexity, and our job is to stay ahead of it. And if we do all that right, we provide a single source of truth that breaks down silos across all users and agents.

The platform has come a pretty long way over the years. Let me reflect on that. In 2015, we started with a few integrations, handled millions of events per hour, and had only one product, infra monitoring. But our customers' needs and demand grew exponentially, and today, we have over 30,000 customers, including some very, very large ones. We have 25 products, and we can handle trillions of events per hour. As you can see, our platform operates at unprecedented scale, and it's one of the main reasons our customers keep choosing Datadog: we can store, process, and move data with ever-improving performance and cost efficiency. But let's look at this from a customer angle. This chart shows a very fast-growing AI company.

Their customer demand grew very quickly, but their end users were experiencing bottlenecks, and they needed a unified observability solution to get to root causes as fast as they could. To do that, in about one year, they adopted 16 Datadog products, and that was really key to their business success. That's the power of the platform. Now, let me speak to how we're scaling the way our customers ingest, retain, and analyze their growing data sets. To illustrate, let me take the example of log management. We launched it back in 2018, and it has gone through multiple scaling phases. The first phase was called Logging without Limits, and it's all about ingesting all your logs without limits and only processing the relevant ones, with correlation to infra monitoring and APM.

Bringing in logs in a valuable but still economical way is really the foundation that helped grow that business 7x to $1 billion in ARR. It's the foundation of our customers' value, and it worked great for observability logs. But we also knew that we were only capturing a fraction of our customers' logs. Other use cases, like transaction logs or audit logs, typically involve log volumes that are orders of magnitude larger and that need to be stored for much longer periods of time. So as a second stage of growth, we've extended our capabilities to support those use cases with Flex Logs, Frozen, and Archive Search. By doing so, we've unlocked new market opportunities and delivered more and better outcomes to our customers, and it has worked: we saw very strong adoption of Flex Logs from the start.
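As a rough illustration of the Logging without Limits idea described above, here is a minimal sketch of separating ingestion from indexing: everything is retained cheaply, but only relevant logs are indexed for fast search. The field names and thresholds are hypothetical, and this is illustrative only, not Datadog's actual pipeline.

```python
# Sketch: ingest everything, index only what's relevant.
# Field names ("status", "duration_ms") are hypothetical.

def should_index(log: dict) -> bool:
    """Index errors and slow requests; archive-only for the rest."""
    if log.get("status") == "error":
        return True
    if log.get("duration_ms", 0) > 1000:
        return True
    return False

ingested = [
    {"status": "ok", "duration_ms": 12},
    {"status": "error", "duration_ms": 40},
    {"status": "ok", "duration_ms": 2500},
]

indexed = [log for log in ingested if should_index(log)]
archived = ingested  # everything is still retained in cheap storage
print(len(indexed), len(archived))  # 2 of 3 indexed, all 3 archived
```

The design point is that indexing cost scales with the filtered subset while retention scales with the (much cheaper) archive, which is what makes ingesting everything economical.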

Now, customers are storing tens of trillions of events, and Flex Logs is approaching $100 million in ARR and growing very rapidly. To keep pushing on this log journey, we've leveraged the platform to build a new product, Cloud SIEM. Cloud SIEM is a natural extension of Flex Logs: it requires logs to investigate security issues over long periods of time, and Flex Logs really unlocked that security revenue. With 18x growth in five years, we are still seeing an acceleration in Cloud SIEM adoption, and we are still at the beginning of that opportunity.

Tim is gonna talk more about this. So let's now look at the impact of platform and data scale on a whole customer journey. This e-commerce customer started using Infra, then in 2022 adopted APM, Synthetics, and Logs, and in Q4 2023 became a Flex Logs and SIEM customer. As you can see, over time, they consolidated more tools within Datadog, which enabled them to address an increasing number of use cases, cover more environments and larger data sets, and, most importantly, enhance their customer experience. That's how Datadog and customers really scale together. To finish, I want to talk about enterprise coverage. This is where the platform effort helps our largest and most sophisticated customers drive tool consolidation.

Beyond monitoring everything in the cloud, a number of our customers are asking us to cover more of their environment by combining cloud and on-prem. With our extensive integrations, this works out of the box for Datadog. We already support on-prem servers and networks, and now we also monitor wireless access points, end-user devices like laptops and desktops, and edge devices. With that, customers can see their entire physical footprint in one place and combine it with their cloud environment. And for us, it means a larger footprint and a bigger market opportunity. We also continue to expand our coverage across our customers' tech stacks. Historically, we've been focused on cloud deployments, and we've anticipated market trends, as we were first with containers and serverless.

We have very heavily containerized customers, and one of them is actually our own internal team, as we heavily dogfood our own products. But for non-cloud environments, the stack is typically simpler and based on more legacy architectures. For those environments, which send less data and are less complex, our pricing was not well adapted. So now we're pricing for value, to respond to customers who want that end-to-end visibility, including their legacy stack. As we think about the future, there are a few customers now, and potentially many more to come, that will adopt GPU-based architectures to serve their AI applications. The costs of building and running these environments are really high, and being able to improve performance and efficiency is critical for those modern AI workloads.

So with solutions like GPU monitoring and AI observability, we're ahead of the curve, and we're already capturing AI-native workloads. If there's one thing you take from my talk, it's that the Datadog platform is our key differentiation. It allows us to move fast, stay lean, and fully support our customers' use cases and their tech stacks, including AI workloads. But one last thing before I conclude. Some customers cannot send data out of their environment, either due to data residency or compliance restrictions, or due to very high volumes, and I'm talking about petabytes per day, which could become really cost prohibitive. Datadog BYOC, Bring Your Own Cloud, is built for those customers, as they can keep their telemetry in their own environment.

They can index, store, and search data within their own infrastructure while using Datadog's powerful single-pane-of-glass cloud solution for correlation, analysis, monitoring, and alerting. We're already engaged with very large companies previewing BYOC, and we expect to unlock even more opportunities with it. So that's it for me and the platform. Let me hand it over to Tim to talk about security. Thank you.

Tim Knudsen
Head of Security Products, Datadog

All right. Thank you, Yrieix. Hello, everyone. My name is Tim Knudsen, and I just joined Datadog, and I lead our security products. Prior to Datadog, I held product and GTM leadership positions at Google Cloud Security, Zscaler, and Akamai, where I led global teams to deliver market-leading solutions and build security businesses. So let's begin. When we speak to customers, we commonly hear about these painful siloed stacks, okay? Threat detection is in one place. The application behavior and performance data needed to investigate threats is in another. The security posture of the resources and services supporting the apps, knowledge of which teams own what and in which repositories the code lives, all live in yet more separate silos, if anywhere at all. This results in detection gaps, prioritization noise, and remediation slowdowns.

And now, in the era of AI and agents, there's a whole new threat landscape further amplifying this problem. But with Datadog, we flatten those stacks. We combine and correlate security and observability in a fully and truly unified platform, with all the capabilities for the SOC, for cloud security, and for DevOps, so all the critical teams defending the enterprise, from security to SRE to developers, can work together smarter and faster. Over the past couple of years, we've been building out our security capabilities on the Datadog platform, and we now offer a broad portfolio that bridges the traditional silos between reactive security on the right and shift-left, proactive security on the left. For the SOC, we have our Cloud SIEM. It delivers cloud scale, unmatched data flexibility, direct pivots into observability data to speed investigation, and Bits AI Security to autonomously conduct investigations and accelerate threat detection and response.

For cloud security, we provide posture scanning, runtime vulnerability detection, and attack analysis, combined with observability-enriched prioritization to tell you which cloud risks matter most. With AI and data security, Datadog automatically secures sensitive data and offers comprehensive runtime protections for AI agents to enable safe AI transformation. And finally, for developers and DevOps teams, we have Code Security. It helps identify vulnerabilities in first-party code, open source libraries, and infrastructure as code, all before they move to production, dramatically reducing alert fatigue. And with Bits AI, Code Security generates bulk remediations, so developers can deliver secure code while getting to spend more time on building and innovating. Plus, because our security products are built on the Datadog platform, they all take advantage of Datadog's shared services to accelerate remediation.

For example, integrated case creation and incident response, so security, SRE, and DevOps can rapidly collaborate with one shared view. Or, going one step further, Agent Builder can be used to build custom automated remediation workflows. So we have growing proof that the Datadog advantage I just outlined, unifying security and observability, is delivering value to customers. Today, Datadog has over 8,500 customers using our security products. That includes one in four of the Fortune 500, and we've now surpassed $100 million in ARR. And you know who else is using our security products? We are. The Datadog security team secures the Datadog platform for our 30,000+ customers using our own products. We put our reputation and our business on the line as an indication of our confidence as a security vendor. So we're off to a good start.

If you wanted to pause, that's okay. We're off to a good start, but we believe we have a big opportunity ahead of us. Today, 70% of our million-dollar customers use one or more Datadog security products, but security together represents only 2% of their Datadog spend. As a result, we see potential for much more wallet share as we deliver more security products and capitalize on that Datadog advantage. Let me give you an example. Here is a long-time Datadog customer in the media market who, over time, has adopted a number of security products from Datadog. They wanted a unified observability and security platform, and they chose to consolidate on Datadog over market alternatives like Palo Alto, CrowdStrike, Wiz, Google, and Microsoft, just to name a few.

As a result, today, 20% of their Datadog spend is on our security products, and that could just be the start. For another one of our million-dollar customers, 38% of their business with us is security. So we see a clear opportunity with all of our current and future customers to go deeper and broader with security. And how we get there is the value of the Datadog advantage that combines and correlates security and observability in a truly unified platform. So that's it for me. Let me hand it over to Michael to talk about what we're doing for developers.

Michael Whetten
SVP of Product, Datadog

Thank you, Tim. All right. I love seeing everybody taking a ton of notes. Anybody using AI right now, in this very moment? I knew it. There are some people out there. Advantage, right? So I'm Michael Whetten. We live in an era of speed right now, right? I don't know if y'all feel it, but my customers are under intense pressure to compete, innovate, build, and ship products that work, and scale them out as fast as possible. And it's been like that... I've been with Datadog almost 10 years now, and it's been like that since the beginning with the cloud, this competitive advantage that technology can bring, but it feels like it's accelerating, right? But as Olivier said, the complexity is a drag on their ability to innovate, right?

The big companies have a lot of complexity, and they're feeling the drag in different ways than small companies, who are having trouble getting to scale, right? They have drag in different ways and different types of complexity. And if there's one thing that you take away from today, it's this: I know some of you have traditionally written about Datadog as kind of an insurance company, a utility company, some must-have for companies at scale. But more and more, my customers, even the fastest-growing companies in the world right now, in the most hyper-competitive landscapes, are telling me they're buying Datadog because they need to move fast, and Datadog enables them to do so. So let me walk you through a few examples of how I see this working.

So one type of complexity that we see a lot is fragmented visibility: observability, or monitoring, or product analytics, or whichever point solutions they have here. Here's a simplified diagram of a single application at a company. Each of these fragments tends to represent a different team, right? And even though it's a single application, and this user is interacting with it, making requests, getting responses, traversing through whatever your application does, the fragmented nature of the organizations required to serve this user doesn't always provide the best user experience. So when an issue happens, right, if I go to a lot of my customers right now, or potential customers, usually...

My favorite thing to ask them, and it's kind of a trolley question, is, "How do you know that things are broken right now? What triggers an incident?" Most of the time, they'll sheepishly look at the floor and say, "Well, customers," like a support ticket, or somebody tweets that something's down, and then we call an incident and immediately start reacting. Some of you are nodding 'cause you must hear this from people as well, right? The problem is that all of these different teams are using different tools, so we have a bit of a Tower of Babel problem, right? Who knows whether the signals that are going off even point to the root cause of the problem? To use a real-world example, there's a major global bank with 5,000 engineers.

So that was one application. Now, at this bank, there are hundreds of business units, and each one has many applications: 1,500 applications spread across the organization, right? And the problem they have is that the way the organization is set up doesn't match the user experience. Somebody might be trying to make a withdrawal or a deposit at an ATM or on their phone. They don't know how the company is set up or whose problem belongs to whom; they just know they can't do something. But when something goes wrong, these are all interdependent technologies, and troubleshooting this is a nightmare, right? So they consolidated everything onto Datadog. They went and ripped out all those point solutions and brought it all in.

And the beauty is that it's not just one single pane of glass. The real advantage is that from the time of collection to the time it lands in Datadog, we're automatically summarizing and correlating all that data. We're making sense of the data so that when it lands here, most of the work is done, and when a signal goes off, it's typically in the right place, notifying the right team, and they can respond to it much faster. So you have global impact within an organization just from consolidating onto one platform, right? Millions of dollars a day in cost avoidance, better customer sentiment, but the real advantage, the real value, isn't actually displayed here.
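To make the correlation point concrete, here is a toy sketch of why a unified tagging scheme matters: when every metric, log, and trace carries the same service tag, an anomalous signal can route itself to the owning team instead of paging everyone. The event fields, service names, and team names below are hypothetical.

```python
# Toy routing over unified telemetry: all event types share a
# "service" tag, so anomalies can be mapped to an owning team.

events = [
    {"type": "metric", "service": "checkout", "anomaly": True},
    {"type": "log",    "service": "checkout", "level": "error"},
    {"type": "trace",  "service": "search",   "anomaly": False},
]

owners = {"checkout": "payments-team", "search": "discovery-team"}

def route_alerts(events, owners):
    """Collect the teams that own services emitting anomalous signals."""
    to_notify = set()
    for e in events:
        if e.get("anomaly") or e.get("level") == "error":
            to_notify.add(owners[e["service"]])
    return to_notify

print(route_alerts(events, owners))  # {'payments-team'}
```

Without the shared tag, the metric, log, and trace would sit in three different tools, and connecting them back to one service (and one team) becomes the manual work the speaker describes.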

The real value is all those people who were responding to those incidents that took hours and now take minutes; those are sometimes the highest-paid or smartest engineers at your company. Rather than building value for the company and moving the bank forward into the next generation of banking software, they were spending time in incidents. Now, they can spend that time adding value for customers. That said, ripping and replacing load-bearing technologies that have been there for 10 years across 5,000 engineers is not easy for a lot of organizations, right? Either it's a mandate, as it was at that bank, or, to address some of this complexity and scale, there's a new movement we're seeing that people are calling user-first monitoring.

There's so much complexity on the back end, so many teams to coordinate, so much politics. Can we just start with the user experience and work our way backwards, right? What are the most load-bearing, critical user journeys that we need visibility into, and can we traverse the organization that way? The crux of your business really is your application and your front-end stack. That is what your developers are typically making, right? And then they scale it out with infrastructure and network. We have a suite of products for this solution; in the industry, the space is called Digital Experience Monitoring. But really, it is: What is the user experience? How are users impacted by the changes my engineers are introducing into production? Are they making things better or worse?

In some ways, this is accountability software. Are my engineers, who I'm paying, who are there to bring value to customers, making things better or worse for the end user? We see a big push toward enabling engineers to have direct visibility into the business impact of their changes. This is one of our fastest-growing areas. We see that when APM, which is the back-end instrumentation, and the front-end user experience are stitched together, it's a story that works, and it spreads through organizations much faster. We see that when these things come together, we provide a lot more value, which turns into more revenue for the business. And this isn't unique to the digital experience and APM bundling.

As customers grow with us, they find more value. Land and expand does work, and more products equals more value, spreading to more parts of the company until they have that consolidated approach. Why? Because this is better for the human responders, right? Bringing more people into the conversation to troubleshoot faster together is meaningful and does increase the value. But an advantage of this single platform, with its automatically correlated and summarized data, is that it's better for our AI agents, too. We've found that when AI has the full context of the entire stack, already correlated for humans, it also operates much better on that data. It can act faster, right?

What might have taken a team of people many hours to solve before, and maybe one hour after they adopt Datadog, the AI can come to the same conclusion on in minutes, and we'll hear more about that soon. And it can traverse... This is, again, one application. It can traverse all the applications, and it's working 24/7, right? The advantage here is that you can't separate the AI from the underlying data formats and context. There's a lot of work you can do to make the AI more efficient, faster, and, more importantly, more accurate. So Datadog brings equilibrium into that DevOps life cycle, so developers can develop as fast as they can. What are they doing? They're introducing change into production.

That's their job, to introduce new things, but production has to adapt to that quickly and make sure it's done safely and securely. At Datadog, we benefit from this ourselves. The reason we're able to ship so fast is because we use Datadog, so we are the embodiment of that DevOps life cycle, and we do have that equilibrium. This last year, we announced and released hundreds of features into production, right? These aren't ideas. They're actually out there in customers' hands, iterating with customers, proving value. Now, as has been said already, there's a sea change, right? When I walk around the Datadog office, I see a lot more engineers coding on their phones, having multiple windows open, directing lots of agents, and writing code 24/7. They say it's addictive, right? And I see it only accelerating.

We've got one guy who's coding with his glasses, right? They're sending him prompts, and he's saying, "Looks good. No, change this," right? So this is a real movement in the industry right now. But what does it do? We don't hire developers just to write code. So 100x more code might not actually translate to 100x more value. Are you really gonna release 100x more features? Are you gonna release one feature that's super sophisticated and can do a lot more? How do we verify that this 100x code is good, that it works, and that it's good product? There's a difference between code and product. So the bottleneck is no longer coding, as Olivier was talking about, but bringing value to customers, maybe at 100x.

This is where Datadog has a unique opportunity to help the AI movement forward. Developers love to make changes and bring things into production, but they break things. The number one cause of incidents is faulty code, right? And that's when humans are reviewing it and taking painstaking efforts to make sure it's good. Datadog has automated, integrated code testing. We also have the best understanding of how production works: how that code is deployed, how it's connected to other parts of the application, what data is flowing through that code at all times. So we can actually inform the entire life cycle here, whether it's humans or agents. With AI coding, we can say about the code being written right now, "Hey, this change that you're gonna propose into production is going to increase latency.

It's gonna make things slower, because look at all these requests coming in right now," and feed that into the agent, so the agent can make better decisions, or so the person who's hand-coding can make better decisions and write better, more efficient, more secure code that we already know is gonna work in production. Now, as you're sending it out, right, there's a risk mitigation strategy that already exists. You write your code, you write unit tests, you run those tests locally; there are Git hooks in there that run all your tests locally. Then you push your change to a centralized source repository, CI/CD runs a bunch of tests, and those tests sometimes pass and sometimes fail, and you have to kick it, and it goes again.

Then it lands in a staging environment where people kind of vibe-check it, you know, type a few things: "Looks okay. Let's push it into production." If you're more sophisticated, you've got feature flags, which means you're gonna slowly roll it out, region by region: 5% hit it, 10% hit it, 15% hit it. And what are you doing? If you've got Datadog, at least you've got monitors there to tell you if things are going off. If you don't, usually you're just praying and waiting to see if requests are gonna come in. We can actually do much better than that. We have automated testing, and we just launched our Feature Flags product, which is unique in the domain in that we can actually contain or sandbox that change.

When it goes to production, we can attach metrics to it automatically, and as it starts to roll out, we can statistically tell you, "This is going to be better, this is going to be worse. Roll it back, or go straight to 100%." Right? That's Feature Flags and Experiments. I could talk about any of these things, but the gist is, we can automate much of this. Just as AI is pushing more code, our AI, and the manual tooling we're enabling for customers, which the AI will use, can accelerate this path into production. So rather than it becoming a bottleneck, or a super-high risk that breaks things all the time, we can reestablish that equilibrium even at the pace of AI. So that's... My throat's getting dry, but I'm almost through it.
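The ramp-with-guardrails flow described above can be sketched roughly like this. The bucketing scheme, error-rate tolerance, and function names are hypothetical assumptions for illustration, not Datadog's Feature Flags implementation.

```python
import hashlib

# Sketch of a progressive feature-flag rollout with a metric guardrail:
# traffic ramps 5% -> 10% -> ..., and the change is rolled back if the
# flagged cohort's error rate is worse than baseline.

def in_rollout(user_id: str, percent: int) -> bool:
    """Deterministically bucket users (stable hash) so ramps don't flap."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < percent

def guardrail(baseline_error_rate: float, flagged_error_rate: float,
              tolerance: float = 0.01) -> str:
    """Advance the rollout only if the flagged cohort isn't clearly worse."""
    if flagged_error_rate > baseline_error_rate + tolerance:
        return "rollback"
    return "advance"

# A tiny ramp within tolerance advances; a regression rolls back.
assert guardrail(0.02, 0.021) == "advance"
assert guardrail(0.02, 0.08) == "rollback"
```

A production system would replace the fixed tolerance with a proper statistical test over the attached metrics, which is the "statistically tell you" part of the quote above.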

So the code gen that most people are doing right now is just building existing kinds of applications, but there's some exciting stuff too. Technology does empower the creative individual, right? Somebody last night told me that from the time they got a white paper for a new database, and this person works at a database company, so a research paper, they vibed the idea out into a working prototype. They thought they were so smart and novel, and it turned out they were the third one on GitHub to have put it up. And this was hours after the white paper was published, right? So this is what I mean by speed. Well, everybody's trying to take advantage of this.

They're trying to build more intelligent applications, and they're also trying to see if those will be safe in their environment. That means that we, as Datadog, have to continue to build new types of products, new types of observability, new types of security, to be ahead of the curve when people need to start bringing these things into production. One of those things, when I talk to CEOs, CTOs, and CIOs, one of their big concerns right now, is the proliferation of agents in their company. What are these agents, right? Who made them? Why are they there? How much do they cost? How are they permissioned? What are they supposed to be doing? Are they doing a good job at that, right? Am I getting value? I see five of these agents that are supposed to be doing the same thing.

Should we choose one of them, right? So the AI agent console gives leadership, and the teams who are building these things, a way to track all of these questions about the agents they're bringing into their environment. Now, those who are building these agents, or trying to enable their products with more intelligence, are also stuck. Unless you're at a leading research lab or frontier model lab, you probably don't know where to get started, and there's a lot of information out there. The nice thing about our AI observability product is that it comes with an out-of-the-box framework. If you're already an expert, you'll be familiar with all these tools. If not, just by using the product, you get an on-ramp to thinking about non-deterministic applications.

I need to evaluate these things for basic competencies before I send them out into the world. Once they're in the world, I can experiment to see what is good behavior, what is bad behavior, and which direction things are trending. I can try different models, et cetera. I can sandbox them. I have playgrounds, all the stuff that these researchers and AI engineers need. So the momentum on this is growing. There's been a lot of investment and learning over the last 24 months or so, but we're starting to see these things try to get into production, and who helps people get things into production? Datadog. That's why these products are growing with us now. And the last thing I'll say here is, we're only getting started, right?
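As a sketch of what "evaluating for basic competencies" can look like, here is a minimal, hypothetical offline eval harness: a fixed eval set is run through the agent and scored before anything ships. The agent below is a canned stub standing in for an LLM-backed agent; all names are illustrative.

```python
# Minimal eval harness sketch: fixed cases in, pass rate out.

def toy_agent(question: str) -> str:
    """Stand-in for an LLM-backed agent (canned answers, no model call)."""
    canned = {"2+2?": "4", "capital of France?": "Paris"}
    return canned.get(question, "I don't know")

eval_set = [
    {"input": "2+2?", "expected": "4"},
    {"input": "capital of France?", "expected": "Paris"},
    {"input": "capital of Spain?", "expected": "Madrid"},
]

def run_evals(agent, eval_set):
    """Score the agent against the eval set; returns the pass rate."""
    passed = sum(agent(case["input"]) == case["expected"] for case in eval_set)
    return passed / len(eval_set)

score = run_evals(toy_agent, eval_set)
print(f"pass rate: {score:.0%}")  # pass rate: 67%
```

Tracking this pass rate across model versions is the simplest form of the "trending which direction" experimentation described above; real eval frameworks add graded (non-exact-match) scoring on top of the same loop.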

Datadog is perfectly situated to help this movement figure out what it is and what it will become. To talk more about the exciting things we're doing with AI, here is our Chief Product Officer, Yanbing Li.

Yanbing Li
CPO, Datadog

Thank you, Michael. How's everyone doing? You're getting to the last stretch of our first half. My name is Yanbing Li. I joined Datadog as Chief Product Officer about 18 months ago. Before Datadog, I spent time at Google, responsible for the observability function powering Google's planetary-scale infrastructure and services. When I went to speak to a senior SRE leader there to get some candid feedback, this is what they told me. They simply said, "Go look at Datadog." And this was back in 2019, before Datadog was even a public company. After Google, I got to lead engineering and product at an autonomous trucking company, Aurora Innovation, actually the very first company to operate commercial driverless trucks on U.S. public roads.

At Aurora, I got to learn firsthand what it means to ship safety-critical autonomy products into production and at scale. So this is why I'm excited to be here at Datadog: to help our customers ship faster without breaking things and operate reliably and safely, all while navigating the increased complexity of AI. Let me circle back to the DevOps loop that Oli showed earlier. This is the reality that our customers' DevOps teams live through every day. They need to detect issues as they emerge. They investigate to find the root cause and the next action, and they take action to remediate back to health. Because systems are always changing, with new code, new traffic, new dependencies, this loop just doesn't stop.

So what happens when a major incident hits a production system? Our largest customers often tell us they need to mobilize tens or even hundreds of engineers, because those engineers bring different knowledge, different data, different tools, and different system boundaries. And most of those teams not only have a partial view; they're motivated by proving it's not their problem rather than finding the problem. The area under this curve represents the time and resources that are part of this operating expense. And certainly, incidents, as we all know, are very expensive to business outcomes, with lost revenue, lost customer trust, and reputational risk. This is a structural inefficiency we're trying to solve at Datadog. Datadog is in the business of keeping this DevOps loop healthy and running for our customers.

When there is production stress, we detect issues, we help coordinate the customer's teams, get the right team involved to investigate, and take action to remediate the system back to health. The previous speakers have already talked about how Datadog's unified, end-to-end observability platform can shorten incident response with fewer people and less time, closing this loop faster. The result can look something like this: with faster detection and the right team involved with the right information, they're solving incidents much faster. And we all know that when you have an incident, time is money. Let me take you through this with a concrete customer example. This is a major U.S. insurance company that's been a customer for five years. Before Datadog, they experienced thousands of severe incidents every year.

With Datadog, after they standardized on our core pillar products, they began detecting and fixing issues proactively and preemptively, before they became real production escalations, and they have seen a 10x reduction in their severe incident count. When they have fewer incidents and solve them faster, there is a significant boost to their engineering productivity: they're saving the equivalent of about 70 employee-years, which translates to $11 million every year. So what does this mean from their business point of view? Again, with fewer incidents and faster resolution, they are seeing a whopping 20x reduction in the customer impact caused by these incidents. This is the kind of value we've been providing to our customers with our unified platform. Now, by applying AI, we're taking that to the next level.

We've launched a fleet of Bits AI agents, yes, Bits with those futuristic sunglasses. We're helping our customers autonomously detect, decide, and take action so that we're closing this loop even faster. Let me give you a few examples of the Bits AI agents, starting with the SRE agent. You've heard about this several times throughout this presentation. So why SRE? By now, the world has recognized that the future of coding is AI coding, but a lot of our customers are still struggling to measure and establish the real ROI. We picked SRE as our first agentic effort not only because SREs are our primary user persona, but also because when an incident happens, it's often acute and high-stakes.

And better yet, the verifiable results of what Bits AI SRE can do are very obvious to our customers. In fact, many of our customers, to test Bits AI SRE, simply replay all of their previous serious incidents and see if Bits AI can get them right. Obviously, the business outcome is also very tangible when you can reduce incidents. Because of this verifiable nature, our customers are really excited about what Bits AI SRE can do for them. So how does it work? Don't worry, this is not an eye test. Let's focus on the left-hand side. When an alert triggers, Bits AI autonomously investigates the issue. It first gathers all the necessary data and the relevant context.

It can then reason like a group of engineers in different parts of the system, establishing multiple hypotheses of what happened and investigating all of those hypotheses in parallel. The right-hand side is intended to show you how that parallel work unfolds, and it's explained very visually to our users. It then identifies the root cause and can even propose the next step of action based on the customer's runbook. And better yet, Bits AI can learn, and it's getting better with every investigation. The superpower of Bits AI SRE also comes from its holistic understanding of our customer's entire environment: systems and applications, users and teams, and even business processes. So it doesn't just leverage the rich, real-time observability context and telemetry inside the Datadog platform.

It broadly integrates with third-party knowledge sources and third-party telemetry, so our customers can really get the full picture of what's happening in their system. Even though we're still in the early days with Bits AI SRE, we are already getting a lot of positive feedback from our customers. Here you see two examples of customers telling us how Bits AI SRE can accelerate their incident resolution and how it acts like an experienced engineer to help them understand their complex systems. And when it matters most, our customers are also telling us, Bits AI SRE gets the job done.
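The multi-hypothesis investigation pattern described above can be sketched roughly as follows. This is a minimal, hypothetical illustration only: the hypothesis names, stubbed context data, and keyword scoring are invented for the example and are not Datadog's actual implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def gather_context(alert):
    # In a real system this would pull logs, traces, metrics, and recent
    # deploy events; here it is stubbed with illustrative data.
    return {
        "alert": alert,
        "recent_deploys": ["checkout-service v2.4.1"],
        "error_logs": [
            "TimeoutError: upstream payment-gateway",
            "Retry exhausted: payment-gateway",
        ],
    }

def investigate(hypothesis, context):
    # Score a hypothesis by how much of the gathered evidence mentions it.
    evidence = context["recent_deploys"] + context["error_logs"]
    score = sum(hypothesis["keyword"] in line for line in evidence)
    return {**hypothesis, "score": score}

def run_investigation(alert, hypotheses):
    context = gather_context(alert)
    # Investigate every hypothesis in parallel, as the talk describes.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda h: investigate(h, context), hypotheses))
    # The highest-scoring hypothesis becomes the proposed root cause.
    return max(results, key=lambda r: r["score"])

hypotheses = [
    {"name": "bad deploy", "keyword": "checkout-service"},
    {"name": "dependency outage", "keyword": "payment-gateway"},
    {"name": "traffic spike", "keyword": "throttled"},
]
root_cause = run_investigation("High error rate on /checkout", hypotheses)
print(root_cause["name"])  # → dependency outage
```

The point of the sketch is the shape of the loop, not the scoring: gather context once, fan out across hypotheses concurrently, then rank the results.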

If you remember the major AWS outage last October, we had many customers reach out to us saying that when the outage happened, Bits AI SRE was able to autonomously root-cause the outage before they were notified by AWS themselves. So even though Bits AI is an AI agent, it actually gets a lot of love notes from our customers, as you can see on the screen here, and not just because this is Valentine's week. We hear this from customers all the time: how pleasantly surprised they are at how smart Bits AI is, how it gets to the root cause, how it's saving them time, how it's boosting their productivity. And the best indication of that is actual usage. For a new product, we look at usage metrics very, very closely.

Since the Bits AI SRE launch, our customers have run well over 100,000 investigations, and since our GA last December, this rate is increasing and accelerating. In January alone, more than 2,000 customers ran investigations with Bits AI. Okay, let me switch gears to talk about how we're using AI in some of our other use cases and products, starting with a security example. We have our Bits AI Security Analyst in preview. This agent can autonomously investigate Datadog's Cloud SIEM signals, conduct in-depth investigations of potential threats, and enable users to remediate those threats, all in the Datadog user interface. I think a better way to explain this is a real example.

A major financial services company was testing Bits AI Security Analyst for the first time, and it correctly identified a live, serious security threat. The situation: a compromised automation system in their environment changed their cloud firewall settings so that they were open to the entire internet, with some sensitive management ports exposed. Does this sound serious? Yeah. Without Bits AI, the investigation would go like this: they might receive some security signals, those go into a queue, and humans investigate them one by one. It could take hours for them to come to this realization. The Security Analyst agent, by contrast, investigates in parallel, and within minutes we were able to surface this severe threat to the customer.

Obviously, the customer became true believers in this technology afterwards. And this is just one example of how Bits AI Security Analyst can truly transform how security teams investigate and resolve security incidents. Let me give another example, because Bits AI SRE and Security Analyst are doing investigation, trying to understand what's happening in the environment. What about autonomous remediation? This is why we introduced the Dev Agent. The Dev Agent can automatically analyze telemetry and code when an error happens in our customer's system. It can explain the root cause in plain human language, map it directly to the relevant code files and functions, and then proceed to generate a context-aware fix.

And context-aware is important here, because this fix is generated based on the real production context that Datadog uniquely brings to the problem. We can then test the fix in an isolated sandbox, so you have high confidence it's ready to push to production. And all of this can happen without a developer even logging in. Of course, the tool can also interact with the developer: when they do log in, they can review the code, ask questions, and merge the PR. So you may be wondering, is this yet another AI coding agent, when there are already so many on the market? The answer is no, because...

This Dev Agent is deeply integrated within our DevOps loop to truly bring the full production context and help create a better PR. It's also very proactive. A lot of our customers are pleasantly surprised to receive a Slack message from Bits informing them there is a high-severity error, and that Bits has already fixed it for them with a PR that's ready to be merged. So I've shared three Bits AI agent examples, and the important thing is that with AI, how our users and engineers interact with their tools is also radically changing. The good news is that our customers can use Bits AI from anywhere that fits into their workflow, whether it's the Datadog UI, collaboration tools, their favorite IDE, or invocation by another AI agent.
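The detect-fix-verify-PR flow just described can be sketched as a short pipeline. Every function name here is hypothetical and stands in for heavier machinery (source indexing, an LLM drafting the patch, a CI sandbox); this is an illustration of the workflow shape, not the Dev Agent's actual API.

```python
def map_error_to_code(error):
    # A real agent would walk stack traces against a source index;
    # stubbed here to echo the fields already on the error event.
    return {"file": error["file"], "function": error["function"]}

def generate_fix(location, error):
    # Stand-in for an LLM call that drafts a context-aware patch.
    return f"guard against None in {location['function']} ({error['type']})"

def sandbox_test(fix):
    # Run the candidate patch against tests in isolation; always passes here.
    return True

def remediate(error):
    location = map_error_to_code(error)
    fix = generate_fix(location, error)
    if not sandbox_test(fix):
        return {"status": "needs-human", "fix": fix}
    # Open a PR for human review rather than pushing straight to production.
    return {"status": "pr-ready", "fix": fix}

result = remediate({
    "type": "NoneTypeError",
    "file": "billing/invoice.py",
    "function": "apply_discount",
})
print(result["status"])  # → pr-ready
```

The design point the transcript emphasizes is the last step: even when the sandbox passes, the output is a reviewable PR, keeping a human approval gate in the loop.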

All of these interfaces are also enabled by the Datadog MCP Server. The Datadog MCP Server enables our customers and their AI agents to access Datadog's AI-driven observability context directly from their existing workflows. As you can see on this chart, since the launch of our MCP Server we've seen exponential adoption and growth, and many customers are integrating it into their existing workflows. They're also building custom AI agents, building agentic workflows to help them with incident investigation, performance optimization, and many other use cases. So with Bits AI and MCP, incident resolution can now look like this.

We can help our customers narrow down to the root cause and take action within minutes, with very few people involved, as opposed to tens or hundreds of engineers working tirelessly over hours and days. Better yet, the Bits AI agents can easily work alongside human SREs, security analysts, and developers to make them far more productive. So we're closing the loop even faster. That's the value we're bringing to customers. I've just spent the past 10 minutes or so taking you through how Datadog is solving the structural inefficiency in the DevOps loop by providing an end-to-end, unified observability platform, now turbocharged with AI, so that we're helping our customers close this loop very, very rapidly.
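The MCP integration mentioned a moment ago uses JSON-RPC-style messages (MCP defines a `tools/call` method for tool invocation). Below is a minimal, hypothetical sketch of a server handling one such call; the `search_logs` tool and its stubbed behavior are invented for illustration, and a real integration would go through the Datadog MCP Server with an MCP client library rather than raw JSON handling.

```python
import json

# Hypothetical tool registry: one stubbed "search_logs" tool that would,
# in a real server, query an observability backend.
TOOLS = {
    "search_logs": lambda query: [f"log line matching {query!r}"],
}

def handle_mcp_request(raw):
    # MCP messages carry a method name plus parameters, JSON-RPC style.
    req = json.loads(raw)
    tool = TOOLS[req["params"]["name"]]
    result = tool(req["params"]["arguments"]["query"])
    return json.dumps({"id": req["id"], "result": result})

# An external agent invoking the tool from its own workflow:
response = handle_mcp_request(json.dumps({
    "id": 1,
    "method": "tools/call",
    "params": {"name": "search_logs", "arguments": {"query": "checkout 5xx"}},
}))
```

The appeal of this shape is that any MCP-capable agent or IDE can call the same tool surface without a bespoke integration per client.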

And as Michael and Olivier alluded to earlier, we've also been shifting to the loop on the left-hand side, the pre-production environment, to help our customers ship better software into production. Our long-term vision, as Olivier outlined, is to achieve autonomy across development, operations, and security. That will require us to help our customers validate their systems, their applications, and their AI agents. In addition to helping them ship production-ready code faster and prevent incidents from occurring at all, we have to help them maintain safety and security to achieve true alignment of their AI applications with their intent and business outcomes, and in the meantime give them control and feedback to help them improve. I am personally very excited about this vision.

It is special, having built autonomy for trucks, to now be building autonomy for development, ops, and security. At Datadog, we are very excited about this vision, and our customers need us to bring it together now more than ever. With that, thank you, everyone, and I will hand it over to Yuka for the Q&A. Thank you.

Yuka Broderick
Head of Investor Relations, Datadog

Okay, thank you, Yanbing. We are going to start a Q&A session now. Joining me on stage are all the presenters you just heard from; their names are up on the screen. We'll be taking questions from the in-person audience. I have two of my colleagues with mics, Megan on your right and Eric on your left, and we'll alternate between them. So please raise your hands and wait for them to get to you, so we can all hear your question. We're going to start on Megan's side.

Kirk Materne
Senior Managing Director, Evercore ISI

Thanks very much. It's Kirk Materne with Evercore ISI. Tim, I was wondering if you could talk a little bit about the silo tax that you brought up. One of the reasons there have always been silos is buying silos, meaning SecOps, DevOps, and ITOps have had different budgets. I was curious, as we head into a world where you need more telemetry across all those areas, are you seeing those budgets collapse into one? And if not, how do you make sure that you're talking to the right person to get more visibility on the security side? Thanks.

Tim Knudsen
Head of Security Products, Datadog

Great question. I think we're seeing two things right now. Number one is the recognition that we can make security easier and better with unification, consolidating on a single platform. It obviates the need to have separate budgets for those items, because customers can get it all through the single platform. So that's one item. The second point is that there's also a broader recognition, and in fact we do this ourselves at Datadog, that having security and SRE in the same organization has a lot of benefits, both for reliability and for security. Teams working together, in our case using our own platform and toolsets, are able to work much more effectively.

I think that will probably emerge as a growing trend across organizations over time, as they see the need to be more effective.

Olivier Pomel
CEO, Datadog

Just to add to that: it's not entirely about the separation between buyers. If you think about the impact of coding agents, there's going to be much less separation between the roles. As we build a lot more, a lot faster, roles that used to be separate, such as how you're building something, whether you're building the right thing, whether you're building it in a secure way, and how you operate it, all get merged together quite a bit as the coding itself is left to the agents.

Yuka Broderick
Head of Investor Relations, Datadog

Great. Thank you. Eric's side?

Alex Zukin
Managing Director, Wolfe Research

Hey, guys. Alex Zukin from Wolfe Research. Thank you so much for the presentation today. I particularly wanted to ask about the SRE agent. Given the increasingly heterogeneous environments that are increasing the magnitude, scale, and complexity here, when you think about the ability to read from data sources outside the Datadog perimeter and to compile and complete tasks across that information, can you talk a little bit about how you differentiate in that context versus other folks building there? And then maybe some pricing, and even a customer example of who's at the frontier showcasing this.

Yanbing Li
CPO, Datadog

Great question. When we started on Bits AI SRE, we focused on the data and telemetry within Datadog, because the most important thing for an AI agent is to show that it can get it right and demonstrate value to the customer. When we're within the platform, we have real-time, rich, clean data that allows us to really showcase that value. Obviously, a lot of our customers do have heterogeneous environments, so now we're expanding our telemetry to cover those outside data sources. Yeah.

We also see that there are a lot of other AI SRE startups and companies out there that take an approach of looking in from the outside. The way we are uniquely able to bring the power of Datadog while also integrating with those external data sources has shown that we can simply generate better outcomes and results.

Olivier Pomel
CEO, Datadog

We'll be more right with the data we have, and for which we have more, I would say, higher resolution-

Yanbing Li
CPO, Datadog

Yeah

Olivier Pomel
CEO, Datadog

... more reach, et cetera. So if we integrate the whole stack, we can be more right, which is also why that's where we started. But I will say, when you look at where the market is today, the market is very active. The market right now is: you have an issue, and you explain it. And to be honest, you can get a pretty good result if you ask Claude Code to do that. It can ask a number of different systems. For that particular thing, I think it's okay.

For where the market is going, which is that you want to be preemptive and proactive, and you want to prevent issues, that just doesn't work at all, because the data doesn't flow through Claude Code and through all of those different systems. So here's the analogy, back to autonomy in self-driving. After a crash today, you can send the pictures to ChatGPT, and it will help tell you who was right or wrong, but the crash happened. ChatGPT is not going to drive the car; you have a separate brain, a separate everything. And I think the same is going to happen with observability.

As we get all of the data, as we control the data plane for it, as we can develop and run models live on all that data, we'll be in a position to get in front of issues and prevent them.

Yanbing Li
CPO, Datadog

Yeah, and in terms of customer adoption, as I mentioned, we have 2,000 customers. The product is still fairly new to general availability, so we're in the process of making sure customers will let us use their names. But what I can share is that our 2,000 customers are widely represented across all kinds of segments, verticals, and geographies, from the largest Fortune 100 companies to the most innovative AI startups. So there is fairly broad adoption of this technology across the board, and this is why I'm personally excited: SRE is a very strong, well-fitted use case for AI, because the result is instantaneous and verifiable. This is why we're seeing such a rapid increase in adoption.

Yuka Broderick
Head of Investor Relations, Datadog

Then finally, I'll just mention that our pricing is completely transparent. You can go to our corporate website, and I believe that Bits AI SRE is $500 per 20 investigations, right, Kai? You can check all of that out yourself whenever you want. All right, Megan's side? Great.

Keith Bachman
Analyst, BMO

Hi, it's Keith Bachman from BMO. Tim, I also wanted to direct this to you: how do you think about the boundaries within security in terms of where you want to be with portfolio expansion? And talk about some areas of interest, or where you don't want to be. The reason I bring it up is that, per your slide deck, security is still a relatively small part of Datadog's ARR, and some of your competitors have much broader portfolios and are playing a consolidation game. In some measure, success begets success, so I'm wondering how you think about portfolio expansion to get deeper penetration within your existing customers. Thank you.

Tim Knudsen
Head of Security Products, Datadog

Yeah, it's something I think about daily, hourly, minute by minute. One of the things that leads our thinking, obviously, is where there's existing mature spend. We believe, as you saw in my overview of the portfolio, that we're well positioned to go after areas of established budget and established spend. Those areas also have a lot of established competitors. But then again, as you probably heard me say repeatedly in my piece, I think we have an advantage because of the unification in the platform, which really pays off when it comes to incident response, or just being more proactive with security overall. So in the markets we're in right now, we see a lot of runway.

We see the advantage we can bring for differentiation, and we also see a strong ability to pivot off of our existing relationships to get into those security conversations. Now, even within those areas, with AI and agentic systems there are going to be new areas and new services that we'll be looking at, because it's a new set of problems for security teams and for the enterprise as a whole. Outside of that, we'll obviously look at where it makes sense to expand, where we can again bring the advantage we have with our platform.

Yuka Broderick
Head of Investor Relations, Datadog

Okay, great. Eric's side.

Ittai Kidron
Senior Equity Research Analyst, Oppenheimer

Thanks. Ittai Kidron from Oppenheimer, and my question is for you, Alexis. Your presentation was quite interesting, and thanks for making the case that frontier models can't scale or do what you do without becoming outrageously costly or ineffective in the accuracy of the results they deliver. When I look at your business, you talked about the massive feature set you have, the domain expertise, and your ability to take a small model and make much better use of it in your business.

If I try to flip it upside down a little bit, the question is: if it takes $750,000 to build a small model that delivers much better accuracy, how do you think about the barrier to entry from third parties into your business? How do you think about the risks? I mean, the opportunities are clear, the revenue is clear. What, in your view, are the risks of AI to your business?

Alexis Lê-Quôc
Co-Founder and CTO, Datadog

Yeah, I think this is where the data advantage is clear. In the case of our time-series model, we just have a volume of legitimate, real data that's simply not publicly available, so that's one edge. In the case of training the agents, it is both the volume and the quality of the evals we can build, and have built, that differentiate us. There are obviously other companies trying to build small models.

I don't think there's a clear financial barrier to entry, but the quality of the data you get, that's the real moat. One of the issues we see is that generating synthetic data in our domain is not terribly easy. It's not like you can sample what's out there, text or images, and remix it to create something plausible. The relationship between the way software is built and its observed behavior is much more intricate. So you can't just go to one of the providers and say, "Okay, I want synthetic data.

I want tons of it, so I can train my small models." So that's where we retain an advantage. I think the other piece is what Olivier already alluded to, which is: look, we just sit on that data. We get it for free, as it were, for training purposes, because it's used by our customers day to day, and someone else just isn't sitting in that flow. They would have to acquire that flow somehow, and they don't have the scale to do it. So I see it as a positive flywheel: we get more and more data, we can get the SRE agent to generate more and more evaluations, which hopefully makes the data more and more valuable. It's really accretive in that sense.

Olivier Pomel
CEO, Datadog

And remember, we spent $1 billion on R&D last year, maybe between $700 million and $800 million the year before, and another $500 million the year before that. Spending all this money is why it costs us $750K to train the model, you know?

Yuka Broderick
Head of Investor Relations, Datadog

Great. Megan's side.

Sanjit Singh
Executive Director, Morgan Stanley

Yes, Sanjit Singh from Morgan Stanley. Thank you for taking the question. I wanted to get the team's view on how fast we are racing toward this vision of autonomous operations. What does this look like a year from now? What does it look like three years from now? And in terms of executing on that vision, are there other pieces of the stack that Datadog needs to own? Do you need to own the software delivery pipeline to really execute on this? That gets into a build-versus-buy question. So, on the race to autonomous operations, what does that look like 12 months from now and over the next couple of years?

Olivier Pomel
CEO, Datadog

Yeah, I think it's really hard to tell. If there's one thing we can tell with AI, it's that the rate of progress is very surprising. You get these big jumps of capability, then it looks like things are stagnating a little bit, and then, just when you're ready to write it off, it starts jumping again. We had a jump recently, just two months ago, in the quality of coding agents, for example. That made a very notable difference, at least in our internal usage and in what we can see from our customers. So it's hard to tell whether we get there in a year or in three years, but what's pretty clear is that we're going to get there.

The problems are getting solved one by one. The technical approaches look like they're working, so we're going to get there. In terms of the moving pieces, we had identified a few areas where we thought we needed to move faster. One of them is feature flagging and experimentation. We were not all that interested in feature flagging and experimentation a few years ago, because we thought feature flagging was a bit of a commodity in itself, and we thought experimentation was more surface-level, related to A/B testing button colors and things like that, which is interesting but was not our core business.

I think we've changed our minds quite a bit on the topic as we understood that this would be a key part of automatically shipping and iterating on software, so that customers can really make use of the productivity gains they get from AI agents on the coding side. Another example is data observability. We thought this was an interesting market too, but really ancillary to what we're doing. Now that data quality and timeliness has become one of the key limiting factors for building and deploying AI models, it has also risen to the top of the list for us. There are a few other areas we're thinking about, but I'm not going to tell you about those.

Yanbing Li
CPO, Datadog

Can I add a few comments? What I've learned about autonomy for development, ops, and security is that it's not a zero-to-one game. This is very different from putting trucks on the highway without a driver, which is absolutely zero to one; there is nothing in between. What we're seeing is that adoption of this autonomous DevOps loop is a co-evolution of technology and human behavior. We started with the investigation piece. Initially, a lot of customers didn't trust the result of the investigation; they verified it. But as they use it more, they start to gain more trust. It's the same with pushing a code fix: most people are still only comfortable with, and absolutely require, a human in the loop.

But as more confidence gets built and the technology gets more mature, we expect customers to get more comfortable. And really, the holy grail is proactive, preemptive, predictive detection, because that can truly move the needle toward autonomous operation: we detect and fix issues before they even occur. So what we're trying to do is demonstrate tangible progress around this circle to increase customer trust while improving the coverage and robustness of the technology. I don't think autonomy is going to happen in a zero-to-one fashion. It's going to happen in a partnership between the evolution of the technology and the evolution of our customers' comfort, trust, and culture.

Yuka Broderick
Head of Investor Relations, Datadog

Great. Thank you. Eric's side?

Eric Heath
VP and Equity Research Analyst, KeyBanc Capital Markets

Over here. Thanks for doing this. Eric Heath at KeyBanc. I wanted to come back to Bring Your Own Cloud and understand that opportunity a little more. Can you talk about who the addressable customer base is for this, the timing of when you make this product more broadly available, and the go-to-market strategy around it? Thanks.

Yrieix Garnier
Product Leader, Datadog

Sure, I can start with that. To your question about timing, this is something we're already previewing, so we have customers actually leveraging that solution, Bring Your Own Cloud.

Olivier Pomel
CEO, Datadog

We have customers paying for it, you know, even-

Yrieix Garnier
Product Leader, Datadog

Yes.

Yanbing Li
CPO, Datadog

Yeah.

Yrieix Garnier
Product Leader, Datadog

Even customers-

Michael Whetten
SVP of Product, Datadog

For logs, it's GA.

Yrieix Garnier
Product Leader, Datadog

Paying for it. And the idea is to go to those markets where historically customers could not leverage the data. I was talking about data residency: some markets require the data to stay in country, so for those customers, it's interesting to have the technology inside their own infrastructure and be able to leverage it from there. Those will be some of the opportunities we extend into. Different geos can look at it, but also industries where you need more compliance and need to keep some of the data inside your environment. So it's pretty wide, from geos to different industries. We already have paying customers, and we're building more and more on BYOC, from logs to other types of telemetry.

Olivier Pomel
CEO, Datadog

There's another type of customer we're targeting with this, which is hyperscale customers. Certain customers want to make use of infrastructure they already have, or they want a licensing model that is more favorable to them than the SaaS model, and that is something we're addressing with BYOC as well. So we see some of these customers coming to us.

Michael Whetten
SVP of Product, Datadog

'Cause everybody wants that SaaS solution and feel for their users, right? But making it affordable when you have 30+ petabytes a day is tough over the wire, so they're already finding uses for it.

Yuka Broderick
Head of Investor Relations, Datadog

Megan Side.

Fatima Boolani
Managing Director and Co-Head US Software Equity Research, Citi

I think you have the mic. Fatima Boolani from Citi. Thank you so much for doing this. My question is around the Bits AI suite. I can appreciate that it's the gateway drug, so to speak, to the autonomous vision. But taking a step back and asking a more pointed question: you are all very excited about the Code Security that you can infuse right out of the gate. But Opus 4.6 and the Codex 5.3 iteration are absolutely coming out with relentless capabilities around code security inherently.

I'm wondering how you create a protection barrier to the value you're providing to customers, and where that competitive edge is vis-à-vis the type of Code Security, hygiene, and rigor you're providing with the context from your platform, versus the general-purpose LLMs, who could maybe have broader coverage around Code Security? Because, you know, to your observations, the coding assistants are only going parabolic. Thank you.

Michael Whetten
SVP of Product, Datadog

I think it's an advantage myself. Like, I don't think it's us versus them in any way. Having the LLMs be really good at creative thinking and ideating around these things is fantastic, but one advantage of our Code Security is that we can see how that code is deployed in production. So for SCA, for example, a scan might find that there are malicious packages or vulnerabilities in packages, but you don't know if that package is actually deployed in production. So you might pull a fire alarm and have everybody wake up, and then it's not even deployed in production at that version, right? It's not a real vulnerability. So I think these things can work in conjunction.

As we've said, we're using all these technologies in the appropriate ways to inform, but I think there's still something unique to bring to the table there, is my opinion.
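For illustration only, the runtime check described here can be sketched as a cross-reference between static scan findings and what is actually loaded in production. Everything below is hypothetical, the function name `reachable_findings`, the package names, and the CVE identifiers included; this is a sketch of the idea, not Datadog's implementation.

```python
# Hypothetical sketch: filter static SCA findings by what is actually
# loaded in production, so alerts fire only for reachable vulnerabilities.
# Package names, versions, and CVE IDs are made up for illustration.

def reachable_findings(sca_findings, runtime_packages):
    """Keep only findings whose (package, version) is loaded in production.

    sca_findings:     list of dicts from a static dependency scan
    runtime_packages: dict of {package: version} observed at runtime
    """
    reachable = []
    for finding in sca_findings:
        deployed = runtime_packages.get(finding["package"])
        if deployed == finding["version"]:
            reachable.append(finding)
    return reachable

findings = [
    {"package": "libfoo", "version": "1.2.0", "cve": "CVE-XXXX-0001"},
    {"package": "libbar", "version": "3.1.4", "cve": "CVE-XXXX-0002"},
]
# libfoo was upgraded before deploy, so its finding is noise, not an alarm.
runtime = {"libfoo": "1.3.0", "libbar": "3.1.4"}

print([f["cve"] for f in reachable_findings(findings, runtime)])
```

Only the libbar finding survives the runtime filter; the libfoo alert would have woken people up for a version that was never deployed.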

Tim Knudsen
Head of Security Products, Datadog

Yeah, I don't think we're going to see the end of the need for defense in depth.

Michael Whetten
SVP of Product, Datadog

Yeah.

Tim Knudsen
Head of Security Products, Datadog

But clearly, we should think about and understand how far to the left we can now solve a lot of security problems with coding agents. But to Michael's point, there's a lot of complexity in these production runtime environments, and that's not going to go away. There's always going to be this need to understand, for something that has been found, say a vulnerability: is it not only being loaded, but actually being executed? That's the area we'll continue to focus on, even with the advent and parabolic adoption of coding agents.

Olivier Pomel
CEO, Datadog

Yeah, it's not either/or, right? It's and, and, and again. There are a few examples of ways in which this breaks down. Just because the code was secure at the time you wrote it doesn't mean it's still secure two weeks from now, so there's something that needs to be re-evaluated permanently. There are some things that Claude might think are secure, but that you, as a company, decide are not; you might have your own rules, your own take on everything. All that to say, there's going to be a lot of room for a lot of specialized tooling that complements the general-purpose coding agents.

But definitely, you know, one, that tooling might use some of the same models as these agents, and two, you know, the agents are here to stay, and they're going to do more and more. So it's going to be a question of working with them and complementing them, not trying to replace them.

Michael Whetten
SVP of Product, Datadog

Can we produce a good product, right? I'm sure people told you, "Why would you build a monitoring company for cloud software when the clouds would probably do it, right?"

Olivier Pomel
CEO, Datadog

Yeah.

Michael Whetten
SVP of Product, Datadog

Here we are.

Olivier Pomel
CEO, Datadog

How'd that be for you?

Yuka Broderick
Head of Investor Relations, Datadog

All right. Ryan?

Ryan Mac
Executive Director & Senior Equity Analyst, Wells Fargo

Hey, Ryan Mac, Wells Fargo. It might be early, but love to hear about the differences in monitoring an AI agent workflow versus monitoring a normal SaaS application. Do AI agent workflows require more data intensity and more logs that are required to monitor, and maybe more observability across a wider surface area? Love to hear what you're seeing so far.

Michael Whetten
SVP of Product, Datadog

There's a lot of recursion, and uncertainty in what the agents are doing, and it's changing a lot. Even internally, we're always experimenting with different methods. So it's a very volatile area, and it does require some specialized tooling. Also, the two fundamental testing dimensions of quality assurance, verification that it's built right and validation that it's the right product, are a little tougher when you don't know what it's going to do, and then you put it out there and see how people are engaging with it and what it's doing. It requires a different feedback loop than when you can write deterministic software and test it in pretty predictable ways. So I think it does require some new things.

That's why we have playgrounds, and sandboxes, and experimentation, and why experimentation became really important for the major research labs and foundation model providers. All of them are big experimentation users, because they don't know exactly what it's going to do in production.
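As a rough sketch of why agent workflows need different tooling: each recursive step can be wrapped in a span so the non-deterministic call tree is at least observable after the fact. The toy tracer below is a stand-in for a real tracing SDK, and the `agent` function is purely illustrative, not any real product's code.

```python
# Minimal sketch (not Datadog's implementation): wrap each agent step in a
# span so a recursive, non-deterministic workflow leaves an auditable trace.
import time
import uuid

TRACE = []  # toy stand-in for a tracing backend

def traced_step(name, fn, *args):
    """Run fn(*args) and record a span, whether it succeeds or fails."""
    span = {"id": uuid.uuid4().hex[:8], "name": name, "start": time.time()}
    try:
        result = fn(*args)
        span["status"] = "ok"
        return result
    except Exception as exc:
        span["status"] = f"error: {exc}"
        raise
    finally:
        span["end"] = time.time()
        TRACE.append(span)

# A toy "agent" that recursively decomposes a task into sub-steps.
def agent(task, depth=0):
    if depth >= 2:  # base case: a leaf tool call
        return traced_step(f"tool:{task}", lambda: f"done:{task}")
    return traced_step(
        f"plan:{task}",
        lambda: [agent(f"{task}.{i}", depth + 1) for i in range(2)],
    )

agent("deploy-check")
print([s["name"] for s in TRACE])  # leaves finish (and are appended) before parents
```

Even this toy version shows the shape of the problem: the call tree is only known after execution, so the feedback loop has to come from traces rather than from predictable, deterministic tests.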

Olivier Pomel
CEO, Datadog

But it's super early. So I'd expect there will be a lot more clarity about the space in a year, two years, three years. It's so early that, right now, the companies that are building agents are on the leading edge. And so we're all learning together.

Yuka Broderick
Head of Investor Relations, Datadog

Megan's side.

Arthi Lulla
Vice President of Equity Research, JPMorgan

Hey, this is Arthi Lulla from JPMorgan, here for Mark Murphy. Olivier, or anyone who wants to chime in: a couple of days ago, you guys talked about one of the largest AI foundational model companies adopting Datadog, consolidating open-source, in-house, and hyperscaler solutions. We spoke with another AI company that said that your platform was critical, and they couldn't replicate it if they wanted to. So can you just help us understand the journey that some of these really highly innovative companies are taking, where they come to the realization that they can't do it themselves, or they don't want to do it themselves? Is it the breadth of capabilities? Is it the fact that it takes more resources, even with developers, than they think it does? Is there, like, an aha moment for them? Thanks.

Olivier Pomel
CEO, Datadog

Yeah, I mean, it's been the story of the company since day one. These AI companies are not any different from the cloud natives we were serving initially, or from the larger enterprises we started serving after that. They all have some mix of homegrown tools and various tools they bought in the past. It never works quite well enough. It's always a time suck. It always becomes a big issue at some point, because keeping your systems up, right, and safe, and keeping shipping software, is an absolutely business-critical need. You absolutely have to have it nailed down, and it breaks, and then it causes some questioning of what you're using.

Folks realize that they have other problems to solve to be competitive than to reinvent something they can buy, and they typically just buy what we do. So the question is not: if the biggest companies in the world tried to do that as their sole focus, could they do it? Maybe, maybe not. But the point is, they're doing other things. They have to do other things, and there's no point in them building their own monitoring, their own observability, and their own autonomy now.

Yuka Broderick
Head of Investor Relations, Datadog

Great. All right, Eric's side.

Kingsley Crane
Managing Director of Equity Research, Canaccord

Hey, Kingsley Crane at Canaccord. Thanks for doing this. So you've used Datadog to help observe and build Datadog in the past. How do you think about observing your own agents? And in solving that recursive challenge, does it help you build a better agents console and be the best product on the market to help customers observe swarms of agents?

Olivier Pomel
CEO, Datadog

Yes, but we have to be careful, because when a field is brand new, we're not like every other company. The one mistake you can make is to mistake yourself for the customer, and the way we learn is by speaking to as many customers as we can. Then we dogfood the product. It needs to make sense to us, too, and it needs to be amazing for our use cases, but just because it works well internally does not mean it's going to be the right product for customers.

Yuka Broderick
Head of Investor Relations, Datadog

Great. Okay, this will be our last question on Megan's side.

Howard Ma
Director and Equity Research Analyst, Guggenheim

Hi, Howard Ma with Guggenheim Securities. I wanted to ask about the perceived threat of OpenTelemetry and other forms of open-source observability tools, or, I guess, OpenTelemetry being more of a standardized protocol. What is Datadog's competitive moat while embracing these open-source standards? And specifically on the back end, I'm curious how defensible having 1,000+ integrations is, along with the ability to correlate lots of different data sources in the way that Datadog does it, that's different from others. And from a coverage standpoint, you had a slide that showed monitoring virtualized environments on one end, so going more on-prem in one direction, and then GPU monitoring in another direction.

I mean, is the right way to understand it really that you want to address highly customized enterprise needs out of the box, and that is the true moat? Thank you.

Olivier Pomel
CEO, Datadog

The collection has never been the moat, right? When we started the company with Alexis, we decided everything that's on the server side, the SaaS, is going to be the smart part. And what lives in the customer environment, like the agent and everything else, the collection and the integrations, is going to be open source. And our agent and everything that came with it is open source. It's actually very permissively licensed. It's Apache. Is it still Apache?

Michael Whetten
SVP of Product, Datadog

Yeah, I think so.

Olivier Pomel
CEO, Datadog

Yeah. Well, we'll have to check the license. But we didn't change it, at the very least. And by the way, early on, our competitors were using our agent and our integrations and everything else. Today, we're very happy to see OpenTelemetry come up. We are OpenTelemetry native. It's a great way to get more data into the system, make it work faster, reduce friction. I think it makes everybody happy. It's never been the differentiation. When you talk about having 1,000+ integrations, the question is not: Can you plug into the system and get data out? The question is: How well do you understand the data? How well can you use it?

How does it come together with the rest of what you have? Whether you're using OpenTelemetry or some of its predecessors, because there were a few different standards before that, that part is what is fairly unique to us, and what we do much better than anybody else.

Yrieix Garnier
Product Leader, Datadog

Maybe just to add on the OpenTelemetry side: it's not really a competition in some way, because we are big contributors to OpenTelemetry. If you look at Datadog, we're among the top contributors to OTel, and we now fully support OpenTelemetry, so any type of data can come in the same way, whether through OpenTelemetry or our own agent. That's really it for us, to Olivier's point: it's not how the data comes in, it's what we do with the data internally that is more important.

Yuka Broderick
Head of Investor Relations, Datadog

Great. Okay, great. Well, this ends the first half of our session. So we are going to take a 20-minute break. That means you'll be back here at 3:30 P.M. to start the second half. Thank you.

Stephen Erickson
Director of Engineering, Washington Post

... I'm Stephen Erickson. I'm an engineering manager for the Washington Post. We have a very dynamic traffic cycle, so from one moment to the next, breaking news may come about, and when that breaking news does happen, we can get traffic spikes. On January 6th, during the Capitol riots, all of a sudden I saw a Datadog monitor that we had set up fire in one of our Slack channels, and it flagged that there was a huge spike in traffic. If it hadn't been for the Datadog monitor that fired and alerted us to the event, we would not have been able to sustain the load. Because of that alert, we were able to go and scale our systems and handle it appropriately.

Datadog has really provided a tremendous amount of visibility into our overall platform architecture, all of the back-end and front-end services that we have. Particularly for the DevOps culture, before Datadog, there was not a great understanding of or visibility into our overall system. Datadog brought that all together in one cohesive view. When an event does happen, because they're going to happen, I go to Datadog first, and I can get an overall view of the world, where I can look at all the systems in one unified location. I can see what my problem areas are. How has Datadog impacted my life? I'd say that it's made my life dramatically simpler.
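The kind of spike alert described in this testimonial can be approximated by firing when current traffic exceeds a multiple of a trailing baseline. The sketch below is purely illustrative, with a made-up window size and threshold; the class name `SpikeMonitor` is hypothetical, not a Datadog API.

```python
# Illustrative sketch of a traffic-spike monitor: alert when the current
# rate exceeds a multiple of a trailing baseline. Thresholds and window
# sizes here are invented for illustration.
from collections import deque

class SpikeMonitor:
    def __init__(self, window=5, ratio=3.0):
        self.window = deque(maxlen=window)  # trailing samples
        self.ratio = ratio                  # how big a jump counts as a spike

    def observe(self, requests_per_min):
        """Record one sample; return True if it should trigger an alert."""
        baseline = sum(self.window) / len(self.window) if self.window else None
        self.window.append(requests_per_min)
        # Fire only once a baseline exists and traffic jumps past ratio * baseline.
        return baseline is not None and requests_per_min > self.ratio * baseline

monitor = SpikeMonitor()
normal = [100, 110, 95, 105, 100]          # steady-state traffic
alerts = [monitor.observe(v) for v in normal + [400]]  # then a breaking-news spike
print(alerts)  # only the final spike trips the alert
```

A real monitor would add hysteresis, seasonality, and notification routing, but the core idea, compare now against a trailing baseline, is this simple.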

Speaker 37

Why did you need a Real User Monitoring solution?

Speaker 36

Wawa is a large convenience store and gas station chain that's primarily focused on the East Coast, but continuing to grow further west. It's increasingly important for us to understand customer engagement, understand rewards, and find different ways of enticing our customers to enjoy their experience better.

Speaker 37

Why did you choose Datadog Real User Monitoring?

Speaker 36

Datadog offers an integrated solution, so we can look at the problem holistically as to why we might be having issues. The Real User Monitoring solution helps us better understand what our customers are doing in our stores when they go through the ordering process to buy a sandwich or a drink or anything like that, and Real User Monitoring gives us those insights that we need to design our software better.

Speaker 37

Why does T-Connector need application security management?

Speaker 33

Well, because we manage financial resources from our clients, and it's important to us to secure this resource in our platform.

Speaker 37

Why did T-Connector choose Datadog ASM?

Speaker 33

We use ASM from Datadog to reduce critical issues in order to prepare the company for the annual pen test exercise.

Speaker 35

Our mission, just to simplify it, is to simplify healthcare, to make it easy, to make it transparent. TriZetto's applications are critical because they allow us the transparency to see what we're paying for care, what the services cost. One of the challenges that we had before using Datadog is that we had no visibility into issues. Clients would call us and say, "Hey, such and such is slow, such and such is down." After we implemented Datadog, it was like the world of data was our oyster. We had real-time data and telemetry at our fingertips. We could look across the environment and know what was going on for various systems, where prior we had to swivel between different web pages and monitors. It just improved our mean time to resolve issues.

We are able to more closely partner with our clients, and we are currently actively working on operational dashboards for our clients, where they can see the environment real time. They can see how their systems are performing in real time. They can look at the response times of our systems. They can look at the uptime of our systems, the availability. I have budget responsibilities, and one of the first things that Datadog enabled me to do was to reduce my cost. I was able to consolidate down from 4 different tools for monitoring to one. I always joke with my account rep that before Datadog, I was blind, but now I can see. You know, there is some truth in me joking around in that I didn't know what I didn't know, and now that I know, I can address, and I can fix issues proactively.

If you're looking for a way to get better, save money, consolidate tools, especially around observability monitoring, and get a full feature set, you know, Datadog, I would definitely recommend.

Speaker 31

Mercado Libre hosts the largest online commerce and payments ecosystem in Latin America. We operate in 18 countries, and our mission is to democratize commerce and financial services to transform the lives of millions. Our e-commerce business unit has over 73 million buyers and 90 million sellers, with 365 visits and 36 purchases per second. Our financial services unit averages 173 transactions per second, with over $10 million granted in loans.

Our applications primarily run on AWS and GCP. Currently, we have over 100,000 cloud instances running, more than 900 million requests per minute, 24,000 microservices, and more than 10,000 daily deployments. As our company grows fast, Mercado Libre's infrastructure becomes more complex. We have transitioned to microservices, which makes building apps easier but monitoring more difficult, since there are so many different components. We needed a tool that would centralize the data across our tech stack so we could make decisions and troubleshoot faster.

We selected Datadog as our Infrastructure Monitoring solution due to its extensive range of integrations and strong reputation in the market. We valued Datadog's comprehensive AWS and GCP monitoring capabilities, making it the ideal choice for our needs. Thousands of dollars can be lost if some of our critical applications stop working for even one minute. Datadog helps us to act fast. An alert is triggered once a problem is detected, and the right team is able to fix it. The alerts often include troubleshooting information, which accelerates resolution.

We recently had an issue with a service due to a problem with a database. Thanks to thresholds that we had created in Datadog, we were alerted before the issue became critical, and we were able to act quickly and make the necessary changes. Having our metrics in Datadog not only helps us react quickly to problems; the metrics are also powerful allies in strategic and proactive decision-making. For example, we relied on these metrics to tell us if our usage of a cloud service was increasing too much, or that we were not reaching our goal of using some type of reserved or spot instance in a given region. Mercado Libre is continuously looking to improve our application costs and performance. Datadog is a big help with this initiative, and we are working with Datadog to find more opportunities to be more efficient and available.

Speaker 29

I'm an engineering manager of a few different teams, and we aim to provide a platform for all of our users inside the Autodesk Construction Solutions division. Generally, we look to make sure that that platform gives them all the observability they need to be able to own their own infrastructure wherever possible. So we chose to use the Database Monitoring product in addition to the other Datadog tools we were using. From my perspective as an infrastructure engineer, it allows me to offer more insights to our users with less direct access to the databases and less expertise required, since explain plans are turned on by default. I'm really enjoying the explain and query visualization.

It's helpful to have that stand out so that you can just quickly pinpoint where you might want to investigate more, and then it's super helpful to have not only that visual representation, but also some of the suggestions on how you might solve some of these performance issues that it's bubbling up to you in that view. A variety of people on my team use DBM, and generally, they kind of fit into two different personas. We have product engineers that are actually building the tools, and so they go in and look specifically at their own databases to help do performance or query tuning.

And then folks on my team or folks that have a responsibility that crosses many different products, they will use it more as a fleet management tool and understand, "Oh, these specific queries are kind of the top offenders of, like, being slow or underperformant." We'll go in and use that to proactively reach out to those product teams to help them understand how better to improve performance. Datadog allows us to do that by continually developing products that make it easier to maintain the systems that we instrument with them, but it allows my teams to take something we find very important, which is observability, and have it be a default offering of the platform that we make.

Speaker 32

Porsche Informatik is part of Porsche Holding, which is Europe's largest automotive retail company. Our solutions drive the efficiency of retail. We provide solutions like online service booking, car configurator services, spare-part management, financial services applications, and many, many more.

The transformation of the automotive business means we continuously have to change. We need to be ready for the next step all the time. An absolute key success factor of providing a robust and resilient IT landscape is to have full transparency across all parts of this complex landscape.

Application availability is so important. Think about a case where tens of thousands of people at thousands of dealership locations are there at 7:30 A.M. trying to book their car for maintenance, and one of the applications needed to perform the job is down. These are the moments where really every second matters.

Eventually, we selected Datadog because it showed that it allows us to monitor all elements, from on-prem network elements to cloud-native services, in a consistent way. With the implementation of Datadog, our internal rollout team has, from my point of view, become a major driver in breaking down silos and a significant step toward a complete DevOps culture at Porsche Informatik. With Datadog, we've been able to give our teams full access to performance data from all involved elements in a single dashboard. Now, with that holistic and complete picture, they are able to react faster to the needs of our dealerships in the field. In the end, that results in a better customer experience overall.

Datadog has really accelerated our troubleshooting capabilities, and it improved our mean time to respond by an order of magnitude.

An unexpected benefit from introducing Datadog as a platform was its usage for our overall logging strategy. We could get rid of other logging targets, integrate everything directly into Datadog, and thus have an integrated and consistent analysis of all of this data.

...As I look into the future, Porsche Informatik will continue to lead the change in decarbonization and digital transformation. Datadog provides us with intuitive end-to-end monitoring that's easy for teams to adopt across the organization. Beyond that, we look for strategic partners investing in advancing the platform to support our evolving needs, and Datadog, as a partner, has proven to be exactly that, aligning seamlessly with our vision and our goals.

Speaker 30

Flight Centre Corporate is all about creating amazing travel experiences for corporate travelers, helping organizations get their people from A to B in a cost-effective, efficient manner.

The scale of our organization, with 15,000 bookings processed a day at 60% automation, means that even 30 minutes of downtime would have massive impacts.

Datadog as a solution was good for us because we were looking for something to fill gaps in our technology. Part of that was that we really struggled to get our previous solutions into the hands of our engineers. We had long lead times from finding that things were wrong to being able to fix them, and our tools weren't necessarily allowing us to find things in one place. With Datadog, we saw immediate improvements: people were able to see parts of the tech stack they weren't necessarily aware of, and to find different analyses and different insights. And that gave us a lot of benefits right from the word go.

Datadog came along at a great time where we were actually experiencing some general system challenges related to growth and adoption of the product. Since we've been able to actually put Datadog through that product, we've seen a marked improvement in its overall performance, stability, uptime, and we've actually been able to provide a better level of service to our customers.

Our APM platform has really allowed us to identify quickly and easily when we're seeing issues that we wouldn't have been able to identify previously. Recently, we saw approximately a 13% reduction in throughput. That helped us really quickly identify that we had an issue happening. Once we rectified the issue, we saw an even better response, with a 10% increase in throughput.

Cloud Costs was a great module that we were able to just enable, leveraging the existing integration that we'd already carried out. It's shone a light on wastage, and it's also shown us license optimizations that we're able to go and have a conversation with our vendors about and improve on. It's been a fantastic product.

It's really enabled our teams to get real end-to-end visibility. So from the front end, using RUM, our team can really identify if a user is having frustration with the service, right the way through to our back end and database monitoring. It really allows us to identify issues, quickly, resolve them, and create a great customer experience.

Speaker 34

For 150 years, Toshiba has been involved in energy businesses such as power generation; infrastructure businesses such as water treatment, buildings, and railways; and device businesses for information infrastructure. Toshiba Managed Service Albacore provides managed services that support energy and social-infrastructure information systems around the clock. We also support cybersecurity for society's critical infrastructure, covering not only IT infrastructure but also control systems, so-called OT. Including on-premises maintenance, we operate tens of thousands of physical servers, roughly 4,000 servers in the cloud, and 120 tenants.

Today, our operations service supports eight cloud services, including AWS and Azure, as well as virtualized environments. But when we first launched the service, the challenge was how to provide the same level of operations service across multiple different environments. Initially, we monitored each system using the monitoring tools provided by each cloud service, but that meant not only a learning cost for each tool, but also that the procedures for setting alerts and checking system status inevitably diverged, so it was difficult to deliver service at a consistent level. So we adopted Datadog, unified our monitoring and status-check procedures, and became able to provide service at the same level everywhere. Our operations service defines an SLO of at most 120 minutes for failure recovery time, but so that we can hit that target even when multiple alerts fire at once, we set an internal failure-recovery target of 30 minutes. To achieve that, it's effective to reduce the number of alerts that occur, which requires coordinating with customers and with the system developers to add resources or adjust monitoring thresholds. With Datadog, we can visualize system status and monitor-alert activity in dashboards and the Event Explorer, and on that basis we can coordinate smoothly with customers. As a result, the number of alerts has been reduced by 16%. ...

The rate at which we recover from failures within 30 minutes was around 87% a year ago, and we now have cases where it has reached 100%.

Observability is a technology that raises our ability to observe every layer of an information system. The initiative we are working on now is a project to promote the use of observability. We are repositioning SRE, moving away from an infrastructure-centric style toward an application-centric SRE style. Through the services the Toshiba Group provides and through our SRE activities, we will continue to contribute to building a safe, secure, and comfortable society.

Yuka Broderick
Head of Investor Relations, Datadog

All right. Welcome back, everyone. Let's kick off the second half of our investor day. I'm pleased to welcome Sean Walters, our Chief Revenue Officer, onto the stage.

Sean Walters
CRO, Datadog

Thank you, Yuka. All right. Good afternoon. My name is Sean Walters, and I've been with Datadog for seven years. I lead our global sales team. I'd like to start by describing our go-to-market motion, how we're organized, and how we've expanded our capabilities over time. Our go-to-market motion has three main parts. The highest volume part of our go-to-market, in terms of net new logos, is our self-serve market. A lot of other software vendors only focus on enterprise use cases, so for them, there's only a finite number of accounts to go after. Whereas for us, even though we have well over 30,000 customers today, we believe that there are a lot more new logos to go get. We will see self-service customers who start trials with us, visit one of our demo booths at a conference, or otherwise indicate their interest in Datadog.

If they've contacted us in any of these ways, it's a fairly warm lead, so we're gonna check in with them quickly with our commercial team. The commercial team is a heavy outbound motion. They work on those warm leads from self-serve customers, but they're also doing a lot of work on prospects, proactively reaching out to see if Datadog can provide value. And commercial is a velocity logo engine, and often lands logos with a small amount of dollar value. That's just fine, because we are a land-and-expand model, and we have the opportunity to grow with these customers over time. While the focus for commercial is landing new logos that are smaller, some of these companies achieve great business success and become very large customers with big cloud footprints and a lot of Datadog usage.

In fact, 24% of our top 25 customers are commercial customers. 50% of our $1 million+ customers are commercial, and 72% of our $100,000+ customers came from the commercial business. So that's really the power of the land-and-expand motion at work. The third leg of our go-to-market strategy is the enterprise team. Enterprise account executives will stay with the customer throughout the relationship, working with all the teams across Datadog to support and achieve success with that customer. We deal with enterprise customers with increasing sophistication, and we've built our capabilities over the years, so I'd like to dig into some of that. First of all, there's our strategic enterprise team. This is our typical enterprise sales motion, and it's a full lifecycle motion. We're doing outbound pipeline generation. There's lead gen through corporate marketing, as well as account-based marketing, events, referrals, and many other efforts.

If we get our foot in the door, we apply a custom selling methodology that guides sellers through qualification, technical selling, trial and evaluation, champion building, and all the things we need to do to build the confidence in the value that Datadog brings. Once we land that customer, the enterprise rep will continue to develop that relationship alongside other teams, including customer success, post-sales support, and service. Then, there's our majors teams. These are our largest existing customers. Our reps here are focused on increasing usage for our current use cases, but also getting more strategic with the customer by going wider across departments, personas, and use cases, and solving multiple technical and business problems with our platform. Finally, there's our key accounts teams.

This is a more recent bet that we've been making to enlarge new customers that we haven't yet engaged with. These customers may have a longer sales cycle, and it may be more of a top-down motion than we traditionally go after. So we're focusing these reps on a longer journey. In the first year, these reps may have objectives like number of meetings that they schedule, but in the second year, their goals are shifted much more to revenue and closing deals. Our continued innovation is delivering capabilities that these sophisticated customers demand, like enterprise-class governance, access control, and data security. And since they operate at very large scale with very diversified environments, our recent innovations like Flex Logs, Frozen Logs, and Bring Your Own Cloud play a very important role.

This team made significant strides in 2025, including some faster-than-expected wins that are already expanding very rapidly. Our enterprise sellers go after the largest customers in the world, including companies in the Fortune 500. We are making great progress in penetrating this group, but we still have more than half the Fortune 500 to go, and we're working to engage them as they move to the cloud. And our median spend for these customers is still modest, at less than $500,000 per customer. So we think we have much more opportunity here. Compared to commercial, there are not as many new logos to land in enterprise, but the size of each customer can be orders of magnitude larger.

As we have more products to sell, and as we've grown in our ability to sell our full platform, we've seen the average size of our enterprise lands increase, and particularly so in this last year. If we do a good job developing the relationship and delivering more value over time, these customers ultimately expand from their land size. Here I'm showing the average enterprise revenue per customer against that average land size. Finally, I'd like to talk about our technical services, our technical support and services team. As a mission-critical partner, it's up to us to provide not only a platform that solves business needs, but also to help our customers establish best practices and execute on their learning curve with Datadog.

We've developed a variety of services to support our customers in their use of Datadog, including implementation services, to get started with Datadog best practices and a customized plan; technical enablement services, to offer training and knowledge building on the Datadog platform; premier support, an extra layer of support that is more individualized to the customer's needs; and technical account managers, who provide guidance to enable and accelerate Datadog adoption and support customers in their journey with Datadog. Okay, so that's what our go-to-market looks like, but we've been investing in going bigger and deeper over time, so let's talk about some of that. First of all, we've been bulking up our capabilities in channel and alliances. First, the hyperscalers. These are very important partners for us. We're seeing more and more synergies over time.

We're always working to improve our relationship, our co-selling motions, and our technical partnerships with them. Then there are the resellers and other partners. In areas like Latin America, Korea, Japan, and many other regions, working with resellers and other partners is critical to our success. They have customer relationships, technical capabilities, and service offerings that make working together a really productive thing to do. So we're seeing a lot of success with these folks. And then finally, the system integrators. We're not a services-heavy product, which is good, because customers don't want to spend a lot of money on ongoing professional services. So with SIs, for us, it's more about aligning to strategic initiatives. We've signed some really exciting partnerships with SIs, and we're building out those programs.

With hard work over the year, we've seen meaningful growth in channel- and alliance-influenced business, but we have a lot more opportunity here. Another area of growing investment is security. We started selling security in our typical bottoms-up sales motion. This works well with our observability users, who can be champions for our security products, or who are in the DevSecOps organization and part of the purchasing decision. We also started from the bottom up because we were working on product readiness. The best salesperson for our products is the product itself. That's why we invest so much in product, because when the product is ready and delivers value, it makes the selling motion efficient and effective. These days, the product is broadly ready across our security platform, and particularly in areas like SIEM.

A couple of years ago, we started adding to our security go-to-market motion, starting with a small number of sales engineers. These are our technically skilled staff who demo our products to customers. By specializing, they developed experience in showcasing our security products to security personas. About a year ago, we began hiring folks in channel and alliances to activate and build partnerships with our security channel. We're on our way there, and we're learning how to create win-win partnerships with these partners. Recently, we started a pilot of security-focused sales teams. We wanna build on our security successes so far. With the product ready to go and competitors in this space consolidating, we are excited about our opportunities. Here's an example of an expansion with security.

This is one of the biggest cruise lines in the world. Over the years, they've grown to be a multi-million-dollar customer with us. They chose Datadog as a strategic security vendor, as they believe that the unified observability and security approach is essential to maintaining operational efficiency and minimizing downtime. Over the years, they adopted nearly every product in the Datadog security stack. Recently, they told us that their on-premise SIEM environment was causing problems. It was hard to manage and scale, and security investigations were taking way too long. Last summer, they chose us to replace this on-premise SIEM with our Cloud SIEM to get faster detection and investigation at a lower cost. Today, about a quarter of our business with this customer comes from security, and we have more security opportunity with them. This is a great example of our potential with security.

All right, so that's security. I also wanna talk about investments we're making to expand our presence geographically. A few years ago, our sales teams were relatively concentrated in places like the U.S., Ireland, Singapore, and Japan. But as we've grown, it's become important for us to place sales teams in more places so we can literally meet customers where they are. So in places like Brazil, Mexico, India, Australia, the Middle East, and many other countries, we're putting people, channel and alliances partnerships, local-language marketing, and other investments in place so that we can capitalize on these opportunities. I'd like to give you an example of how these investments are delivering for customers regionally. Our Latin America go-to-market teams are pursuing opportunities with grit and a team mentality. They're persistent and patient, and they are delivering.

We're winning some of the largest companies in that region, from banking to e-commerce, to retail, to telecoms. Today, our business in this region is 5% of our revenue, but it's growing far faster than our overall revenue, and the pipeline of opportunities in this region is great. And finally, of course, AI. As you heard in the first half, our product teams are delivering AI innovations across our platform, and our customers are focused on their AI efforts. It literally comes up in every single meeting we have. As you heard from Yanbing, Bits AI SRE is ready and is delivering tangible value for customers now. We just had our sales kickoff, and we are going all in on AI. Every seller has received training and is ready to have that conversation.

As we move forward in time, the preview products will go GA, and our sellers will have an even broader suite of AI capabilities to sell. So those are just a few of our investment areas. We keep experimenting and expanding our go-to-market capabilities to match the innovation of our product teams. I'm extremely proud of the sales team we've built over the years, and here's the evidence that our hard work is yielding results. This shows our bookings by year, including a stellar 2025. I've never been more excited about our opportunities and our potential to execute, and we are just getting started. Thank you for your time. I'll hand it off to Adam Blitzer now to talk about how we deliver customer value.

Adam Blitzer
COO, Datadog

All right. Thanks a lot, Sean. Good afternoon, everybody. This is the part of the presentation where we have back-to-back pocket squares, so we hope you enjoy it. My name is Adam Blitzer, and I'm the COO here at Datadog. I've been with the company for just about five years now. I'm involved really in all aspects of our go-to-market. I get to work with some of our largest customers every day, and also get to spend time on our largest, most strategic deals. I wanna focus our time right now on why our customers choose Datadog, how we solve their problems and deliver value, and why and how they continue to grow with us. So let's go ahead and jump in. You know, this is the observability market. It is dynamic, it is large, it is fast-growing, and there are many options in this market.

It is a competitive market, as we talked about earlier, and vendors stake out different spots within the market in terms of their pricing. There are premium products, there are sort of middle-of-the-road products, there are commodity and siloed products, and there are even open source products. And as you know, Datadog is a premium product. So given that there has always been a rotating cast of low-cost and siloed commodity products, why do customers overwhelmingly continue to choose Datadog and continue to grow with us once they've made that decision? Why do we keep gaining market share? Well, we've always, always focused on delivering value to our customers as our North Star, and a key way that we do that is through our unified platform, a single pane of glass for observability and security. Now, platform has always been in our DNA.

You saw this slide earlier, but I just wanna highlight, we started with a rock-solid foundation, a unified platform. In fact, we started work on the integrated platform before we ever launched our first product, which happened to be Infrastructure Monitoring. We made an early bet on DevOps, and the idea was that breaking down silos between teams would empower them to solve problems more efficiently than ever before. From there, we really let our customers guide us and really have a customer-driven roadmap, and we look for places where we can break down silos between teams, between datasets, and continue to deliver mission-critical observability and security applications for our customers. But let's take a look at how that plays out in action. This is an example of the single pane of glass.

This is one of our customers, one of the largest technology companies in the world, with tens of thousands of software engineers. Now, prior to Datadog, they were using a whole host of point solutions for observability. You can see that standardizing on Datadog has allowed them to see all of their telemetry data in one place. So no more swiveling between screens and applications, no more painful correlations between sets of data. In a single year, they saved thousands of hours for the SRE teams and engineers that respond to incidents. But what's even more interesting is they saved over 100,000 hours of time across all of their other software engineers, who normally would have downtime during an incident.

So instead of twiddling their thumbs, waiting for something to be resolved, they can focus on innovation and delivering new products. That's an incredible return on investment and really is the core value proposition of the unified platform. So while the original benefit of the platform was all about productivity and speed for our users and customers, one other really interesting trend emerged over the past few years: buyers really sought to consolidate on platforms. Instead of using, you know, a different tool for each possible job, they saw that they could gain immense buying power by consolidating onto a best-of-suite, onto a true platform. Here's an example of that playing out with a European home improvement retailer. They were struggling with incomplete visibility, alert fatigue, long incident resolution times, and high operational costs.

By standardizing on Datadog, this customer estimated that they saved over $10 million. Now, some of that was in direct cost, software costs, some of that was in engineering time, some of that was in customer experience. But we're seeing this trend more and more. Customers that consolidate on Datadog save in many different ways. So the unified platform has led to productivity gains, it's led to direct cost savings, but we think this becomes even more valuable in the age of AI. Customers who wanna make use of agents for observability, for security, for software engineering, find it much easier to do so when they have their telemetry data and their security data in one place.

It doesn't matter if the agents they're using are Datadog's own agents, as you saw earlier, their own custom-built agents that they want to run on top of observability, or third-party agents from other companies. Now, as our customers have grown with us over time, we have found many, many ways to deliver them more value and more purchasing power. So let's take a look at a few of those. This slide really shows how our economic model works. It's very, very similar to the cloud providers', which essentially all of our customers are familiar with. We have volume-based pricing, but as our customers grow with us, we bend the curve of cost for them. So they continue to get more and more value, and more and more leverage for that value, as their volumes grow.

We give them discounts for the amount of commitment they make to us. We give them discounts for using multiple products, for combinations of products. We give them discounts for term length. So the more a customer grows with us, the more we're going to bend that curve over time. We also work with them to optimize their usage of Datadog, of course. We want them to use us in the best way possible. Since we're a usage-based product, right, we're only generating revenue when our customers use us, but we want that usage to be as valuable as possible, and if we find any optimizations to make with our customers, we'll work with them on that, and we find that they then invest that back into other areas of usage.
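To make the "bend the curve" idea concrete, here is a minimal sketch of marginal volume tiers, the same basic mechanism the cloud providers use. The tier boundaries and discount rates below are purely hypothetical illustrations, not Datadog's actual pricing.

```python
# Illustrative only: hypothetical tiers and discounts, NOT actual pricing.
def effective_unit_price(volume, list_price=1.00):
    """Blended unit price when units above each tier boundary get a deeper discount."""
    # (tier upper bound in units, discount off list price) -- assumed numbers
    tiers = [(1_000, 0.00), (10_000, 0.10), (100_000, 0.25), (float("inf"), 0.40)]
    total, prev_bound, remaining = 0.0, 0, volume
    for bound, discount in tiers:
        units = min(remaining, bound - prev_bound)
        total += units * list_price * (1 - discount)
        remaining -= units
        prev_bound = bound
        if remaining <= 0:
            break
    return total / volume

# As volume grows, the blended unit price falls -- the cost curve "bends":
for v in (1_000, 10_000, 100_000, 1_000_000):
    print(v, round(effective_unit_price(v), 3))
```

The point of the sketch is only the shape: total spend keeps rising with usage, but the effective unit price declines monotonically, so larger customers get more leverage per unit of value.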

But they can also use Datadog to optimize what they're spending with other software packages or with the cloud providers themselves, and we'll get into that in a bit. And then the last piece is we're constantly delivering innovations. So maybe the way we store data, different products that we launch that have novel pricing and packaging. You heard about some of that from Yrieix earlier today, and we'll get into a couple of examples. But the key thing to remember is that we are along for the ride with our customers' growth, so we scale with our customers. And as long as a customer is growing and their technology footprint is growing, their observability spend is also going to grow.

Now, we wanna, again, bend that curve for them over time so they get more and more leverage and more and more buying power, but we're along for the ride with our customers' growth. And sometimes that growth can be quite exceptional. This is an example of one of our AI-native customers. They have experienced tremendous growth in a very short amount of time. You can see that their footprint with us has gone from quite a few products to quite a few more products, and you can see the really rapid ascent of the ARR that we're earning from this customer. And you see this pattern play out: as customers grow, their business grows, their revenue grows, and their Datadog usage is also going to grow alongside it. Customers also scale as they adopt more products.

This is an example of one of the largest online sports betting companies in the world. Their entire business is built on the cloud, and it depends on real-time, low-latency performance. So they find tremendous value in using our full platform, and you can see their adoption journey from one product all the way to 21 products. And each time they adopt a new product from us, it isn't the same as using multiple products in isolation, it's really a force multiplier. So when they adopt a new product, it increases the value of all of the products they've already adopted prior to that. Not only are they saving money by consolidating spend with fewer vendors, but they're also unlocking additional capability and value. Now, as I mentioned earlier, we also deliver cost savings and efficiencies to our customers as we innovate.

Yrieix mentioned this one specifically earlier today, but this is an example from one of our tech customers. They've been a longtime Logs user, and they adopted Flex Logs for many of their newer logging use cases. With Flex Logs in particular, they found that it sharply drove down the unit cost of logs and allowed them to scale their log usage 75-fold. We delivered more business value, but we did it in an economically efficient and business-sensible way for the customer. We also launch new products that lead to very direct savings on Datadog, on cloud costs, or on other products that a customer may be using. Examples of these could be Cloud Cost Management, Kubernetes Autoscaling, et cetera.

Now, here's an example from a major software company that turned on our Cloud Cost Management product and immediately found significant savings. They were really just looking across six of their environments, of which they have many, and immediately found $1 million in savings. And this is just an example of the new products we deliver to customers to help them with their spend in general. Now, finally, I wanna come back to the topic of us investing in innovation. We spoke about that a lot in the morning session, but we think of innovation as really the secret sauce of the company, and it's even more critical to us in the era of AI. Our space is incredibly dynamic and complex, and we have seen rapid technology shifts throughout the life cycle of Datadog.

So first with the era of cloud, and now in the era of AI. And our ability to out-innovate our market and guide our customers through change is a key differentiator for us. We take 30% of our revenues and invest it back into R&D, so that was more than $1 billion last year. We've always invested more on a percentage basis than anyone in our market. But not only that, you can see that this high percentage of investment, coupled with our rapid revenue growth, means that that compounds over time, and it leads to this very significant advantage in R&D spend. We're now at about 3x the R&D spend of our next closest peer. And we think that that really makes us future-proof, and that's what customers want to bet on in this fast-changing era of AI.
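The compounding argument above can be sketched numerically. All figures below are assumed for illustration only (they are not Datadog's or any competitor's actual revenue, growth, or R&D numbers): a company that both grows faster and reinvests a higher share of revenue in R&D opens an ever-wider absolute R&D gap over a competitor.

```python
# Hypothetical illustration of compounding R&D advantage (assumed numbers).
def rd_budget(revenue, growth, rd_pct, years):
    """Annual R&D budgets for a company growing `growth` per year and
    investing `rd_pct` of revenue in R&D."""
    budgets = []
    for _ in range(years):
        budgets.append(revenue * rd_pct)
        revenue *= 1 + growth
    return budgets

leader = rd_budget(revenue=1000, growth=0.25, rd_pct=0.30, years=6)  # $M, assumed
peer = rd_budget(revenue=1000, growth=0.10, rd_pct=0.15, years=6)    # $M, assumed

for yr, (a, b) in enumerate(zip(leader, peer)):
    print(f"year {yr}: leader {a:,.0f} vs peer {b:,.0f} -> ratio {a / b:.1f}x")
```

Even starting from the same revenue base, the ratio between the two R&D budgets widens every year, which is the "future-proofing" effect the speaker describes.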

You can see that our platform approach, coupled with a relentless focus on R&D investment, has allowed us to deliver new products at a very fast pace. So if you look at this chart, it's interesting because our pace of innovation hasn't slowed down as we've grown. It's actually sped up. So how is that possible, right? Well, it comes back to everything being built on one platform. I have sold a lot of software in my day, and I've shown a lot of slides that said "platform" with arrows back and forth, but in Datadog's case, this is truly one platform that we're building on top of. So it means every time we launch a new product, we're not starting from scratch, right?

That new product is built on the foundation of all of the products that have come before it, and it, in turn, adds back into the platform. So it creates this amazing virtuous cycle, and delivers constant value to our customers. I want to talk a little bit about the cohort of AI customers that has really standardized on Datadog. So you can see 14 of the top 20 AI-native companies are using Datadog. We have many of them that are spending more than $1 million a year with us, and the cohort itself is quite large, with more than 650 of them using Datadog. We've always been trusted by the market at large, but specifically by the most tech-forward companies, right? That's where we've really made our bones.

As you know, cloud-native companies have overwhelmingly standardized on Datadog over the past decade. They pushed us from a technology perspective. They were using the newest tech, they had the newest types of architecture, and that all benefited our customer base at large. But what we're seeing now with the AI cohort is it's exactly the same thing, right? They trust us to meet them where they are and innovate rapidly as their space continues to evolve. And this innovation, again, will ultimately benefit all of our customers as they continue to meaningfully adopt AI themselves. All right, so that wraps it up for me, but with that, I'd love to turn things over to our CFO, David Obstler.

David Obstler
CFO, Datadog

Thank you, Adam. For those of you who don't know me, I'm David Obstler, and I've had the great pleasure of being the Datadog CFO for the last seven years. I'm looking out into this group, and I'm very gratified. I see so many familiar faces in the investment and analyst community, and I wanna thank you for your support of Datadog since our IPO and your continued support. There's one other group I wanna thank. You've seen Yuka on stage. Yuka is the tip of the spear on our crack IR design and production group, and they work very hard, and I wanna thank them. Without them, we would not have had this day come off. Thank you, team. Thanks a lot. Thank God we only do this every two years. It's a lot of work.

So, let me start now. What we've heard today has been Datadog and its platform strategy in solving our clients' increasingly complex problems, and I hope you've come to understand that that is only being enhanced by the advent of AI. You also heard from Sean and from Adam that we are broadening our go-to-market, making some investments there, all to create a broader and more unified platform that solves our clients' problems and provides more value. And to use the word that we started on, all of this to try to increase the autonomy in the analysis, detection, and remediation of our clients' problems.

And so sort of that's what you've heard, but I wanna dive in a little deeper into the DNA of Datadog and what makes us grow, what are our opportunities that we see in front of us, and how do we turn that into a profitable, vital organization? So let me hit that. How do we grow? One of the most important slides that we've shown you from time to time is this slide. This shows our land and expand model at the center of Datadog, and you can see it goes back 10+ years. And what it shows is not only do we continue to land new customers, but we grow with them over a long period of time. The first cohort here says 2015 and prior, and you see the growth of our relationship with those customers over time.

It's that which creates the engine that makes Datadog. Now, within that, it is a combination of our business that we have from the previous year, plus the new clients we land, and I'll discuss that, and the growth of our existing customers, which makes up the majority of our ARR growth in any one annual period. Diving into that growth, and we've discussed this before, about a quarter of that ARR growth each year, plus or minus, it varies, comes from adding new customers, which is the engine for the future.

Then the remaining approximately three quarters comes from our existing clients: they adopt more of the products they've already used in Datadog, their businesses grow so they use more of the Datadog products, and there's cross-sell, which depends on the platform investment that you've heard about in the previous presentations. And what that has produced is a rapid expansion: the customers we land growing 18% over the years, and the average revenue per customer growing 17%. Together, that produces a compound annual growth rate over this time period of 42%. And it's that land-and-expand model, and the investment in the platform that sustains it, that's been driving Datadog. Now, what are the types of growth opportunities that make this flywheel continue to churn?
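The land-and-expand arithmetic described above can be sketched with a toy model, using assumed numbers rather than Datadog's reported figures: each year's ARR is the prior base times a net expansion rate, plus ARR from newly landed customers.

```python
# Toy land-and-expand model with assumed inputs (not reported figures).
def project_arr(base_arr, net_expansion, new_land_arr, years):
    """ARR path: existing base expands by `net_expansion` each year,
    and `new_land_arr` of new-customer ARR is added on top."""
    path = [base_arr]
    for _ in range(years):
        path.append(path[-1] * net_expansion + new_land_arr)
    return path

# Assumed: $100M base, 15% net expansion, $10M of new lands per year.
path = project_arr(base_arr=100.0, net_expansion=1.15, new_land_arr=10.0, years=5)
growth = path[-1] / path[-2] - 1
new_share = 10.0 / (path[-1] - path[-2])  # share of this year's ARR growth from new lands

print([round(p, 1) for p in path])
print(round(growth, 3), round(new_share, 2))
```

With these assumed inputs, new lands end up contributing roughly a quarter of each year's ARR growth and expansion roughly three quarters, which mirrors the split the speaker describes.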

So I want to go over some of those. You've seen in Ollie's presentation that, at the very base, there's a very, very long secular tailwind for digital migration, digital transformation, and cloud migration. And this is a slide that we've shown you, which is upward sloping. It's the investment in cloud spend. And that, as we've talked about many times in our investor meetings, is the bedrock: there is a very long-term trend, and it's got a lot of legs to it. Then on top of that, we believe there'll be an accelerant from the adoption of AI, which we spent a lot of time on in today's presentation, both Datadog for AI and AI for Datadog, and which is driving up the speed of the creation of software and the cloud transformation.

We're seeing this in a number of ways. You've seen this slide. This is the slide about the spending on AI, which everybody reads about, you know, all the time in the press. And how is that working for us? Well, we are seeing signs that it's translating into greater and broader platform adoption. In this case, this is our observability of LLMs, which, through our LLM Observability product, is beginning to take off. We have many customers using it, and just in the last year, usage has increased 10x. That's the spans sent to our LLM Observability product. We also are dramatically increasing our integrations, and another piece of evidence that AI is starting to spread throughout the customer base is the number of customers using those integrations, which we just talked about in our public release.

5,500 of our customers are using AI integrations, and this has been upward sloping. And guess who those are? Those are some of our larger and more sophisticated customers, because that group accounts for just under 80% of our ARR. So our larger, more entrenched, more progressive customers are using our AI integrations. In addition, we've been able to service many more clients. This is a statistic that we've given out, and we showed earlier, that we're supporting customers that are AI native and becoming the platform of choice for them, and that number has grown substantially; at the end of last year, it was about 11% of revenues. So a lot of trends in cloud migration enhanced by AI.

Now, when you get down to more specifically how Datadog is growing against that very positive secular backdrop, I want to speak about growing and retaining customers. Even though we are a leader, we have a lot of white space to go. What we've done here is curate down from the millions of customers that the cloud providers have. We then have taken other buying signals, like how much they're spending, how many hands they have on keyboards, and what their projects are, to curate down to just under 500,000 global customers that we believe are target customers for Datadog. And of course, we've been growing our customer base quite rapidly, to over 32,000, but that's a penetration in logos of only 7%. There's a long way for Datadog to go.

Now, what we've been doing is accumulating those customers and growing with those customers. These are metrics that we've been giving out since we became public, and this looks at our total customers, then our customers that are over $100,000, and you see we have over 4,000. Once a year, we give out our million-dollar customers, which this year grew to over 600 because of the growth you've heard about from the other speakers. But here's a new one for you. This is brand new, hot off the press: debuting our customers that spend over $10 million with us. Here is the time series, and you can see it's been growing quite rapidly, over 60% growth, and that's now 34 customers.

So this illustrates how we're landing, and you've heard from a lot of the other speakers about how we're expanding our value with our customers. Now, who are these customers? First of all, they are diversified in a geographical sense. Yes, we started in North America, but you've heard from Sean, Adam, and others how we've been making major investments around the world and enhancing our investments with complementary types of relationships, channel providers, et cetera. So even though North America is our largest portion, our international business is growing quite rapidly.

But I find the other slide here maybe even more interesting, because when you think about observability in cloud, you might think about software companies, technology companies, and cloud natives, which is where we started, but we have a very diverse customer base, with major segments in those but also in media and entertainment, financial services, travel, consumer, et cetera. And this is indicative of what is happening in the digital economy. All companies are becoming digital and cloud native. And we, as you've heard from a number of the other speakers, address that very, very big, broad, and diversified market. Okay, now, within that, what's happening?

We have very good customer references and proof points in a wide variety of industries. Some of those are what you might expect in technology, internet, and software, but we also have eight of the top industrial manufacturing companies, eight of the top 10 logistics companies, 10 of the top entertainment companies, et cetera. Further proof points that our end market is very broad. All of this is coming together in value to maintain a very high gross retention percentage. It's 97%+ across the company. And not only that, it's 98%+ in enterprise, and in SMB and mid-market, it's 96%+. And what does that mean? It's indicative of the high value that our customers place on Datadog and the stickiness of the product.

Once you have your Datadog, it's tough to get rid of your Datadog, and you don't wanna get rid of your Datadog. So this has been fundamental to the economic model of Datadog and the value we bring to customers. Okay, now moving on to another growth driver: our expanding products and use cases, which you've heard a lot about. This is at the core of what makes Datadog. When we first went public, we had this metric of customers using 2+ products, and that has settled in the 80% range. And as the years have progressed, we've added more and more products here because of the adoption. And now you see we're out to 10+ products because of this flywheel and adoption. We have some numbers.

Half of our customers have 4+ products, and just 9% have 10+. I would expect, as we continue to penetrate and consolidate, you'll see these continue to go up. We have created a very, very strong business. We just announced on our last earnings call the penetration of the three pillars, or, as I think Yanbing referred to them, the four pillars, because digital experience has gotten so large that we are also calling it a fourth pillar within application monitoring. As you know, we've crossed $1.5-1.6 billion in infrastructure and $1 billion in logs and APM. Now, while that's really good, that shows a lot of platform momentum.

I wanna show you, just like I showed you with the penetration of the logos, how much opportunity there is for us in our core business. Only 53% of our customers use all three pillars. So roughly half of our customers still don't use the three pillars. And what we found is that when a customer standardizes on Datadog and uses all three pillars, they spend a lot more with Datadog. In fact, they spend 15x more. So half of our customers are not using the three pillars. And what does that produce? You see, over time, we have a broadening group of customers who buy more products, and the ARR, of course, is very consistent with how many products they use.

So as you go out, this has been the consolidation and growth opportunity that we've been realizing over time, with so much more to go. Another benefit is that as customers standardize on Datadog and use more of our products, they churn less. So this is a very important driver of growth in the future. Now, outside of observability, which I've been talking about, we are also addressing new markets and expanding beyond observability, which you've heard about a lot today. Observability itself is a very large market, a $30 billion+ market, and we are the leader. We have the highest market share, and we've been growing it over time. But at the same time, as we've been addressing more complex client problems, we've been increasing our TAM.

This slide illustrates what we've been doing over time in going from the blue of observability to adding security, software delivery, service management, and product analytics, expanding our market to well over $100 billion. These are all markets where we have products, we've made investments, and we're seeing traction. One of these, to revisit a slide that was shown already, is the security opportunity. It's one of those expansion markets, and you can see here that we've grown quite rapidly. We have quite a number of customers using it, and we announced one quarter ago that we've passed the $100 million mark. There are a number of different vectors for growth there.

So that was, through a more quantitative lens, a look at our growth opportunities and why we expect them to keep going. But of course, it's not just about getting the top line growing and adding value; it's also about creating a strong business model. So I'm going to spend the next bit of time on how we turn those revenues into profits and how we've been doing that over the years. The first place is gross margins. This plots our gross margins over the last five or six years, by quarter, and we've given guidance that we're planning our gross margins at around 80%, leaving the flexibility to invest in additional platforms, data centers, and products. But we've been really good about working on our platform to provide extensibility and drive cost initiatives.

We optimize our own platform very well, and you can see we've been successful over a long period of time at around 80% gross margin. You've heard many times, 'cause it's at the very center of Datadog, that we have been investing for a long time and are the leader in R&D investment. I won't repeat what you see here, but it's quite extensive, and it means we have been able to maintain R&D as a percentage of revenues at around 30%, which has been our target. That has some fluctuation, and there may be some leverage from the use of AI here, but we've been really good at investing methodically in R&D.

At the same time, one of the reasons we've been able to manage the company economically and still invest substantially in the platform is that we are very efficient in go-to-market, because of the frictionless adoption we talked about. Even though we've been investing at a very high rate and expanding our go-to-market, as Sean and Adam talked about, we are still much more efficient in how we do it than our competitors. That has allowed us to maintain a high growth rate in sales and marketing, yet keep sales and marketing as a percentage of revenues in roughly the mid-twenties range. On top of that, we've been really efficient in our own optimization and automation in G&A, and we've been able to run our G&A while scaling the company. I see a number of my G&A friends sitting here.

Thank you for all the hard work you do there, at around 5%. The combination of all of that has resulted in an operating margin that does have some fluctuation, 'cause, as we've talked to all of you about, we set an investment plan based on the opportunities. We have a consumption model, and when demand changes, it changes faster than we can hire people. Therefore, there are some fluctuations, but essentially, we are trying to invest behind the opportunity while maintaining good financial management, which has resulted in increased scalability and margins that have been, in the last couple of years, in the low- to mid-20s. Now, our financial performance is summarized on this slide.

I've talked about revenues, but while we've been investing substantially in the opportunity, we've been able to grow our operating profit at a higher rate and be good cash flow margin managers. Our cash flow margin, as you see here, was 27% last year. Our operating margin... By the way, these are all non-GAAP figures; I wouldn't want anyone to misquote that. It's all non-GAAP, and the operating margin was 22%. So we've been able to generate good cash flows. We're not very capital intensive, which allows us to be very efficient in the conversion of profits to cash flow. Okay, now our financial goals. We've shown this slide before. This is the same slide as two years ago, updated for the years, and I think what it shows is the consistency...

of the management of Datadog, despite the fact that we've been making substantial R&D and sales and marketing investments while scaling the company. And so here is a time series you can all take away. And there is a reaffirmation of our long-term target of an operating margin of 25%+, which is what we said last time as well. We think we've proven, if you look at this time series, that we've been able to invest comprehensively yet balance that with being a strong deliverer of profits. Just a few other things to say. On capital allocation, we get asked this a good deal. We are strongly cash flow generative. What are we doing with all this cash?

Well, one, we want to manage the company in a prioritized, efficient way to grow our free cash flow over the long term, and that starts with our revenues. If you compound the revenues, and you do it in a prioritized way, you will compound the free cash flow. We wanna make sure we have the flexibility on our balance sheet to invest, within our own company as well as, potentially, in the M&A market. And at the same time, we've shown and will continue to maintain a thoughtful and disciplined acquisition strategy. So far, that's been focused, no surprise, on products and adding technical capabilities to the company, and we've been very effective. Some of the products you've heard about today, most recently Eppo and Metaplane.

These are product areas we're working on, enhanced through strategic acquisitions. Finally, our target for net share dilution is 2.5%-3%. Again, we are trying to balance stewardship with being able to attract and retain the intellectual capital needed to continue to grow the company. With that, I wanna thank all of you for coming. Hopefully, you learned a lot. We're going to repeat the Q&A session by having Sean and Adam come back up with Yuka and Ali, and we'll open it back up like last time to Q&A. Please direct your questions to Megan and Eric, who are in the aisles.

Yuka Broderick
Head of Investor Relations, Datadog

Great.

David Obstler
CFO, Datadog

Thank you very much, everybody. Great seeing you.

Yuka Broderick
Head of Investor Relations, Datadog

All right. Thank you, David. So we'll start another half hour of Q&A. Same rules apply. We will start on Eric's side. Please go ahead.

Gregg Moskowitz
Managing Director of Enterprise Software, Mizuho

All right, great. Thanks for a great presentation. It's Gregg Moskowitz from Mizuho. A question for Sean or Adam: so, 2025 saw a big increase in Datadog account execs covering major accounts and key accounts. And you mentioned that in year two, their objectives shift much more towards revenue. Well, we are now entering year two for many of these folks. So how are you thinking about the likelihood of them driving much bigger land and expand with very large organizations? Thanks.

Sean Walters
CRO, Datadog

Yeah, I think, you know, with the advent of key accounts, the hypothesis a couple of years ago was that putting a lot of focus on those must-win customers and new logos was the way to do it. So, you know, scarcity drives opportunity within those accounts. The first year was a lot of the groundwork, getting meetings, getting contracts in place, and what we saw last year was kind of that all coming to life. And so we saw some really incredible lands with customers that we had been chasing for a long time, and we're seeing many of them, as they get onto the platform, grow from a use case and a product perspective, and they're starting to take off. So we're gonna continue with that in 2026, and we expect...

You know, we, we see a very strong opportunity there.

Olivier Pomel
CEO, Datadog

We're talking about it today, so we're optimistic about it.

Sean Walters
CRO, Datadog

Yeah.

Yuka Broderick
Head of Investor Relations, Datadog

Okay, great. Megan's side, the right side.

Andrew Sherman
Director of Enterprise Software, TD Cowen

Oh, yeah. Hi, Andrew Sherman with TD Cowen. Thanks for doing this. Sean, on the 53% of customers using all three pillars, that's gone up a bit versus two years ago, but obviously still a long way to go there. What is usually the product missing there? I think it's probably logs. It's a highly competitive area. I know your business is now $1 billion+, which is great, but what are some reasons why customers would buy that or not? And over the next few years, where do you see that number going? Thanks.

Sean Walters
CRO, Datadog

Yeah, I mean, I think every customer engagement, every opportunity that we enter is different, and what's important for us and our sales team is to listen to what the customer needs and the pains that they're exhibiting at that moment. So, you know, we're not going in to try to force three pillars on everybody. Everybody who comes into Datadog as a salesperson gets trained on the three pillars; that's the core of what we train people on. But we get them to go in there and ask good questions to understand the challenges that our customers are having, and then get started with maybe the three pillars, maybe some other products.

But staying engaged with those customers, typically, you know, we'll see the growth into the three pillars, and we're just gonna keep doing that, whether it's Logs or APM or DEM. You know, the more the customers spend time in the Datadog platform, the more value they see from the solutions and the easier it is for them to add on more of our products.

Yuka Broderick
Head of Investor Relations, Datadog

Yep. And Andrew, I wouldn't think of it as one particular pillar that's missing-

Andrew Sherman
Director of Enterprise Software, TD Cowen

Yeah

Yuka Broderick
Head of Investor Relations, Datadog

... right?

Andrew Sherman
Director of Enterprise Software, TD Cowen

Yeah.

Yuka Broderick
Head of Investor Relations, Datadog

Every combination is represented there at some scale, and each has a lot of opportunity for us to work on. All right, Eric's side.

Gabriela Borges
Managing Director of Software Equity Research, Goldman Sachs

Hey, good afternoon. Gabriela Borges, Goldman Sachs. Thanks for having us. I think this one is for Sean and for David.

...really appreciated all of the customer examples you shared with us, the ARR over time and the products on the bottom. What's curious about the charts is that they all go up and to the right, but the pattern within them tends to be a little bit lumpy, and you have periods of time where ARR may go down before it goes up. So my question for you is, what can you do to get ahead of some of those conversations to maybe initiate cross-sell, and what is the underlying driver of those temporary dips in ARR? Thanks.

David Obstler
CFO, Datadog

Well, we lived through some pretty volatile times in the last five years. We lived through a period of very rapid growth, then, after COVID, an optimization trend, as we talked about, then a stabilization, and then a re-acceleration. So I think that essentially we have learned a lot, and what we've done is broaden the platform. We've gotten stickier. We are working with our clients, through some of the things that Adam and Sean can talk about with our account managers, et cetera, to get clients to use the platform in an optimal way, and to try to sell through the platform value, at which I think we've gotten a lot better.

Some of that volatility has to do with the end market, and some of it has to do with our growth in helping become partners with clients over a longer period.

Adam Blitzer
COO, Datadog

Yeah, I would also say for some customers, it's very intentional account management work with them. So the trend with a customer may be very up and to the right over many years, but in a given year, the customer may have exceeded the usage that they'd originally planned for. And so we work with them; as I mentioned in the value presentation, we'll give them better terms or more discounting if they commit to more over a longer period of time. So they may be past what they expected. We say: "Hey, let's look at how you're using Datadog today.

Let's do a longer-term commitment with you, and it might drop you temporarily from where you were, and then over the long haul, you'll step back up to a higher place than where you started."

Olivier Pomel
CEO, Datadog

Remember, we are a usage-based model, right?

Adam Blitzer
COO, Datadog

Yeah.

Olivier Pomel
CEO, Datadog

If you were looking at a seat-based company, you'd have a perfectly smooth line into infinity. In our case, customers use more or less of our products. Usually, they use more over time, as you can see, and, you know, we didn't actually cherry-pick all that much. Every single account looks the same way, for all of our customers. They grow with a little bit of choppiness, and we do see optimization on a regular basis. Sometimes we actually tell customers to optimize: "Hey, it doesn't look healthy. You should do something about it, and this is how we can help." Sometimes they're about to renew, and they want to optimize before they recommit, so they have a better idea of what it is they need, so we see some of these contractions.

Then they start growing again because we deliver more value. They grow on their end. They buy more products, et cetera. So that's the motion we're going through. That also explains some of the conservatism we put into guidance: because we're a usage-based model, we're very confident about where we'll be in the midterm, but in the short term, we do not know exactly how usage is going to trend one month from now.

Gabriela Borges
Managing Director of Software Equity Research, Goldman Sachs

Yeah.

Yuka Broderick
Head of Investor Relations, Datadog

The final part is, you know, our customers have different times when they're busy, right?

Olivier Pomel
CEO, Datadog

Yeah.

Yuka Broderick
Head of Investor Relations, Datadog

Some of them are e-commerce companies, and the holidays are a big season for them, and then usage ramps down, right? Some of them are big media companies with specific events, right? And so that's the benefit of the usage-based model for our customers as well, and the reason why you don't necessarily perceive all of that volatility in our numbers is that, with a diverse base of 32,700 customers, they're all experiencing their own volatility, each on their own schedule. And with that broad set of customers, that broad diversity of industries represented, it doesn't tend to show up in the quarterly aggregate numbers.

David Obstler
CFO, Datadog

One of the reasons that cohort chart is so important is that when you look at it and pair it with gross retention numbers that are so high, you see the clients stay with us. And when you look at the length of those cohorts, they might move; they may not be a straight line up, but over time, they compound up, which is a very powerful driver of our business model.

Yuka Broderick
Head of Investor Relations, Datadog

Okay, great. Thank you. Megan Side.

Peter Weed
Managing Director and Senior Analyst of SMID-Cap Software and Cybersecurity, Bernstein

Yeah, this is Peter Weed from Bernstein here. You know, this is maybe a little bit of a question for Olivier and a little bit of a question for Sean. You know, I think you've been telling a really powerful story, not just this year, but over the years about, you know, the coverage that you're getting across all of the different personas, both in, you know, the kind of operations function and in the product function. But maybe there's a different way of looking at this, which is when you kind of step back and think about, like, how you're serving the leadership of those organizations, you know, the vice president of operations-

David Obstler
CFO, Datadog

Yeah

Peter Weed
Managing Director and Senior Analyst of SMID-Cap Software and Cybersecurity, Bernstein

... the vice president of product, and how that influences both product and commercial conversations that you're having. You know, what's unique about Datadog's positioning, where you could step away from being trapped serving an individual persona and really help be that line-of-business application that helps these senior leaders succeed, almost regardless of the individual tooling that the roles on their teams might be using?

Sean Walters
CRO, Datadog

Yeah, I mean, you know, as a sales motion, we're definitely trying to go very broad. We have a very broad landing space, having a very large product set. So we're definitely selling across lines of business, across functions, across personas all the time, and not every one of them is right at the right time. So, you know, it's a great thing to have as a salesperson, because we can solve problems across the board, and then, you know, the best seller internally for us is when we get one person to be a champion at an organization in the product team, and they do a lot of selling for us. That's usually what you see, and that's why the expand happens so rapidly, because-

... you know, maybe we found the right person at the right time, and the success of that person kind of bleeds into the other functions and helps us grow broader, and having a broad solution set helps it happen pretty quickly as well.

David Obstler
CFO, Datadog

One thing I've seen recently from talking to some of our largest customers is, you know, of course, Datadog started with SRE teams, and they think about SLOs, service level objectives, and all the metrics on how their systems are running. But at the executive level at these companies, they've turned that around into business SLOs, right? Like, how is the business running? I'll use the example of a financial services company that's processing an incredible amount of payments at any given time. The business doesn't care that there's some latency in some system somewhere, or that some problem was introduced. They care about the throughput of payments on the platform.

They're using Datadog, which was really built for dev and ops teams, to run the business in real time, and that's happening more and more with our largest customers.

Olivier Pomel
CEO, Datadog

Yeah, we're leaning more and more into that, into user analytics and business analytics. I mean, look, in digital companies, the applications are the business, so if you instrument the application, you instrument the business. And that's something we saw happen organically with our cloud-native customers originally, you know, where the CEO had Datadog on their desk, because it told them exactly, in real time, how their business was doing. Now, we are building products for the rest of the market, which is not necessarily cloud-native, to go through the same thing.

Another way in which we make leadership successful is that we help actually implement transitions, you know, change, migrations, like adopting the cloud, all those things that are really hard from a process, transition, and people perspective. They're more used to buying software that doesn't get deployed or that people don't use, and we never have that problem. And that's one way in which we shine. We help people shine: "Hey, we did this. We adopted this tool last year." We actually mentioned one of those customers in our latest earnings call, a financial institution in Latin America that started adopting us last year.

And, you know, they had a pretty large deployment initially, but we were still a small fraction of the organization. And then what they did is, they ran surveys with their teams: What do you use? What do you want to use? What is making you more productive? And they got overwhelmingly positive responses about using Datadog. It made the executives who made the choices look pretty good. They made the right choices. They got their product used. They had a business impact. They had an impact in transforming the organization, and as a result, we could scale quite a bit with them, and that's a motion that we're repeating in many different places.

Yuka Broderick
Head of Investor Relations, Datadog

Great. Okay, on the left side with Eric.

Koji Ikeda
Managing Director and Senior Software Analyst, Bank of America

Yeah. Hey, Koji Ikeda from Bank of America. Thanks so much for doing this. Maybe to continue the conversation from the previous question: when I looked at the slide on the median spend of the Fortune 500 customers being only $450K, I thought: Oh, there must be a lot of opportunity there. And so, digging specifically into that market, you know, what are you guys doing to target those customers to expand that median spend much higher than $450K? And maybe digging a little deeper on the TCV bookings, you know, $4.5 billion+, it looks like, in 2025.

Maybe bifurcate the growth in the TCV bookings between the biggest customers, like the Fortune 500, and the more mid-market-sized customers that are spending a lot with you guys.

Sean Walters
CRO, Datadog

Yeah, I mean, I'd say the Fortune 500 customers obviously get a lot of attention, whether they're existing customers in our major accounts group, or existing customers in traditional strategic enterprise or key accounts. Or maybe they're not a customer, and we want to focus on them and make them one. They're very complex organizations, and sales cycles are very long. So, you know, it's again about working with the customer to understand, and many times they're in competitive contracts that run three, five years; they can be very long contracts that we're working against. So we spend lots of time working with those customers, in key accounts or otherwise, getting net new logos, landing where we can land, and then building on that.

You know, we're seeing more and more tool consolidation over the last couple of years. Once we're ingrained within those organizations, we expect those to continue to grow with us as well. There's lots of focus on that.

Olivier Pomel
CEO, Datadog

But there's a lot of opportunity to be had there, you know. And in a way, you could say that for a few years, we were a bit victims of our own success, in that the product grows really well once it's deployed with customers. And as a result, it was, I would say, maybe a little too easy. I don't want to say easy. Nothing's easy. But it's a lot easier to grow an existing customer from, say, $10 million to $15 million in revenue by adding more products, driving more transformation, et cetera, than to get 10 customers from zero to $500,000.

But by getting 10 customers from zero to $500,000, you create a lot more opportunity for the future. We've made a number of changes internally so that our organization got a lot better at pursuing these opportunities and not just the, you know, so-called easy growth with some of the larger customers.

Yuka Broderick
Head of Investor Relations, Datadog

Great. On the right, Megan Side.

Howard Ma
Director and Equity Research Analyst, Guggenheim

Hi, Howard Ma with Guggenheim Securities. I believe this one's mostly for Sean, maybe a little for Adam, and it's a multi-parter. I'll try to keep it concise. It's on geo expansion. So as you expand into more geographies, I guess, number one, are there notable differences in certain geographies? 'Cause you're making a pretty big expansion. India, for instance; we're hearing more about offshoring to India. Like, I feel like it's less Eastern Europe, more India. With a lot of developers there, do you have an opportunity to shift left more? Industry regulations, such as archiving and storing log data, for instance; is that something to look out for?

Then, I guess, the next part is FDEs: is there an opportunity to deploy forward deployed engineers to, for instance, help organizations understand how to use Bits AI in ways they didn't before? And could... Sorry, really a multi-parter. The last part is on new logo contribution. It's really hard for a company of your scale to increase the relative mix of new logo contributions. So how big a part can geo expansion play in that?

Adam Blitzer
COO, Datadog

... All right, so, you were like, "This is a two-part question." It's actually more of a comment or an opinion. I'll speak a little bit to markets being different from one another. The good news is that there's opportunity everywhere. The observability market is large and growing incredibly quickly. One proxy, you know, not an exact proxy, but one way to think about opportunity for Datadog is: where is cloud adoption taking off, or where has it taken off? How mature is it? Largely, where clouds have been successful, we have followed. Sometimes we help people transition to the cloud, but we're also a very good choice for people using modern architectures and people who have standardized on clouds.

There are markets in particular that are also interesting because, in some cases, they've sort of skipped a generation of computing, right? There's a lot less legacy, and they either went all in on mobile or all in on cloud, and those lead to interesting opportunities for us. To your point on regulatory concerns, that varies wildly by market as well. And so you've seen us make investments in data centers, partnering with the cloud providers, and a lot of that is about giving choice to our customers, who might have regulatory requirements from a geographic standpoint. They might be in regulated industries, and we want to give them that choice. And then we're also doing that from a product angle.

So you heard this morning about BYOC, or bring your own cloud. That gives us, again, more deployment options for customers that might be facing regulatory hurdles in different ways. So that was, I think, two parts of your five-parter.

Olivier Pomel
CEO, Datadog

We can talk about the FDEs-

Adam Blitzer
COO, Datadog

Yeah

Olivier Pomel
CEO, Datadog

... so, forward deployed engineers. We now have forward deployed engineers on staff, and they're useful in a number of situations. I should say it's a term that's a little bit elastic; it's been used in many different ways in the industry. But for us, they're useful in a few different situations. One is customers that need to adopt AI and need to transform, where we need to understand how it's going to apply to them. The second is the new type of AI lab customers that have large needs in an emerging area where we're probably building products that don't exist yet, and we build them by helping those customers in real life.

Adam Blitzer
COO, Datadog

So I remembered one more part, on the India piece specifically. You'd mentioned offshoring. We see a lot of different opportunities in that market, but one of the interesting ones is a little bit less about offshoring by non-Indian companies and more about the massive, cloud-first, cloud-native, tech-forward companies being created in India every day. And the interesting thing about a tech company in India, especially a B2C company, is that if you get any traction, you have instant scale, right? So if you think about food delivery or streaming, anything that we interact with as consumers, when you apply it to that population, you just have massive scale, and so the need for something like Datadog becomes pretty pronounced.

It's an exciting market in the vector that you mentioned, but also in the vector of interesting companies being created every day at massive scale.

David Obstler
CFO, Datadog

One difference that I think has been pretty important in our development is the importance of channels in some of these international markets. That is true for Brazil, Korea, and Japan in some ways. There are a number of markets where it wasn't enough to have a direct sales team; we had to also develop channel relationships to make that work, and so that's been very important in a number of the international markets.

Yuka Broderick
Head of Investor Relations, Datadog

Okay, great. All right, I think we're going to Eric's side on the left.

Alex Zukin
Managing Director, Wolfe Research

Hey, guys, Alex Zukin with Wolfe Research. I wanted to ask a question about the AI natives, specifically the adoption cadence, both from the types of products that they start with, the size that they're starting at, and maybe the pace of expansion within that cohort specifically, and maybe what you're seeing develop that's either the same or different, and kind of how we should think about that going forward.

Olivier Pomel
CEO, Datadog

It's very similar; the types of products adopted are very similar to other companies'. You know, it starts with the three pillars, and then you get more into user experience, sometimes more into developer experience. There's a mix, basically, of everything being used in these companies. The growth can be a lot faster than at other companies just because their consumption of infrastructure is pretty massive. But again, that also depends on the customer. Some customers are growing more slowly. Some customers have two distinct sides: one side where they do research and train models, and another where they run applications, and they use us on one side but not the other. Maybe they have some homegrown thing on the other side.

But we've seen both. We've seen some of these labs approach us for the live side, and some other labs now approach us for the training side. So we've seen a bit of everything there.

David Obstler
CFO, Datadog

So, yeah, I think they are. We created this distinction, and we own it, but they are essentially cloud natives-

Olivier Pomel
CEO, Datadog

Yeah

David Obstler
CFO, Datadog

... fast-growing cloud natives, meaning they don't have legacy or on-premise infrastructure. Therefore, it's very important that they use a modern observability platform like Datadog. There's great product fit. They are using it principally for production environments, so the pillars plus. And then, as Oli made the point, they're fast-growing cloud natives because of their demand environment. So I think the major difference has been that some of them are growing very fast, but they're otherwise operating like cloud natives. Oli, anything else that you want to add?

Olivier Pomel
CEO, Datadog

Yeah, with the only caveat that they're not always very cloudy. In some cases, they're going to consume infrastructure from a single large data center, or a couple of large data centers, that are nominally provided by a cloud provider but are really single tenant. It's very specialized infrastructure, but we cater to that very well as well.

Yuka Broderick
Head of Investor Relations, Datadog

Great. The right side with Megan.

Fatima Boolani
Managing Director and Co-Head US Software Equity Research, Citi

Fatima Boolani from Citi. Thank you so much. I wanted to direct this to Sean and Adam. I was hoping you could opine on the real-world implications of taking Olivier's and the team's vision of an autonomous self-healing environment to commercializing that with customers. So I'd love to hear your perspectives on how customers are reacting to that, and specifically, if that would involve, over the course of the next couple of years, a fundamental rethinking around how you are pricing the platform.

And then, as a related question for David: on the slide with the customer cohorts and the growth in those cohorts, I'm curious if you can give us a little more detail on whether the incremental profitability, the profitability profile, or the unit economics of a $10 million spender is materially different from a $100,000 customer?

Adam Blitzer
COO, Datadog

Do you wanna go...?

David Obstler
CFO, Datadog

Should I start with easier?

So, yeah, we give volume discounts. But given our broad set of customers, one of the reasons we've been able to keep our gross margins the same is that we have a whole next set of customers coming in that are not spending $10 million. So our weighted average has stayed roughly the same. When you look at those cohorts and weight them by their ARR, they've roughly, you know, stayed the same, despite the fact that we're giving volume discounts to the largest customers.

Adam Blitzer
COO, Datadog

Um-

Fatima Boolani
Managing Director and Co-Head US Software Equity Research, Citi

I noticed before you started answering that, you said, "Do you wanna start with the easiest question?"

Adam Blitzer
COO, Datadog

That is the-

Fatima Boolani
Managing Director and Co-Head US Software Equity Research, Citi

It's not a competition, David.

Adam Blitzer
COO, Datadog

No, that's... Yeah, I mean, I would just say, on the vision around autonomous, self-healing systems: when I speak to our most sophisticated customers, and they draw up where they want to be a few years from now, that's the exact vision they draw up. It's a difficult place to get to, right? We're hard at work on it, but it just makes so much sense for the future of observability.

Fatima Boolani
Managing Director and Co-Head US Software Equity Research, Citi

Do they want you to tell them how they should pay for these outcomes? You know, we're absolutely moving to an outcomes-based pricing modality for almost all of software. Maybe we're not gonna get there in a year or two, maybe it's faster than that. But do they want you to tell them how it should be priced, and what sort of negotiating leverage do you have in those conversations and scenarios?

Adam Blitzer
COO, Datadog

Well, I'd say right now we're probably, you know, too far away to be talking about pricing models for a future state with the customer. I think it's more that customers are thinking about: How could I achieve this with Datadog in the future? And again, you know, we shape our product roadmap around it, but I wouldn't say we're at the point where customers are thinking, "Hey, two years from now, how am I gonna pay for my observability, and how does it work?"

Olivier Pomel
CEO, Datadog

We're not in the same situation as most other software companies, in that we don't charge per seat.

Adam Blitzer
COO, Datadog

Yeah.

Olivier Pomel
CEO, Datadog

We charge per usage, and our usage is typically related to some other fundamental usage our customers have, such as their usage of infrastructure or network or storage or something else. So I would say that the question of how the pricing model can work is a lot easier to solve for us.

Adam Blitzer
COO, Datadog

Yeah.

Olivier Pomel
CEO, Datadog

It doesn't mean we have a pricing and packaging in mind just yet. We still have to see exactly what the shape of the product is and what the market will bear. But, you know, it's not a big shift or a big turn for us to support any of that.

Yeah, Matt?

Sean Walters
CRO, Datadog

No, go ahead.

Olivier Pomel
CEO, Datadog

I will say the vision of the autonomy is something that resonates with customers, like they do want to get rid of these pains. There's a big difference from when we stood here two years ago. Like, two years ago, this was pie in the sky. Today, with the recent advances of AI, the coding agents, et cetera, et cetera, it's a pie on a very, very high shelf we can't reach, but we can see it. And our customers also expect to get it at some point. I think they can see it too, and so we're—that's why we're pretty hard at work.

Sean Walters
CRO, Datadog

Yeah, and everybody has that vision, but I think we still have a lot of problems to solve. Even customers have a lot of organizational problems to solve before they can even get there, so...

Yuka Broderick
Head of Investor Relations, Datadog

Okay, great. Left side, there.

Yun Kim
Managing Director, Loop Capital

Thank you. Yun Kim, Loop Capital. Sean, maybe, can you talk about the partnership opportunity with cloud service providers, especially given that most of the AI workloads, where there's obviously a lot of growth, are really running on CSPs? And last time I checked, there's a lot of CapEx spending to support that in future years. So obviously, there's a really huge growth opportunity in targeting these AI workloads. Is there a joint partnership opportunity with CSPs that you're working on?

For instance, targeting the deployments and the workloads specifically, rather than the customers independently? And how much of your Datadog for AI is available on their app stores and marketplaces?

Sean Walters
CRO, Datadog

I mean, the hyperscalers and the CSPs are the relationships that we've had the longest in our channel and alliances org. We're a cloud-native company, and we started working with them very early on; even in the early stages of our channel and alliances org, that was the place where we spent a lot of time. We still spend a lot of time there, and I think we're getting better and better at our co-sell motions, our technical collaborations, and the things that we're doing with them. So I'd say, in general, our sales teams are always in the field thinking about: How do we partner with our hyperscaler partners? Whether it's as simple as sharing notes on accounts, or actually planning and strategizing around accounts and how we win them together.

Adam Blitzer
COO, Datadog

To your question on products, they're all available on the marketplaces of the large cloud providers. AWS, for example, at their most recent re:Invent conference, announced their top partners in terms of sales through their marketplace, and we were one of them.

Yuka Broderick
Head of Investor Relations, Datadog

Great. This will be the last question on the right with Megan.

Arthi Lulla
Vice President of Equity Research, JPMorgan

Hey, Arthi from JPMorgan here for Mark Murphy. Appreciate you giving me the last question, and great presentation. Sad it only happens every couple of years. Any way to quantify or conceptualize the sheer volume of code that's being produced today? You know, with code generators, Claude Code, OpenAI Codex. Is it a subtle or an overwhelming amount of code being created, and is it moving into production and driving activity for you guys?

Olivier Pomel
CEO, Datadog

Well, we see the increase in code. All the signals we get from the repositories that are linked to us, or that we can see in open source, point to a lot more code being generated. So yes, it's there. And if it actually ends up in the repository, it's going to end up being shipped, so we also see that coming. I will say most companies are still early in the transition there. The AI labs are completely in on it, and the brand-new startups are completely in on it.

The rest of the companies, some of them see the light, and some of them don't see it yet. And even when they see the light, it takes a while to get all the engineers to transform, to get all the new processes to work that way, and to identify what the new bottlenecks are. So I would expect quite a bit of that to happen this year, and we should see where we are at the end of the year.

Yuka Broderick
Head of Investor Relations, Datadog

Okay. All right. Well, with that, I am gonna conclude our Investor Day. Thank you so much to our presenters for sharing the Datadog story. Thank you to all of you for spending four hours with us. If you wanna watch it again, a replay will be available shortly on the website, as well as the slides. So thank you very much. Have a good evening.
