Datadog, Inc. (DDOG)

Investor Meeting

Oct 27, 2021

Yuka Broderick
Head of Investor Relations, Datadog

Good morning, everyone. My name is Yuka Broderick, and I'm Head of Investor Relations at Datadog. I'd like to welcome you to our investor meeting. We have a great lineup of senior leaders with us to share insights on the drivers of our long-term opportunities and how we are building out the Datadog platform and go-to-market to execute against those opportunities. First, I'd like to briefly run through our legal disclaimers. During this presentation, we will make statements related to our business that are forward-looking under federal securities laws and are made pursuant to the Safe Harbor provisions of the Private Securities Litigation Reform Act of 1995, including statements related to our strategy, the potential benefits of our products and investments in R&D and go-to-market, and our ability to capitalize on our market opportunity.

The words anticipate, believe, continue, estimate, expect, intend, will, and similar expressions are intended to identify forward-looking statements or similar indications of future expectations. These statements reflect our views only as of today and not as of any subsequent date. These statements are subject to a variety of risks and uncertainties that could cause actual results to differ materially from expectations. For a discussion of the material risks and other important factors that could affect our actual results, please refer to our quarterly report on Form 10-Q for the quarter ended June 30, 2021, filed with the SEC on August 6, 2021. Additional information will be made available in our quarterly report on Form 10-Q for the quarterly period ended September 30, 2021, and other filings and reports that we may file from time to time with the SEC.

Our filings with the SEC are available on the investor relations section of our website. A replay of this presentation will also be available there for a limited time. Let me briefly discuss our agenda for today. First, CEO and co-founder Olivier Pomel will speak to Datadog's long-term opportunities. CTO and co-founder Alexis Lê-Quôc will comment on Datadog's platform and how key design choices were made to help our customers and differentiate our products. We will dive deep into our platform. SVP of product and community Ilan Rabinovitch will talk about Infrastructure Monitoring, how we expanded from there to the broader Datadog platform, and he'll talk about how we're starting to help developers. SVP of product management Renaud Boutet will speak to how our APM and Log Management products have expanded over time. VP of product management Pierre Bétouin will talk about our Cloud Security Platform.

We'll take a quick break, and then Chief Product Officer Amit Agarwal will speak to our customer focus and pricing philosophy. COO Adam Blitzer will discuss some characteristics of our go-to-market motion. CFO David Obstler will discuss some financial takeaways from this presentation. Finally, we'll have a Q&A session with Olivier, Alexis, and David. Please feel free to submit questions at any time in the Q&A submission window. With that, I'd now like to pass it over to Olivier.

Olivier Pomel
CEO and Co-Founder, Datadog

Thanks, Yuka. Hi, everyone, and thank you for joining us for this investor meeting. My name is Olivier Pomel, and I'm the co-founder and CEO at Datadog. Before I jump into my presentation, I wanted to take a moment to show you all the product announcements we are making at DASH this week. We're only going to touch on a few of these today, but you can find more information on them on the video replay of the DASH keynote. Now, you'll hear from several of us today, but I wanted to get you started with the drivers of our long-term opportunities and explain why we are excited about these opportunities right now. Because even though we've been steadily growing for the past 10 years and have now been a public company for the last 2, it is very clear to us that we are just getting started.

What's happening today in IT? We are at the intersection of two incredibly broad and deep transitions: the combination of digital transformation and cloud migration. Just to explain what I mean here: what digital transformation implies is that both the scale and impact of software applications are growing all the time, and so are the scale and impact of the teams that are building, running, and securing them. The ability to interact online with customers, employees, and supply chain is becoming table stakes for companies of all sizes. Competitive differentiation now comes from software and the ability to understand your own data and quickly iterate on your applications. Now, cloud migration is a big part of this too.

Workloads are moving from legacy IT to the cloud, and the reason companies are moving to the cloud is that it enables them not only to scale rapidly, but also to change their minds, their products, and their businesses extremely easily. We've seen a lot of that through the pandemic. Not only are these two deep transformations inherently linked, as cloud migration is enabling digital transformation, they also combine to drive an order-of-magnitude change in our growth opportunity. What is very striking about these transformations, though, is that they are driving a true explosion in complexity. Whether you're talking about the sheer number of components in use, the scale in compute units, the frequency of changes, or the number of people contributing, they are all increasing at an unprecedented pace.

The teams who are designing, building, operating, and securing these systems have no choice but to operate in narrow silos as they can't possibly keep up with this complexity. That's the world we live in and the problems our customers face. Now, what does it have to do with us? To put it simply, Datadog exists to solve this enormous problem of complexity for our customers. We connect to all of their software components, and you can see an illustration of that on screen. We also scale with all of the infrastructure compute units they deploy, be it 100 VMs that barely ever change or millions of containers coming up and down every second. We understand the way infrastructure and applications are continuously changing, and we connect teams to each other across functions. Obviously, there are many ingredients to building a successful product such as Datadog.

If I were to reduce them to the two most important ideas, it would be first, that we build Datadog from day one as an open-ended unified platform. All of our products are tightly and deeply integrated at the architectural, the data, and the user interface layer. The same platform serves end-to-end use cases from one data set to another and one product to another across team boundaries. The second key ingredient would be our relentless focus on delivering a product that can be easily adopted. We call it simple, but not simplistic. Simple, because our product should be deployable in minutes by mere mortals and show value extremely quickly. In other words, Datadog should be as approachable and as easy to adopt as a spreadsheet.

By not simplistic, we mean that after getting started with Datadog, our users can endlessly build on it by adding use cases, data sets, and processes. The result is a platform that is deployed everywhere and used by everyone. Deployed everywhere because it touches every infrastructure and application component at every layer. Used by everyone across application developers, operations engineers, security engineers, but also business users, support teams, and all the way up to the C-levels who can see their business operate in real-time through the prism of their applications. You'll hear more today about some of our differentiators. At its core, this is how we built Datadog to successfully break down silos for our customers' organizations. One of the key outcomes when adopting Datadog is that it is deployed everywhere and used by everyone.

This gives us a lot of surface of contact with our customers, which in turn allows us to solve a bigger and bigger problem for them over time. All that while benefiting from our shared unified platform. We believe that we have proven our ability to execute against this opportunity by consistently innovating and adding to our platform, as well as entering new product categories at a rapid pace. We also believe we have developed an engineering culture that delivers rapid innovation, technical elegance, and operational efficiency. This culture has led to an accelerating product development cadence. You can see on screen our releases of products and major platform features over time. This shows the acceleration of our pace of innovation and our investment in R&D. We have moved from a single product in 2016 to 13 generally available products today.

In addition, we have expanded our platform with many features that solve important customer problems. Not only have we brought many new products to market, we have proven that we can scale those products, as the Datadog observability platform facilitates broad adoption and rapid growth of new products. What does this mean in terms of our opportunity as a company? Well, for starters, our core observability market is extremely large and growing quickly. We believe our observability platform addresses a significant portion of the IT operations market, which Gartner puts at $38 billion in 2021, growing to $53 billion in 2025. Now, our fiscal 2021 revenue guidance midpoint is $941 million, which tells you that it's still very early for us in this market. In other words, we are barely scratching the surface of a very large opportunity in observability alone.

We're not stopping there, and we're at an even earlier stage in our journey in other areas. The next big initiative for us is in security, where the industry is beginning to think and talk about DevSecOps, the breaking down of silos between development, operations, and security teams. We are at the very beginning of this journey, but we feel confident that we will have a big part to play as the industry moves into the cloud. For one thing, this transition to DevSecOps reminds us a lot of the DevOps movement we've helped catalyze over the past 10 years. We're also confident that our unified platform, along with the richness of our observability data, will prove to be an advantage. Now, we think the market is still coming together for cloud security, but we believe it will be sizable.

We think the security TAM opportunity will grow over time to be of the same order of magnitude as observability. We view our opportunities as being far broader than that, and that's what makes me incredibly excited about Datadog for the next decades. Those two forces of digital transformation and cloud migration will continue to drive increasing complexity and the need to break down silos in many other fields in the future. Our position in our customers' organizations, being deployed everywhere and used by everyone, puts us at the center of many new use cases. We're already on our way with security with our Cloud Security Platform, and we're announcing the private beta of our Application Security product here at DASH.

We are also announcing the general availability of our CI Visibility product as we begin to bring observability to pure developer workflows, and we will share more about that in a bit. There are other sizable opportunities for us to pursue, some of which we've illustrated here. We have a lot of opportunity in front of us, and we have a lot of work to do to execute on this vision. You're going to hear today from a number of our leaders, who will talk about some of the key choices we've made, how they're playing out on our platform, and how they support our long-term vision. We hope it helps you understand where we are and where we're going. That's it for me. I'd like to turn it over to Alexis, and I'll be back with you for the Q&A session.

Alexis Lê-Quôc
CTO and Co-Founder, Datadog

Thanks, Olivier. Hi, everyone. My name is Alexis Lê-Quôc, and I founded Datadog with Olivier in 2010. I lead our engineering efforts here as CTO. When we started the company, the public cloud was just getting going. AWS launched in 2006, followed by GCP in 2008 and Azure in 2010. Of the larger monitoring companies out there today, we are the youngest. We're also the only one of that cohort that grew up as a cloud-first company; there is no cloud-native company in the space but us. At this point, we've spent 11 years living and breathing the cloud. What happened during this period? A massive increase in the complexity of software stacks, which in turn has driven up the volume and the complexity of observability data.

The reason there's a lot more data now is summarized in this graphic. This is what an application stack looked like up to 2002: you had bare-metal hardware, an operating system, and an application server. You write your application, and that's your stack. By the early 2000s, virtualization took off and was completely commoditized, running on multi-core machines. You take the same piece of hardware but split it into multiple virtual machines. From a monitoring standpoint, you no longer have one application per piece of hardware, you have multiple. Maybe two, maybe five or 10. You're increasing the amount of data by a factor of two, five, or 10. In 2015, there was a further slicing of infrastructure with the advent of containers. Each physical machine can already hold 10 virtual machines, and each VM can run 10 containers.

It's 10 by 10, or 100x the amount of data, to understand what the application is doing. Then over the past two or three years, the industry has been adopting serverless architectures, where the cloud provider manages all the underlying computational resources and you're just sending the code. Now each single function needs to be tracked individually. The number of things you're tracking has grown exponentially, and those things are more ephemeral. We're talking about a pretty overwhelming amount of data, represented here on the Y-axis. By the way, we're paying a premium on the rate of change as well; that's the X-axis. The faster the software development cycle moves, the better off we are. The faster we can go to market, the more features we can introduce.
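The compounding described above can be made concrete with a quick back-of-the-envelope calculation (the figures are illustrative round numbers from the talk, not Datadog's actual fleet sizes):

```python
# Illustrative: how the number of monitored entities compounds as
# infrastructure is sliced more finely. All figures are hypothetical.
hosts = 100            # physical machines, circa 2002
vms_per_host = 10      # virtualization era
containers_per_vm = 10 # container era, circa 2015

bare_metal_entities = hosts
vm_entities = hosts * vms_per_host
container_entities = hosts * vms_per_host * containers_per_vm

print(bare_metal_entities)  # 100
print(vm_entities)          # 1000  -> 10x more things to track
print(container_entities)   # 10000 -> 100x more things to track
```

Serverless pushes this further still, since every individual function invocation becomes a thing to track.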

Twenty years ago, you were at the forefront of the industry if you could release a batch of changes once a month. Today, you're a laggard if it takes you an hour to get an update out. More things change faster. How do we keep track of all of this? We think it's tagging: labeling everything that's going on with characteristics you can search for and analyze. In the past, when there was physical hardware with an application tied to each piece, the tagging was often done by labeling each thing with a unique identifier, for instance, a machine name. Looking at each thing individually does not work in the cloud. Unified tagging is the key insight that makes this work, because nothing else is stable in the cloud.

Unique identifiers like machine and container names come and go. Let me give you an example. Here we can see all the containers that run in a particular cloud provider availability zone for a particular application. As part of a release, I want to upgrade the containers zone by zone. What I care about is being able to find all the containers that are running that particular application in one zone. I don't care which specific containers they are, so long as they're in that zone and running that application. The way we can do that is by having both the zone and the application as tags. It's a very declarative way to think about things. Instead of saying it's A, B, and C, it's really everything that satisfies the following properties: they have the following tags. That's what stays the same.
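The declarative idea above can be sketched with a toy in-memory filter (hypothetical tag names and data, not Datadog's actual query API):

```python
# Toy illustration of declarative, tag-based selection: instead of
# naming containers A, B, C, we ask for "everything whose tags satisfy
# these properties". Tag names and values here are hypothetical.
containers = [
    {"id": "c-101", "tags": {"zone": "us-east-1a", "service": "checkout"}},
    {"id": "c-102", "tags": {"zone": "us-east-1b", "service": "checkout"}},
    {"id": "c-103", "tags": {"zone": "us-east-1a", "service": "search"}},
]

def select(items, **required_tags):
    """Return every item whose tags include all the required key/values."""
    return [
        item for item in items
        if all(item["tags"].get(k) == v for k, v in required_tags.items())
    ]

# "All the checkout containers in zone us-east-1a": the set of matching
# containers changes as the cloud does, but the query stays the same.
to_upgrade = select(containers, zone="us-east-1a", service="checkout")
print([c["id"] for c in to_upgrade])  # ['c-101']
```

The query names only properties, never specific containers, which is why it keeps working as instances come and go.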

The set of things changes based on what's happening in the cloud. Unified tagging is what makes our products usable, even as our customers scale in all dimensions. As you expand your footprint, you don't need to reconfigure; that's what limited the prior generation. Other competitors will say they have tagging, but we believe no one else does unified tagging as well and as deeply as we do. We came up with this idea in 2009, and that was the key insight that made it work. Everything is centered around tagging from a technical standpoint. It's difficult because tags are free-form and can be created at will, and because you have lots of them, so you have to be able to manage data with many dimensions and high cardinality. Our tags cover all cloud providers, all products, all applications, everything.

We've added smarts so that the right tags show up automatically. We have architected the whole back-end around tags so that they're completely unified. That unified correlation happens in a way that is both automatic and invisible for customers. We have adapted our data stores to the volume, velocity, and variability of the data that arises from these tags. All these advancements make our unified tagging a powerful system for our customers to aggregate and contextualize all of their data, no matter what the source. Let me move on and talk about the effort we put in designing our platform for easy use. We strongly believe the platform makes the classical segmentation in products artificial. For go-to-market, we do sell separate products, but in the day-to-day, our users don't really care about segmentation.

In our platform, we have to give users the capability to pivot to the next logical steps in their investigation so that they can understand what's going on. We spend a lot of time on what the next step is going to be. We can use AI to help that decision-making process. Where you go next depends on what you're looking at and how it's connected to the rest of the data we have. We spend a lot of time thinking about how we can make that easier for our customers. Our thousands of small customers keep us honest. If we're making things too complicated, they'll let us know. Before I go, I want to talk a bit about product innovation and how we at Datadog approach it.

'Cause sure, we make plans and execute on them, but there are a lot of cultural elements that we've refined over the years. In that sense, we think they're difficult to replicate. Some things impact and inform our product innovation. The first one is pragmatism: we are pragmatic in how we approach our development. We use open source where it makes sense, and we build our own where open source does not make sense. Being pragmatic for us has meant finding an optimum between using open source software and building from scratch when we think it's worth it for the additional performance, scalability, or flexibility. Second, from my standpoint, the innovation in new products is largely based on what we see in terms of market demand. We train ourselves to start new things all the time.

Our broad customer base helps us make sure we solve the right problems; we're constantly hearing from customers and coming up with new insights on what we can build to best serve them. We make new bets, and we reconfigure ourselves constantly. It's always painful to start new things, because everyone at Datadog is already operating at full speed, and then we ask them to stop what they're doing and do this new thing. But it's a discipline that is good to cultivate and that benefits us. It keeps us from being pigeonholed. At the end of the day, we're in observability, and hopefully we'll be strong in security as well, and other things. To do that, we have to keep building, and that is more cultural than technical. All this effort has led to a number of results that you're able to see.

To summarize: we've launched 12 new products on our platform since 2017. We've expanded from our first product, Infrastructure Monitoring, to the three pillars of observability, to a broader observability platform. We just launched our Cloud Security Platform as well as CI Visibility, our first developer-centric product. We're always exploring ways to expand our platform in service of customer needs. That's it for me. I'm now going to turn it over to Ilan, Renaud, and Pierre to dive deeper into our platform. Thank you.

Ilan Rabinovitch
SVP of Product and Community, Datadog

Thanks, Alexis, and hi, everyone, and thanks for your time today. My name is Ilan Rabinovitch, and I lead our platform and community efforts here at Datadog. Let's start today by talking about Datadog's first product, Infrastructure Monitoring. This product provides real-time monitoring of IT infrastructure across cloud, hybrid, and on-prem environments, including containers and serverless architectures. It provides a comprehensive view of everything that's happening throughout the tech stack that powers your applications. Migrating to the cloud makes visibility into this foundational infrastructure more critical than ever, because there are exponentially more things to track, and they are more ephemeral, meaning they're changing all the time. This product has come a long way since we launched it in 2012. We've continued to add new capabilities over time, and we believe it's the best monitoring product for modern tech stacks.

We've grown with our customers as they've scaled their usage with us, both in terms of the breadth of integration coverage and the data volumes that they send us. Today, we monitor millions of hosts and containers and serverless invocations from all of our customers, covering many trillions of data points per day. When we started building Infrastructure Monitoring in 2010, the cloud was just in its infancy. We knew this world of dynamic infrastructure would change how we manage and deploy our applications. To prepare for this future, we built out a solution that's cloud-first and truly cloud agnostic, supporting all major public cloud providers as well as private clouds. In the 2010s, as things were moving forward with containers like Docker and orchestrators like Kubernetes, we started working with these technologies long before they became the norm.

From the beginning, we were focused on staying ahead of each new technology trend so that our customers could have confidence in adopting them. Today, containerized workloads are one of the most popular deployment models we see among our customers, and over half of all container users are managing them with Kubernetes. Our early bets on these technologies have paid off, and we continue to focus on embracing leading-edge technologies and being prepared to help our customers as they start investigating them. Observability is key to the adoption of these technologies. Without the ability to monitor them, teams just don't have the confidence that their launches will succeed and that their customers will not be impacted. We're continuing to build for new technologies, for example serverless, IoT, and others. These are growing rapidly as well.

Our recent State of Serverless report showed that more than 50% of AWS-using organizations have adopted Lambda for their serverless compute. The pace of innovation and adoption of these technologies is constantly accelerating. Meanwhile, we have also extended our capabilities on technologies that aren't new but are still important to our customers' existing businesses. This means collecting data from legacy technologies like IBM WebSphere, AIX, and other pre-cloud solutions through our integrations. This is critical, as it helps our customers monitor their entire business in a single platform. From legacy to modern cloud native, they can observe their businesses end to end. For those customers still operating on-premises or in hybrid environments, we've even launched Network Device Monitoring. This allows them to monitor the health of their network equipment in data centers, office environments, and branch locations.

Now, it was a purposeful choice to start Datadog with Infrastructure Monitoring. One of the reasons is that it's easier to start with infrastructure than with other monitoring areas, in that infrastructure and operations teams tend to own the entire landscape of what's running your digital business. They have access to everything and can easily roll out the tools needed to drive stability and resilience efforts across your business. When customers adopt Datadog's Infrastructure Monitoring, we run broadly across their entire compute footprint. When a development team wants to understand the performance of one of their applications, they might deploy APM on a subset of those systems. Since our software is already running everywhere, development teams can just flip on APM without needing to install new monitoring software.

When you need to do a deeper investigation into what's going on in your environment, you wanna have access to the logs to get down deep as you troubleshoot. Again, we have a single agent; no additional software or heavy lifting is needed. When that data arrives, our product automatically correlates those logs, traces, and metrics in a single view, helping you avoid context switching. If you're monitoring your infrastructure, you might also wanna understand the network devices that power it or how network performance impacts those applications. Again, no additional software. We can track network connections between applications and DNS queries, and offer faster troubleshooting and performance analysis, all correlated, all richly interconnected. It's the same in security, which Pierre Bétouin will come on in just a little bit to talk about with our Cloud Security Platform.

One of the biggest friction points in selling security solutions is that the security team has to go to an admin or an operations team or an infrastructure team and ask them to install yet another piece of software to collect logs or telemetry. This creates friction between those teams. Will the new software bog things down? Why should we collect that data multiple times? Who's responsible for it? Since we're already everywhere with Datadog, there's no need to do this negotiation. Security signals are already there. That gives an accelerant to our products that others don't have. Even better for our customers, it helps them align across those organizational silos, bringing the disparate teams together to work together collaboratively on solving the problem.

Over time, as we've launched all of these new products, we've successfully moved from a single product we started with in Infrastructure Monitoring and grown into a platform of interconnected products. This chart shows our ARR for customers with one product, with two to three products, and customers with four or more products. As we've introduced more of these products, we've seen strong uptake as our customers get more value out of using the products together within a single platform. We've truly become a platform company, and we continue to build out the use cases and new products that integrate to give our customers new capabilities, all using the same data model, allowing richly interconnected, correlated data across each new data set and product that we add.

We've discussed many of our products, but there's a lot about our platform that isn't associated with a particular product we sell. Instead, it builds strength across the entire platform. One of them is the fact that we are cloud and tech stack agnostic. Our customers span public cloud, private cloud, legacy technologies, new and old. Whether they're building an entirely new green field technology with containers and serverless or connecting those cloud-native technologies back to the workloads in their data centers powered by mainframes, we want our customers to be able to pull in data and analyze it and see it across their entire business. We also wanna be where our customers are the most active, which currently tends to be the big three cloud providers. Cloud vendors generally aren't gonna significantly advantage one observability platform over all others.

They want to enable multiple choices for their customers, and even often build their own native solutions. While that's true, there are some differentiating features to our relationships with these cloud vendors. We work hard to make it as easy as possible for our customers to use Datadog no matter what cloud they're in, or even if they're multi-cloud. We believe we have a deeper technical relationship with cloud vendors than most, evidenced by our support on launch day of new cloud provider products. For instance, we were the first observability player to support AWS App Runner and Fargate, able to help customers from day one when AWS launched these technologies. In some cases, we launched these capabilities even before the cloud provider's native monitoring tools had support. Our cloud partner field teams see this.

It often results in them introducing us to their customers, because they know we support their ecosystems broadly and that our presence helps accelerate customer migrations and adoption of these launches. A couple of details about where we're at with each of the major clouds. First, AWS: they're our longest-standing cloud partner, and we started our journey in the cloud with them. As I said, we communicate closely with them as they launch new products, and we have a strong co-sell motion with them, because their sellers know that a cloud migration with Datadog in the mix is a successful cloud migration. On Azure, we have a strategic agreement around our go-to-market. On the technical side, we've been able to work with them to build deep integrations, supporting everything from compute to platform-as-a-service offerings right out of the box.

While others may have individual integrations here or there, only Datadog is available directly in the Azure portal alongside Azure's own native products. This is a first-of-its-kind, near-native experience that allows customers to buy, install, and configure Datadog with a single click right from the Azure console. Others are not embedded in this way, which results in a more complex flow, leaving users to do all the heavy lifting as they try to get set up. This level of integration and partnership results in a better experience for our customers. Now, on GCP, we continue to work closely with Google, and in our second quarter, we announced that we're on their marketplace. More recently, we've expanded our use of Google Cloud's infrastructure to include U.S. traffic as well.

We're developing a strong alliance partnership with GCP and have closed a number of deals together, both directly and through the marketplace. For folks with private clouds, we have over 450 integrations that let people pull data in from their infrastructure. From VMware to OpenStack to Network Device Monitoring, we're able to broadly cover on-prem virtual and physical infrastructure. Now, Renaud will be on in just a minute to talk about Observability Pipelines. With Observability Pipelines, we're going to help our customers take control over what data they send and to whom they send it, including, of course, to Datadog. Being cloud agnostic and having deep relationships and integrations with each cloud is important to our customers. They just can't silo themselves into using one cloud provider's tooling, as it provides a limited view of just that cloud.

They wanna understand their businesses as a whole, across clouds, across geographies, and across all applications. Datadog gives them that visibility. We spend a lot of time working to make sure we have integrations with the technologies our customers use. One of the many reasons we win deals is that the time to value on Datadog is fast. We integrate with everything you care about. That means you can get going on Datadog with full visibility from the get-go. From clouds to containers, to databases and collaboration tools, and everything in between, we aim to cover your entire stack. More importantly, we build and maintain these integrations ourselves. This allows us to ensure their quality and keep them up to date with the latest product innovations. It's very frustrating for a customer if a third party builds an integration but then fails to maintain it.

We offer a platform for observability and security, but we also provide all the integrations that populate those products with the data that matters to you most. From the minute you set up Datadog, we can start collecting everything immediately, bringing insights and value to customers. This makes it very easy to get up and running without a lot of heavy lifting. We invest in these integrations, iterating on them as the technology landscape changes, working very closely with customers and observing industry trends to get ahead of developments well before the broader customer group hears about them. That way, we're ready on day one, when customers start exploring these new technologies. Now, until recently, we focused on solving customer pain points with observability, primarily in production workloads.

I want to spend some time speaking to another way we can expand our platform and help developers with observability long before code gets to production. You may have heard the terms CI and CD before. They stand for continuous integration and continuous delivery. With CI, or continuous integration, teams create automated build and testing pipelines that validate each code change as it's committed, to ensure no bugs or regressions were introduced. Continuous deployment focuses on the release and rollout in a similar fashion. Together, these workflows let teams deploy smaller pieces of code much more frequently, instead of batching a bunch of code changes together into larger, less frequent releases that carry more risk. Implemented well, this enables faster development cycles and faster productionization of code that delivers new features and better experiences to your customers, hopefully faster than your competition.
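To make the CI idea concrete, here is a minimal sketch in Python of the gate a CI pipeline applies to each commit: run a series of checks in order and block the change at the first failure. The step names and toy steps here are hypothetical, purely for illustration; a real pipeline would invoke linters, compilers, and test runners.

```python
def run_pipeline(steps):
    """Run CI steps in order and stop at the first failure.

    steps: list of (name, callable) pairs; a step signals failure by raising.
    Returns (passed, name_of_failed_step_or_None).
    """
    for name, step in steps:
        try:
            step()
        except Exception:
            return False, name
    return True, None

# Toy stand-ins for real build/test commands:
def lint():
    pass  # a real pipeline would invoke a linter here

def unit_tests():
    assert 1 + 1 == 2  # a real pipeline would invoke the test suite

ok, failed_step = run_pipeline([("lint", lint), ("unit-tests", unit_tests)])
# ok is True here, so this change would be allowed to merge and deploy
```

The same gate, run automatically on every commit, is what lets teams ship small changes frequently; products like CI Visibility then show which steps fail, how often, and how long they take.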

But it's not that simple. Similar to DevOps, it requires changes in tooling and culture to make it work. Tests can fail, or they might be flaky. Builds can take many hours. All of this drains productivity and can be frustrating to your teams. We think Datadog can be useful to customers at every step of the software development lifecycle, including CI/CD. The first step for us in helping developers is our product announcement of CI Visibility, which is going GA here at DASH. We saw the challenges developers were having with CI/CD and acquired Undefined Labs in August 2020. Now, I want to take a moment to point out that we are very deliberate with every acquisition we make. We will typically turn an acquisition into a meaningful product in about a year.

It takes us that long to rebuild and integrate them into the Datadog platform, which is very important to us and our customers. The level of integration of products is key, as we mentioned earlier. About one year from the announcement of the Undefined Labs acquisition, you're seeing CI Visibility, which is based on it. CI Visibility takes us from the production observability context, focused on client-facing workloads, and shifts us left into development environments. Developers don't tend to have the budgets that ops teams do; dev budgets might be smaller and operate at a different scale, but developers still control a lot of what gets used down the line. The earlier we can start with the people who develop code, the more helpful we can be for our customers as we become more deeply integrated into their workflows.

The way we can help with the CI/CD process and culture is to help developers understand the tests they're running and what's failing, and give them visibility into their pipelines, so they can see where the bottlenecks are, what can be done to improve them, and even find performance regressions long before any production impact is seen. We want to make Datadog useful to customers at every step of the software development lifecycle. CI Visibility is our first product for developers, and it's just our first step. There's a lot more we think we can do to help developers, and you'll see us continue to get closer to the code and development workflows over time. Thank you for your time. I'll pass it along to Renaud for his comments on APM and Log Management.

Renaud Boutet
SVP of Product Management, Datadog

Thanks, Ilan. Hi, my name is Renaud Boutet, and as SVP of Product, I lead our APM and Log Management teams. I'm going to start today by talking about how we advanced our humble APM product from beta in 2017 to the best-of-breed product suite it is today. When we started APM in 2017, we started with the core function of APM, which is distributed tracing. Distributed tracing is about context propagation, and it's one of the hardest problems to solve: how do I make sure that when a request goes through a stack of services, I don't lose the context anywhere? With distributed tracing, engineers are able to follow a request through the stack, every single line of code, down to the DB. It's a very sophisticated product that requires maturity and development over time.
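The context-propagation problem described here can be sketched in a few lines: the calling service injects its trace context into outgoing request headers, and the downstream service extracts it, so every hop ends up stitched into one trace. This is a simplified illustration with made-up header names, not Datadog's actual tracer.

```python
import uuid

def inject(headers, trace_id, span_id):
    """Caller side: attach the current trace context to an outgoing request."""
    headers["x-trace-id"] = trace_id
    headers["x-parent-span-id"] = span_id
    return headers

def extract(headers):
    """Callee side: recover the trace context, or start a fresh trace if none."""
    trace_id = headers.get("x-trace-id") or uuid.uuid4().hex
    parent_span_id = headers.get("x-parent-span-id")
    span_id = uuid.uuid4().hex  # each service hop records its own span
    return trace_id, parent_span_id, span_id

# Service A calls service B: B's span joins A's trace.
outgoing = inject({}, trace_id="trace-123", span_id="span-a")
trace_id, parent, span = extract(outgoing)
# trace_id == "trace-123" and parent == "span-a", so the hops link up
```

Because every span carries the same trace ID, the backend can reassemble the full request path across services, which is what the flame graph visualizes.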

It took us a few years and a lot of iterations, but we finally did get it right. We also captured the power of the platform by making it easy to move from APM's flame graphs to logs and metrics, so that engineers can identify the source of a problem. APM is not only about distributed tracing and following requests across services and microservices. Developers and SREs need visibility at every layer, and very deep into each layer. Over the years, we have introduced a number of products and features to build out our APM suite. In 2019, we introduced Tracing without Limits. Tracing without Limits ingests all traces by default, and then the customer can decide which traces to keep. It gives customers full control over the ROI of the product.

We also announced Synthetic Monitoring in 2019 and Real User Monitoring in 2020. Both products relate to the front end, or client side. We help our customers simulate the user experience, or monitor the real user experience, to ensure their customers are seeing fast, reliable performance from an application. These digital experience capabilities are key parts of any APM suite, and we are seeing tremendous uptake of these products. Ilan talked with you about how we will shift left and become more helpful to developers with CI Visibility and other tools in that area. With Synthetic Monitoring, we help developers run tests on code in early-stage workloads, like their CI/CD pipelines, before it is deployed. We also launched Continuous Profiler in 2020, allowing customers to understand which methods or processes are consuming the most resources, like CPU, memory, disk, et cetera.

Our Continuous Profiler, unlike typical profilers, has low overhead, so it can be used in production all the time, which is why high-performance developers use it every day to optimize every single part of their code, ultimately resulting in impressive cost savings on cloud providers, as lean applications consume far fewer resources. 2020 was a prolific year, as we also announced Deployment Tracking, which helps customers automatically identify potential disruptions to key health metrics on rollouts of new versions. They then get notified of faulty deployments. For our customers, it's data science and machine learning helping them catch what might otherwise have become dramatic production incidents. In 2021, Datadog announced a full-fledged Error Tracking solution, which helps users monitor errors and crashes front to back, across web and mobile applications and all the backend services powering them.

We also went GA with our deep Database Monitoring product, so that customers can get deep visibility into the performance of database queries. Here at the DASH conference, we are announcing the GA of Session Replay. With this, you can watch individual user sessions in a video-like interface, allowing support engineers, product managers, and front-end engineers to see exactly how your users interact with your website, saving you time and guesswork. All these products are essential to truly understand how applications are performing and how users perceive that performance, and above all, to detect incidents fast and find the root cause even faster. Importantly, APM works best when the full suite of product capabilities is used alongside Log Management and Infrastructure Monitoring, to really understand where the error or performance drop is and why. Alexis talked about unified service tagging across all these types of data.

What it means is that all traces, logs, and user sessions, once they hit the backend, share the same tags, so it's super easy to correlate all of them on the same screen. It's the same for infrastructure hosts and containers, as well as the over 450 integrations we build and maintain. This is seamless correlation: the context is always kept, which is why our customers see such a difference. If you narrow down data related to a certain set of tags from anywhere in Datadog, you can seamlessly pivot to any of these other products and get different insights without losing the context.
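Unified service tagging can be illustrated with a toy correlation: if traces, logs, and sessions all carry the same tag keys, pulling up everything related to one service is a single filter. The records and tag values below are invented for the sketch.

```python
# Telemetry of different types sharing the same tag keys:
telemetry = [
    {"kind": "trace",  "service": "checkout", "env": "prod", "version": "1.4"},
    {"kind": "log",    "service": "checkout", "env": "prod", "version": "1.4"},
    {"kind": "metric", "service": "billing",  "env": "prod", "version": "2.0"},
]

def correlate(records, **tags):
    """Return every record, regardless of type, matching the tag filter."""
    return [r for r in records
            if all(r.get(key) == value for key, value in tags.items())]

related = correlate(telemetry, service="checkout", env="prod")
# Both the checkout trace and the checkout log match, ready to show together.
```

The pivot described in the talk is exactly this: the filter travels with you from product to product, so the context is never lost.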

As we built APM out into a large suite of products, it moved from a single capability useful to a small group of customers to a widely useful suite for many customers, particularly when used in conjunction with the rest of the platform. As its utility improved over time, so did our penetration with customers, as well as our upsell of other pieces of the APM suite. Today, four years after we launched the first product, and despite its size at well over nine figures in ARR, APM continues to be in hypergrowth mode. The products that contribute to the broader APM suite are also growing rapidly. Now, I also wanted to spend a bit of time here talking about AI and machine learning. To be clear, we enable AI/ML capabilities across our platform, not just in APM.

The key purpose of AI/ML for observability is to decrease alert fatigue and reduce mean time to detect and mean time to resolve. That is to say, customers want prepackaged AI/ML solutions to these problems, instead of toolkits to assemble, as we tend to see everywhere else. Watchdog is Datadog's AI engine, which we have been developing for years. Over time, we have applied Watchdog to an increasingly broad set of products, broken down into three main categories of problems we are solving. First, with Watchdog Alerts, we detect the unknown unknowns with an engine that constantly looks at the data and identifies abnormal symptoms, in order to reduce mean time to detect. On this screenshot, for example, we show a well-identified problem on the Kubernetes infrastructure.
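As a rough illustration of the kind of baseline technique behind "identifying abnormal symptoms," here is a classic rolling-window outlier check: flag any point that strays several standard deviations from its recent history. This is a textbook method, not Watchdog's actual algorithm.

```python
from statistics import mean, stdev

def anomalies(series, window=5, k=3.0):
    """Flag indices whose value deviates more than k standard deviations
    from the mean of the trailing window."""
    flagged = []
    for i in range(window, len(series)):
        recent = series[i - window:i]
        mu, sigma = mean(recent), stdev(recent)
        if sigma > 0 and abs(series[i] - mu) > k * sigma:
            flagged.append(i)
    return flagged

# A steady error-rate series with one sudden spike at index 7:
points = [10, 11, 9, 10, 12, 10, 11, 48, 10, 9]
spikes = anomalies(points)  # flags index 7, the spike
```

The value of a managed engine is that customers never tune windows or thresholds themselves; the point of the sketch is only the shape of the problem being solved.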

At DASH, we are also announcing a new category of Watchdog Alerts, log pattern anomalies, which automatically detects disruptions in log signatures, service by service. Second, Watchdog Insights helps the user within the troubleshooting journey, embedded in our various views, dashboards, and explorers. We call it augmented troubleshooting. We know what the user is interested in because we can see what they are looking at, so we can give them more context around it, in order to reduce mean time to resolution. In this example, while the user is looking at the production logs of an important service, Watchdog Insights surfaces a major problem in the code, a deadlock, which was spotted by the Continuous Profiler. The user can then quickly pivot into the profiler for further analysis. It's important to understand that the user would probably never have guessed this issue by looking only at the logs.

Third, by the end of the year, we will be out with the first version of Watchdog Root Cause Analysis, which will help customers automatically identify the root causes of performance incidents. We are slowly but surely converging on the next generation of troubleshooting, where the machine is automatically able to narrow down to the root cause, a large but extremely promising investment of ours. With Watchdog Alerts, Watchdog Insights, and Watchdog RCA, our customers get a clear understanding of how Watchdog can help them become a company with zero incidents impacting their business. Let me now change the subject and talk about Log Management, our third major product line within our observability platform. I'm going to start from the beginning and talk about what Log Management is.

Logs come from every piece of the tech stack, and they tell the story of what's going on in the infrastructure and the applications. Every host, container, database, endpoint, and piece of software is throwing off logs to let you know what's happening. It's a massive amount of data that developers need to look through to figure out what's going on or to answer their most pressing questions. This is why log management products were invented: to ingest and index all of these logs, acting like a search engine that helps users find what they are looking for. We have to understand that logs are like oxygen for engineers, but also for the people gravitating around them. They can explain how the users, the services, and the systems around them are doing.

In other words, they are the most granular but holistic embodiment of the business operations, and we need to access them in near real time, but also in the past. It is indeed of the utmost importance to keep them for as long as possible in corporate archives, where we can run security postmortems, financial audits, or any other kind of audit. While logs are ubiquitous, they are also very unstructured, coming from everywhere and sometimes overwhelming. As you can see, they are also a fundamental facet of your infrastructure and applications. Application Performance Monitoring and Infrastructure Monitoring solutions, on the other hand, are the opposite, built on top of extremely structured workflows that guide our users.

The important thing about our Log Management product is that it was designed from the start to be deeply embedded in the Datadog platform, working hand in hand with, and within, APM and Infrastructure Monitoring. If you're an application engineer, you are always looking at your hosts, your containers, your cloud assets, and your application services through metrics and traces. In your user journey, logs just appear where it's most logical to you. This is why we call these the three pillars of observability. When Log Management was first released, it became the glue between two silos that were not necessarily communicating with each other before. When I started at Datadog in 2017, Log Management solutions were known to be costly due to very high log volumes, to the point of being cost prohibitive.

Also, the large variation in log volumes caused by changes in the system or business spikes generates cost uncertainty. Logs are nearly useless when things are going well, but when something goes wrong, the ones that tell you what happened are incredibly valuable. It is impossible, though, to know in the present which ones will matter in the future. This created problematic behavior in companies' organizations, which I'm going to illustrate as follows. Traditional log management solutions in the marketplace charge a fairly high price while volumes keep increasing over the years. These rising costs force engineers to choose which data to throw away, because they can't afford to store all of it. Almost all our customers tell us roughly the same story.

Managers have to constantly ask every single development team to remove some logs here or there, and risk not having them when they are needed most, something obviously painful for everyone. To summarize, you have to choose between two non-optimal solutions: spend a lot of money, or take operational risks. This is why, in 2018, we delivered a major solution to this problem, which we named Logging without Limits. It was both a pricing and a technical innovation. The idea is that you no longer have to choose what to collect and what to ignore. Everything is ingested and processed, and the teams decide what to do with it afterwards, on the fly, depending on the situation. This was made possible because we decoupled the ingestion and the indexing of the logs flow.

You can ingest at 10 cents per gigabyte, designed to be so affordable that you can now send all your logs. Then teams selectively index and retain logs with surgical precision, under preset volumes and costs decided by the company. You get all the troubleshooting and analytics you need at a cost you define. ROI is the key thing here. To close the loop, everything is archived, so even if some logs were not indexed and later need to be analyzed, teams can rehydrate them on demand and still run the analyses they need. Logging without Limits, combined with deep integration into the platform, was really the big insight that has fed this high growth and strong customer uptake since we launched the product. This chart shows our ARR in log management over time.
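The decoupled pricing can be illustrated with simple arithmetic. The 10-cents-per-gigabyte ingestion figure comes from the talk; the indexing rate below is a made-up placeholder, since the point is only that the two dials, ingestion and indexing, are priced and controlled independently.

```python
INGEST_PER_GB = 0.10             # ingestion rate quoted in the talk
INDEX_PER_MILLION_EVENTS = 1.70  # hypothetical indexing rate, for illustration

def monthly_log_cost(ingested_gb, indexed_millions_of_events):
    """Pay a low flat rate to ingest everything, then pay separately
    for only the slice of logs kept indexed and searchable."""
    return (ingested_gb * INGEST_PER_GB
            + indexed_millions_of_events * INDEX_PER_MILLION_EVENTS)

# Ingest 5 TB of logs for the month but index only 100M events of them:
cost = monthly_log_cost(5_000, 100)  # about $670 for the month
```

Because the indexed slice is what the company caps in advance, the cost stays predictable even when log volume spikes.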

We have told you that this product, like APM, has well over nine figures in ARR and remains in hypergrowth mode. As we add products, such as our Cloud Security Platform, and, on the developer side, CI Visibility and synthetics testing, there are more reasons for our customers to send us logs so they can monitor what's going on in their systems and take action. Now, let's conclude with another major initiative in the same vein. While Logging without Limits solved the problem for 99% of the companies out there, when we talked with very large corporations, we discovered another class of problems. For example, these companies cannot send us all of their volume because their networks would be clogged. There can also be data sensitivity issues.

They can't necessarily take the risk of sending the data to Datadog and having their logs stored by a third party. Finally, and probably more subtle, the biggest and most mature companies have gone through migrations of observability and security solutions over the years, as their technology shifted from mainframes to physical servers to virtual servers to the cloud. Each time, there was a painful multi-month migration process across hundreds of teams and tens of thousands of machines. Observing these widespread and recurring problems was the trigger for the acquisition of Timber Technologies in February of this year. Timber Technologies is the company behind the extremely popular open source project called Vector, already downloaded more than 10 million times. This week at the DASH conference, we are announcing official support of Vector by Datadog.

This comes alongside the launch of the private beta of Observability Pipelines, the new solution we have been at work on since the team joined us. When customers use Observability Pipelines, they simply connect the Vector open source solution to Datadog while, importantly, it runs on their premises or in their cloud accounts, which is a shift for a pure SaaS provider like us. End users of Datadog can then remotely control and decide on the fly which data to acquire, how to transform it, and to which vendors or internal services they want to stream it, without the data necessarily leaving the company's networks. This lets customers take back full control of their observability data. It enables them to lower costs, use the data when and where it's needed, keep sensitive data under tighter control, and facilitate migrations between vendors and/or internal solutions.
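Conceptually, an observability pipeline is a transform-and-route stage that runs inside the customer's own network. The sketch below shows the idea, scrubbing sensitive fields and fanning events out to different destinations; the sink names and rules are invented for illustration, and this is not Vector's actual configuration or API.

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def scrub(event):
    """Redact email addresses before an event can leave the network."""
    event = dict(event)
    event["message"] = EMAIL.sub("[REDACTED]", event["message"])
    return event

def route(event):
    """Decide which destinations receive the event."""
    sinks = ["internal_archive"]              # everything lands in the archive
    if event.get("level") in ("warn", "error"):
        sinks.append("datadog")               # only warn/error leave the premises
    return sinks

def process(events):
    return [(scrub(e), route(e)) for e in events]

processed = process([
    {"message": "login failed for bob@example.com", "level": "error"},
    {"message": "health check ok", "level": "info"},
])
# The error event is redacted and routed to both sinks; the info event
# stays in the internal archive only.
```

Because the rules run before data leaves the network, the same mechanism addresses all three problems named above: bandwidth, sensitivity, and vendor migration.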

We are excited about this product, and we've had a lot of positive feedback from customers so far. That's it for me. I'll pass it along to Pierre now to discuss our Cloud Security Platform.

Pierre Bétouin
VP of Product of Cloud Security, Datadog

Thank you, Renaud. Hi, everyone. My name is Pierre Bétouin, and I've spent my entire career in the security space. I was the CEO of Sqreen, an application security platform for the modern enterprise that I co-founded in 2015. I joined Datadog through the Sqreen acquisition six months ago to lead the cloud security platform. You know how much we care about performance and reliability, and we're equally excited about security. We believe that we are uniquely positioned to help engineering and security teams build more secure applications in the cloud. Let's take a moment to step back and look at the challenges of securing cloud environments. Production systems are becoming more and more complex.

A single transaction now crosses an average of 35 different services, with hundreds of heterogeneous applications, APIs, microservices, and web services, thousands of developers pushing code to production every day, and thousands of ephemeral cloud instances scaling up and down. Yet we still protect modern services today the way we protected legacy services 20 years ago: firewalls, IDS, and other point solutions. These security tools were not designed for the cloud, and cloud technologies brought a new set of attack vectors that malicious actors are actively trying to exploit. Traditional security is still very much centralized and often network-based. It's usually not real-time, not distributed, and doesn't have visibility into what's going on in the system. These limitations introduced by the technology have also cascaded into organizations. The fragmentation and overspecialization of the different security solutions have broadened the gap between engineering and security teams.

On the left, DevOps teams already have a lot of observability insights into the code and the runtime: errors, exceptions, logs, data flows. Datadog already monitors millions of services in real time. While DevOps teams see the code and the data, they don't typically see what's happening outside, and there is traditionally nothing to connect the runtime context with security threats. On the security side, dozens of different solutions are deployed, but none of them can provide the code and contextual data that DevOps teams actually need to operate. Traditional security lacks the ability to understand the deeper context of what's happening across the stack, which is becoming increasingly critical as things get more complex in the cloud. It's only by bringing together and sharing high-quality, deep insights at the different layers that we can enable better collaboration between those teams.

This is why having observability and security capabilities together on the same platform makes so much sense. By bringing all this data and all these insights together, you can quickly identify what you need to ask and what you need to fix. That's what DevSecOps is about, and we are working on enabling that change for the millions of services we already monitor, with no friction. As we mentioned earlier, security teams are looking for opportunities to work closer to the engineers who are developing and maintaining the different services. Why are those teams siloed from each other? First of all, they are usually organized into different functional teams, and they are aligned to different goals. One reason is that they are using different tools and don't share the same data sets.

They obviously end up with different levels of visibility and different perspectives. The other thing that's important to point out is that, no matter what the budget allocation is, you just can't hire enough security professionals to solve this problem. We believe that security will be decentralized, with security teams becoming the second line of defense. You don't have to call an ops person to spin up a new database anymore; engineers already do it themselves. This should be no different for security. In the end, it's everyone's job. You need to have security already baked into the code as it gets released. In a DevSecOps context, DevOps teams are the primary owners of the performance, reliability, and now security of the applications and services they develop.

Security teams keep full visibility on the asset inventory, the threats, and the vulnerabilities, so they can quickly prioritize and better partner with the engineering teams. That's what modern teams are doing because you can't gate the release process anymore. No one would accept that. Technology will be key to enable that change by providing a unified set of tools and insights to those teams. Why are we excited about security? We actually bring three strengths to the transition to DevSecOps. First, we know how to break down silos, and we did it with DevOps teams ten years ago. Our unified platform helps users share insights and better collaborate together. Fostering collaboration between teams has been Datadog's mission since day one, and we can see a critical need for this in the DevSecOps world. Second, we are removing organizational frictions.

DevOps teams are already using our platform every day, all day long, with the high-quality, deep insights they need to understand the DevOps implications. As you heard from Ilan, this also includes data from hundreds of external integrations. Third, we are removing technical frictions. Apps and services are already instrumented in production with Datadog to monitor performance and reliability, so there is no need to deploy yet another agent for security. We are leveraging the same technologies as the ones our customers have already deployed in production. Our users are now exactly one click away from integrating security into their stack. We learned from observability and decided to approach this problem the same way: by providing a holistic platform to protect applications, infrastructure, and cloud environments.

If our customers are already monitoring their infrastructure with Datadog, they can get compliance and workload security in just one click with CSPM and CWS. If they are APM customers, they are now able to correlate their application traces with security insights, threats, attackers, and vulnerabilities, with little to no additional friction. Our Cloud SIEM customers gather all the important security activity from their logs in one place. This is our Cloud Security Platform. Our Cloud Security Posture Management product continuously detects misconfigurations that have been pushed to production. Our Cloud Workload Security product detects suspicious file or process activity across hosts and containers. Malicious activity is reported in real time with rich context. Our Cloud SIEM automatically detects critical threats to systems in production. Unlike traditional SIEMs, Datadog automatically provides all the related observability data to help teams collaborate.

Information about EC2 instances, CPU, data flows, and users, down to the application stack traces, all in one click. Finally, our AppSec solution monitors attacks and correlates them with application traces. Built for modern distributed systems, it detects threats such as injections, business logic attacks, and more. We just released the private beta at DASH. We'll continue to augment our products' capabilities, and we'll introduce new products over time as we learn more from our customers. Thank you for your time, and I'll hand it over to you, Yuka.

Yuka Broderick
Head of Investor Relations, Datadog

That's it for the first part of our investor meeting. We're gonna take a short break. We'll see you back here in about 10 minutes.

Nick Miceli
Site Reliability Engineering Manager, Wayfair

My name is Nick Miceli. I work for wayfair.com in Boston, Massachusetts, and I am a staff engineer there. Wayfair is one of the largest online retailers of home furnishings. We run three on-premise data centers across the globe, and we are currently working on moving into the cloud as well. We're currently using Datadog to monitor our on-premise infrastructure, utilizing configuration management tools. When we build a server, it's already automatically monitored in Datadog. Once Puppet runs on a machine for the first time, the agent's installed, the profiles are applied, and the metrics that we would expect to be there are just there. Having Datadog's automatic configuration management through Puppet allows us to spend time on building new features and improving what we currently have rather than just maintaining what we have. We are running containers on premise through a local Kubernetes installation.

We ingest metrics and events from Kubernetes through the Datadog container agent, which gives us rapid visibility into our container infrastructure. We're very excited to be moving forward with containers and container management platforms such as Kubernetes, but monitoring distributed systems like this is extremely difficult. Datadog has made it very easy to just plug and play and understand exactly what's going on.

Yony Feng
Co-Founder and CTO, Peloton

My name is Yony Feng, and I'm the Co-Founder and CTO of Peloton. For the past five years, we've been working on combining a media, software, and hardware product into one, to offer a unique exercise and fitness experience for all of our users, who can experience a fully distributed indoor cycling virtual classroom. One thing that we think about at Peloton is that our highest priority is always that core experience: making it as easy as possible to hop on the bike, find the class that you want, take that class, and get the best workout of your life. It's crucial for us to make sure that we monitor how our system is performing across the board.

We have a virtual leaderboard that ranks everyone together. That capability requires real-time, low-lag data processing for thousands of users all at once. Datadog's platform was instrumental in offering observability, instrumentation, and transparency into what the user experience is like during those live classes. Most recently, we started using Datadog's APM product to help us further in solving some of the performance challenges that came with our growth over the past couple of years. The integration was actually pretty smooth for us. We use an async controller layer on Python Gevent, and that was particularly troublesome with other APM platforms but fairly simple with Datadog.

One of the important things for us is how quickly we can respond when a user searches for a class, and we cut that by a factor of four with the insights that we got from using APM. We can see exactly what's going on, and that might not have been apparent to us without the visualization of how the code is running.

Within the first 30 to 45 days, we were able to quickly identify some of the top endpoints that had performance issues. For some of the top three to five performance issues we had with our endpoints, we were able to reduce response times by 80% to 90% within those first 30 to 45 days.

One of my favorite things about Datadog is that Datadog is easy. It's easy for us to integrate with in our code. It's easy for me to investigate issues with, to look at metrics, and it's easy to share with my non-technical colleagues.

Ben Hughes
Staff Software Engineer, Airbnb

Hi, I'm Ben Hughes. I work on observability and reliability at Airbnb. Datadog is one of the main places where we put metrics data, and it holds a large set of historical data, so we can compare to the past. It's been a huge thing for us because we have a stable metrics system that we can trust and build on top of without having to maintain it. By making it very easy to collect data and then work with and visualize that data, and to set up alerts and dashboards, Datadog makes it natural to use this data to make decisions and to make sure that things are still working.

We have examples where we take data from our offline data infrastructure jobs, data that we've processed into spreadsheets, reports, and other outputs, and we'll also feed it back into Datadog, with a delay, just so that we can still see the trend lines and use the visualization tools.

It's definitely something that's integrated into a lot of our workflows. It has enabled us to understand that the data is there, it helps us make decisions, and it really moves the organization forward.

Dan Reeder
Senior Manager of Capacity & Performance, Zendesk

My name is Dan Reeder. I work at Zendesk in San Francisco, and I manage the Performance and Capacity Planning team. We use Datadog to monitor our environment and our data center, as well as our cloud infrastructure. One of the things that we find extremely useful is APM and Trace Search. Our team uses it every day. I don't think we can function without it. What Datadog allows us to do in terms of helping other teams optimize their capacity is that it provides a fantastic visualization of exactly what is going on in their code and allows us to point out what needs to get fixed. It makes it so that there really isn't any argument. I can do a trace search, and I can show them immediately, "Here's something that's taking too long. Here's something that is using up capacity that doesn't need to be.

Let's fix that." I can show them the data, and everybody in the room knows what the right thing to do is. The benefits that Datadog brings to our business and would bring to any business is that you immediately understand if you are deploying assets the right way. Because not only can you see if something's going wrong, you can see if the solutions that you're implementing are doing what you expected them to do.

Yuka Broderick
Head of Investor Relations, Datadog

Hi, everyone. Welcome back to the Datadog Investor Meeting. Up next is Amit Agarwal, who will share more about our customer focus and our pricing philosophy.

Amit Agarwal
Chief Product Officer, Datadog

Thanks, Yuka. Hi, everyone. My name is Amit Agarwal, and I'm the Chief Product Officer here at Datadog. I wanted to start today by talking about our customers. This is because at Datadog, we are focused on serving our customers in ways that provide tangible value to their businesses. Now, many of our investors often ask us about the competitive environment and who we think we compete with the most. Culturally, I just want to emphasize that we don't really think that way. We're not trying to compete with anyone. Our singular focus instead is on finding ways to add value for our customers. For instance, consolidation of observability tools on our platform is primarily driven by our customers' need to improve the time to detection and resolution of problems.

It's very time-consuming and very expensive for customers to use different products to solve all of their security and observability issues. It's not that we're trying to displace this or that; it's that customers are trying to find the best ways to solve the problems in their applications. Without Datadog, they've been faced with the daunting task of manually piecing together that data. We make the sign-up process super simple: a customer can sign up for a self-serve trial of our platform themselves. We work with a huge and diverse set of tech stacks, which means we try to support nearly everything that users want to observe with us, from on-premise to the cloud. That in turn means that after signing up, a customer can immediately start to see data and value on whatever tech stack they use, from legacy to cloud.

We then go beyond that to make sure the users are engaged with the platform. We try and make it so that they're able to intuitively discover and use features that are useful to them without needing specialized training on our products. Our over 16,000 customers give us lots of great feedback to help us keep the product easy to use. Unlike traditional enterprise software vendors, we don't focus on customers of a certain size or push them into long proof of concepts or lock them into upfront commitments. We strongly believe in letting customers start at whatever entry point they like, whether it's a month-to-month subscription for a small project, a proof of concept, or a multi-year commitment for a Fortune 500 company-wide deployment to thousands of users.

Another thing we focus on is that each of our products must work well and differentiate on its own merits. We do not gate our customers. You can, at any time, switch usage from one product to another. Of course, Datadog is competing in the marketplace for customer dollars. But even within Datadog, all of the different products are competing for the mind share of the customer. Every one of our products has to earn customer engagement and show how it can bring value. There are many competitive products in these categories that do similar types of things, so we take pride in making each and every product very competitive in and of itself. Each product stands on its own, but the true silver bullet differentiator at Datadog is how all of these products are tightly and deeply integrated with each other.

The moment you turn on two or more together, it's magical. From a user experience perspective, they don't appear as separate products. They appear as and work as parts of an integrated platform. Our value comes in the ease of troubleshooting problems across products, the ability to centrally manage in one place, the ability to quickly switch from one view to another. It becomes very easy once you start to connect these products like little puzzle pieces all in one interface. Let me give you a couple of examples. This chart shows the annual recurring revenue and number of products used quarterly for a global shipping company. Before using Datadog, the company had no way of understanding the performance of their applications. They started with us on Infrastructure Monitoring to get a view of the cloud performance.

As shipping demand accelerated during COVID, they began using APM and log management, and our unified monitoring helped them scale and manage the increased load on their applications. They adopted RUM and Synthetics to proactively monitor and improve the user experience of their highly valuable business flows, like bookings and so on. They began leveraging Continuous Profiler to improve their ability to identify bottlenecks in their applications. They've also been using our Cloud SIEM product to detect threats in their logs. With Datadog's full suite of products, this customer now has a pulse on their entire bookings flow, from the user front end to systems of engagement in public cloud, to systems of record in private clouds and back to the customer, all in one platform.

With all of these products and the Datadog platform, the company has been able to get a holistic view of the health of all of their services and better enable capacity planning, pricing, booking, and shipping fulfillment. Here's another example, this time of a company that's a global payroll and HR provider. Their engineering teams found immediate value in consolidating their observability solutions to a single infrastructure monitoring platform for their cloud-native business units. Tool sprawl was a huge problem for them. They wanted to enable DevOps engineers to self-navigate a single UI to troubleshoot issues. They expanded their adoption to include APM and logs and added Synthetics and RUM to advance their understanding of user experience. They also adopted security monitoring to easily capture security signals directly from their logs. This customer continues to look to Datadog to gain maximum visibility at every layer of the stack.

As these customers continue to grow globally, we expect Datadog to grow with them and remain their standard for unified monitoring. We believe we deliver a lot of unique differentiated value to customers. If we serve our customers well, they benefit from, number one, better visibility on infrastructure usage. Number two, better optimized infrastructure costs, lower downtime, better customer experience, faster innovation, more productive engineers. Probably anyone would say that. How do we know we really provide value to our customers? We know this because we have a dollar net retention rate of over 130% for the last 16 quarters consecutively. As our customers recognize the value we provide, they use more of us. We are best in class on this metric.

The other way we understand if this is all working and if our customers are happy with us is our gross revenue retention rate. Now, many companies focus on net retention, which is just the dollars retained across customers. We're best in class in that metric. What we really, really care about is not just the growth of customer cohorts that bought our product, but also that there's very little churn of customers. That's what gross retention rate points to. Our customers come and stick around with us for a long time. We've told you that our gross retention rate has consistently been in the mid-90s. Here's a time series since 2018. Our gross retention has been solid and is quite similar if you look across products or customer segments.

We think that this is a good indication that our approach to our products and the value they provide resonates with our customers. Now, we get a lot of questions about our pricing strategy. I wanted to take a moment and comment on some key characteristics of our pricing model. The first thing to point out, which we believe differentiates us from our competitors, is that our pricing is completely transparent. You can go to our website today and see the pricing structure for all 13 of our products. There are volume discounts, as well as discounts if you move from month-to-month to an annual contract. But the philosophical construct for pricing is all based on what you see on our website. You can buy month to month on demand, annually, or multi-year if that's what you want.

We are open to working with our customers on how they would like to work with us. Another characteristic of our pricing is that we try to align as closely as possible to the usage and value you're getting out of the product. We charge for things based on proxies for value, for what is important to you. There's no perfect proxy for the value that you're getting, but there are some easy proxies. For example, we charge for the size of the infrastructure as it grows. The bigger the infrastructure, the more you have to manage. If you can manage with fewer people by using Datadog, that's value to you. We know that people often look to compare us against competitors on pricing, but we don't think our product is expensive or cheap.

We think of it as value creation for our customers, and that value comes in the form of reduced opportunity cost of monitoring with more people working on core business apps and not building monitoring. It also comes in the form of faster remediation, shorter mean time to resolution, and shorter downtimes. These all translate to what's most important to your company's business, which is the growth in revenue and protection of revenue and continued expansion into new products and new things. We put a lot of thought into the best ways to price the product, which we believe should relate to the way our customers get value out of our product. As we go through beta releases of our products, we evaluate how customers are using our products, and we hone in on what makes the best sense for them relative to the value we are providing.

Now, you've heard Olivier say this before, but the data volumes are increasing all the time. The amount of data that machines can produce and the amount of data that products can process can grow exponentially. The cost of processing all that data can become unsustainable for customers, which can create all kinds of problems. Some of the previous pricing models in the space have sometimes made more money from customers without giving them enough lift in value. We've actively been thinking about that problem specifically because again, we are really focused on helping customers solve their problems and their pain points. Whenever we can, we try to give our customers levers to choose how they use our products. We do this across our products, across all of them. Renaud has already talked about how we do this on Log Management with Logging without Limits.

We also have Metrics without Limits for Infrastructure Monitoring and Tracing without Limits for APM. Let me talk about one final very important characteristic of our pricing model. We don't price per user. We feel that the value our platform provides isn't driven by the number of users in the company. We think that the value comes from the importance and volume of data that the users look at. We want any user who can get value from the data in our platform to use it, from DevOps teams to security teams to business users, all the way up to the C-suite. By bringing more users together to understand and share insights on the same deep, rich set of data, we help to break down silos between teams, and we help them communicate and collaborate with each other. That's how we help to democratize data for our customers.

Thank you for your time. I'm going to pass it over to Adam Blitzer now to tell you a bit more about our go-to-market.

Adam Blitzer
COO, Datadog

Thanks, Amit. Hi, everyone. My name is Adam Blitzer, and I joined as Datadog's COO six months ago. I was excited to join Datadog from Salesforce because of the great product, platform, and team here. What I've seen so far has only increased my excitement about our opportunities and our ability to execute on them. I'd like to talk with you today about how our go-to-market is evolving and advancing. Our primary motion today is land and expand. We're a rare company that serves all market segments with our products. Most companies focus on SMB or mid-market or enterprise, and their products end up specializing in a single market segment. We go after the entire market with a single code base. This focuses us on keeping our products simple, but not simplistic.

Our first principle is to get our customers using our product as early as possible because it lets them see the ease of use and value of the Datadog platform. That ease of use and the benefits of the unified platform drive increased usage and adoption of new products. That relates directly to our dollar-based net retention, which, as Amit said, has been over 130% in each of the last 16 quarters. I want to show you a couple of examples of this land and expand motion in action. This is a chart showing the quarterly annual recurring revenue from a major financial information services company and how many products they've been using. We initially started working with them as they moved to the cloud, and they found that open source and cloud vendor monitoring tools were insufficient.

In 2019, this company accelerated its strategy to modernize its architecture in order to keep up with market demand and improve speed of development for new offerings. They were moving to use multiple clouds and shifting more towards containers and serverless technologies. More business units began using Datadog, and they expanded usage to log management, APM, synthetics, and network performance monitoring to speed up development cycles. Here's another example. This is a U.S. grocery chain. They were operating completely on legacy on-prem servers in the past and had just begun moving to the cloud with Datadog. Because of competitive pressures and the COVID pandemic, they shifted their business model to accommodate curbside pickup and food delivery, which accelerated their plans to migrate to the cloud and modernize existing applications.

With Datadog, they were able to consolidate their APM, infrastructure, and logs to gain one unified view across business units, helping them reduce their mean time to resolution and give engineers time back to innovate on new delivery and curbside product offerings. Let's go back to the go-to-market. All of our sales motions are fundamentally land and expand, no matter which way we do it. Different types of customers buy in different ways, so we meet our customers wherever they are, and our go-to-market is organized that way. For our customers who just want to get going and don't really need any help from us, you can see when you go to our website that you can get a two-week free trial, and if you find value, you can put in your credit card, and we'll bill you monthly.

We have a very strong commercial sales team which has an inside sales motion. They work with our SMB customers with 1,000 employees or fewer and our mid-market customers with 1,000-5,000 employees. About 5 years ago, we started to build out our enterprise sales team, and this is how we go after customers with 5,000 employees or more. Compared to our commercial sales motion, you won't be surprised to hear sales cycles are longer, but per customer land value is higher. What's interesting is we have some customers in higher segments who start out the same way as SMB customers. They self-start or start with very little interaction with Datadog, and then the sales motion around these bigger organizations really starts to develop as it goes. Let me give you an example.

In 2019, we signed a multinational distribution company to a deal for $2.50 per month. By this year, the company had grown to become over a $1 million ARR customer with us, and they continue to grow rapidly. As Datadog gets larger, there will be more and more opportunities to specialize within the company. We're constantly working to meet our customers where they are and adapt to their needs. As we've grown, we have more products to sell, more geographies and territories to cover, and there are more ways that it could make sense for us to segment to make sure that we're able to service our customers in the best way possible. Here's how it's played out for our support teams over the years. As we've gotten bigger, we've become more and more specialized over time.

The most recent evolution has been our addition of our professional services capability, which we're just starting to bring to our customers. To this point, our product has been very easy to use, and professional services have not been needed by most of our customers. As we continue to reach into larger and more complex enterprises, there are some customers who want to get started with their deployment right away with minimal risk and uncertainty and leverage all of our best practices. In the future, we will likely get even more sophisticated and specialize and segment in order to best serve our customers. This land and expand sales motion, coupled with our increasingly sophisticated go-to-market motion, has been incredibly successful. As you can see, we have seen very strong customer growth, but we've also seen customers continue to spend more with us.

We've highlighted this with you in our earnings calls, but you can see how that group of larger spending customers is becoming an increasingly large portion of our ARR. Meanwhile, the large number of smaller sized customers keep us honest with our products and their ease of use while expanding with us over time. We're really pleased with how our go-to-market has evolved as our product has, and we're proud of its execution. The changes we are making now set us up for many years of success to come. That's it for me. Thanks for your time. I'll hand it off to David now.

David Obstler
CFO, Datadog

Thanks, Adam. Hi, everyone. I know many of you, but just to reintroduce myself, I'm David Obstler, and I'm Datadog's CFO.

To go over some key takeaways. One, we are a product-led company driven by continuous innovation. Two, we have a relatively frictionless customer-led sales motion that helps us drive our land and expand relationship with our customers. Three, we have a strong customer growth trend, including with our largest customers. Lastly, all this leads to high revenue growth with operating efficiency. Over the past 10+ years, we've had strong and accelerating product innovation as we built out our observability platform and began to enter other areas. This is a result of the culture of innovation and relentless customer focus that you've heard about today. It's a result of our aggressive investment in R&D. This chart shows our trailing-twelve-month R&D spend as a percentage of sales against some of our peers.

As you can see, we spend a lot in R&D, but we believe we get a very strong ROI from that investment. That product innovation, combined with the frictionless land and expand motion, has resulted in a very strong gross and net retention, with mid-90s% gross retention and over 130% revenue retention rate. Another way we know our platform strategy is resonating is that we see our customers adopting multiple products in our platform. These charts show the percentage of our customers using 2 or more and 4 or more products. Our frictionless land and expand, and our strong go-to-market efforts have led to strong customer growth overall, with over 16,000 customers using our products today. We've seen strong growth in our highest spending customers.

On the left are our customers who spend over $1 million in ARR with us, and on the right are our customers who spend over $100,000 in ARR with us. As you can see, both are sloping up sharply. Among our customers who spend over $100,000 in ARR with us, their average annual spend is about $500,000. This customer acquisition and platform adoption is happening with best-in-class sales efficiency. This chart shows our CAC payback period relative to peers for the most recent quarter. Our sales team is executing very strongly as they help our customers understand the value of adopting the Datadog platform. All of this has resulted in excellent financial performance, very high revenue growth, and very strong profitability and leverage in our model. Thank you. That's it for the presentation portion of our investor meeting.

Now we're going to move along to Q&A.

Yuka Broderick
Head of Investor Relations, Datadog

Hi, everyone. We're going to start the Q&A session now. Let me introduce our executives today. On my left, CTO and co-founder Alexis Lê-Quôc, CEO and co-founder Olivier Pomel, and CFO David Obstler. A couple of instructions. As a reminder, we are taking questions from both the audio line and the Q&A submission window. For those of you on the webcast, feel free to put in questions into the Q&A window at any time. For those of you on the audio line, the instructions are, press one on your telephone keypad to be placed into the queue. You can withdraw your question by pressing the pound sign. To reach an operator, press star zero. Please mute your speakers and webcast when your line is opened so you don't get any audio feedback.

Finally, I would like to remind you that we're a week away from reporting our financial results for the third quarter. Please hold your questions about financial performance for that earnings call. We will not be discussing financials today. With that, we will take our first question from the Q&A submission window. It's for Olivier Pomel and Alexis Lê-Quôc. There were a lot of product announcements yesterday at DASH. Is there anything you're most excited about or particularly proud of?

Olivier Pomel
CEO and Co-Founder, Datadog

I'll start with the thing that's most exciting to me, which is just the sheer number of announcements we've made and both the breadth and depth of innovation we've seen, you know, which really speaks to how well our product and engineering teams are working and how productive they are. If I were to summarize what we've seen at DASH, I would say there are three areas that really excite me. The first one is that we're doubling down on observability with announcements such as Universal Service Monitoring. The second is that we're really giving more options for our customers to manage data at very, very large scale. We are seeing that with Observability Pipelines, for example. We're seeing that with live archives. They are all very important as data volumes are exploding.

Really the last thing that I think is exciting is that we're breaking down more silos. We're bringing in more personas. We're doing that by going deeper into development use cases with CI and CD visibility. We're getting the support teams involved, the product teams involved, with products such as Session Replay, or, you know, funnel analysis. We're getting network teams, network engineers involved with Network Device Monitoring. We're also getting finance teams involved. You know, David, you'll like that, with Cloud Cost Management. You know, we still have, I think, more silos to break down, but I'm very excited that we're making progress, and we showed that yesterday.

Alexis Lê-Quôc
CTO and Co-Founder, Datadog

I think as for me, it's a couple of themes that are particularly interesting, particularly, you know, from a technical standpoint. I think the amount of data we're able to capture and analyze, you know, continues to increase rapidly, and I think that's allowing us to really provide a lot more value. It also allows us to, I think, continue to exercise the AI and machine learning muscle that we've been building over the years. That's really exciting for me, and I think that the teams, the engineering teams at Datadog are particularly eager to tackle the challenge.

There are things around, I think, Observability Pipelines that are also very promising, in that it's a new way for us to, I think, to continue to grow, and so that makes me happy.

Yuka Broderick
Head of Investor Relations, Datadog

Great. We'll take the next question from the audio line. Chris, can you introduce the questioner, please?

Operator

Yes. Thank you. Our first question will be Sanjit Singh of Morgan Stanley. Your line is open.

Sanjit Singh
Executive Director, Morgan Stanley

Thank you so much for taking the questions, and congrats to the Datadog team on all the innovation that you've announced this week. It's been a heritage of the company, and it seems like it's going quite well. I wanted to focus on the security opportunity because that was a focus of the presentation today. When I think about how the observability market converged, there were a number of factors at play. I think one of the factors was access to data, with the OpenTelemetry movement and the OpenMetrics movement, right? Those movements and others made it easier to bring data into the platform.

When we think about the security and converging these two teams, it seems like security data is much more proprietary, particularly when we think about some of the network and the endpoint security. How do you think about that as a friction in terms of breaking down the silos between these two teams, and what initiatives do you have to get access to those other data sets?

Olivier Pomel
CEO and Co-Founder, Datadog

Well, actually, we think that most security data is not pure security data. Most of it is data about what's happening on the systems: what applications are running, what those applications are doing, which users are interacting with those applications, and what outputs they produce, you know, in the form of logs. We think the vast majority of security data is actually pretty much observability data. Of course, there are some additional data sets that are specific to security. If you zoom out a little bit and look at what makes it difficult for security teams to do their jobs, it is that to get all of that observability data, they typically have to jump over many, many different hurdles. They have to instrument systems and controls.

They have to basically stand in the way of development and operations teams. We think that's one of the reasons why it's been so difficult to get those security teams to work. Where we can play a role, and where I think we are differentiated, is that we have all of this observability data. We have the attention of the development teams and the operations teams, and we can help the security teams instrument and add a little bit of their own data sets on top of that, without incurring the friction of instrumenting everything else. We think that's one of the areas we differentiate in.

Yuka Broderick
Head of Investor Relations, Datadog

Great. We'll take the next question from the audio line again. Chris, the next questioner, please. Chris, can you introduce?

Operator

We have Brent Thill from Jefferies. Your line is open.

Brent Thill
Tech Sector Leader of Software Research, Jefferies

Great. Thanks for today. The question was just around large enterprise adoption. I think you highlighted a number of customer case studies that showed the number of modules added and, you know, the revenue growing over time. Can you be a little more specific around kinda what you're seeing in some of these much larger enterprises that have mixed cloud and on-prem data? What are you seeing in that specific cohort of customers? Thank you.

Olivier Pomel
CEO and Co-Founder, Datadog

Maybe I'll start, and David, if you want to add a few things.

Alexis Lê-Quôc
CTO and Co-Founder, Datadog

Sure.

Olivier Pomel
CEO and Co-Founder, Datadog

What we see is that it's still very early for most of our large enterprise customers in terms of their digital transformation and their cloud migration. They're still mostly in a phase where they're adopting the cloud, growing, and getting to critical mass in these cloud environments. We are only seeing, at this point, a handful or a small proportion of our large customers that are getting deep enough into their cloud migration to ask themselves what they're going to do in terms of standardization across cloud, on-prem, and legacy environments. The vast majority of our business today is those customers that are going into the cloud and are still getting to critical mass.

We're starting to see some standardization, and we're starting to extend back into some of those legacy environments. That's one of the reasons why we built new products such as Network Device Monitoring, which we also discussed during the conference, and which allows us to connect back to the network equipment our customers are going to deploy, or still have deployed, on-premises, and that they want to see alongside their cloud environments.

Alexis Lê-Quôc
CTO and Co-Founder, Datadog

I'd like to add that in the parts of large enterprises which are focused on migration and cloud workloads, we're seeing similar motions as in our other types of customers, SMB and mid-market, which is to land, usually with an infrastructure position, and then we're seeing the same growth.

David Obstler
CFO, Datadog

Of the use of the platform, both in terms of number of units and breadth of products. In those cloud areas in enterprises, it's evidencing very similar motions to those we see throughout our customer base: land and expand, grow, and spread across the whole platform.

Yuka Broderick
Head of Investor Relations, Datadog

Great. Thanks. All right. Our next question is from the Q&A submission window. You mentioned the importance of tagging to your monitoring tools. How does this work for your security tools?

Alexis Lê-Quôc
CTO and Co-Founder, Datadog

I can take this one. Yeah. I think tagging, as we explained a little bit earlier, is really the backbone of how we built Datadog. Much like Olivier's comment that security data and observability data have a lot of overlap, that's exactly how we think about tagging in the security context. You know, we'll do the security analysis with the same tags that are relevant not only to security teams, but also to development teams and operations teams, so that everybody can effectively talk about the same thing.

Olivier Pomel
CEO and Co-Founder, Datadog

You know, one way to think about it is that tagging is about context. The problem you have if you try to look at security in isolation is that you have all these data flows, but you have no idea what's normal, what corresponds to legitimate activity and what doesn't. Through tagging, we connect the security data to the operational and development use cases. Because operational people and developers use the product all day long for their own use cases, all of the metadata and the tags that we capture are clean. They are validated because they are used.

You know, when data is used to wake up people in the middle of the night, if it's not clean, it gets cleaned up pretty quickly. Data that is used stays clean and valid, and then the security teams can rely on that data and those tags showing up on their end of things to make sure that they know what they're actually looking at.
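The idea of tags as shared context can be sketched in a few lines. This is an illustrative model only, not Datadog's implementation or API; all field and tag names here (`service`, `env`, `team`) are hypothetical stand-ins for whatever tag vocabulary a team actually uses:

```python
# Illustrative sketch: a security signal and an operational metric that share
# the same tag vocabulary can be correlated purely on tag values, so the
# security team inherits the context the ops team already keeps accurate.

from collections import defaultdict

# Observability events, tagged at ingestion (all values hypothetical).
metrics = [
    {"metric": "cpu.usage", "value": 0.92,
     "tags": {"service": "checkout", "env": "prod", "team": "payments"}},
]

# A security signal carrying the same tag vocabulary.
security_signals = [
    {"rule": "unusual_outbound_traffic",
     "tags": {"service": "checkout", "env": "prod"}},
]

def correlate(signals, events, keys=("service", "env")):
    """Pair each security signal with observability events sharing its tags."""
    index = defaultdict(list)
    for e in events:
        index[tuple(e["tags"].get(k) for k in keys)].append(e)
    return [(s, index[tuple(s["tags"].get(k) for k in keys)])
            for s in signals]

for signal, related in correlate(security_signals, metrics):
    # The security team sees the signal alongside its operational context.
    print(signal["rule"], "->", [e["metric"] for e in related])
```

The point of the sketch is that the join key is the tags themselves: no extra security-specific inventory has to be built or kept in sync.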

Yuka Broderick
Head of Investor Relations, Datadog

Great. Thank you. All right. Our next question is gonna be from the audio line. Chris, please introduce the next questioner.

Operator

Thank you. Next we have Raimo Lenschow of Barclays. Your line is open.

Raimo Lenschow
Managing Director, Barclays

Hey, thank you. Really impressive conference here with a lot of new stuff, so it's difficult to find a question. So I have, like, two quick ones. First, can you talk a little bit about Vector and its importance for the long run for you guys? Because, like, most people will have on-premise and cloud, and it seems to me like Vector could be an important role to bring more on-premise data into the cloud and hence make you more the center of the universe. Am I thinking the right way, and where are we on that maturity curve?

Alexis, as you go and broaden out the product portfolio, and there's so much stuff coming out of you, like, how do you think about, like, keeping that kind of platform together and making sure all these teams are still kind of working towards, like, one unified kind of solution offering? Thank you.

Olivier Pomel
CEO and Co-Founder, Datadog

Cool. All right. Maybe I'll take the first one, Vector. Vector is very interesting because, for those of you who are not quite sure what the name is, it's the open source project that's at the core of the Observability Pipelines. That's the part of it that lives on customers' infrastructure. First of all, it's one of the ways we give customers more flexibility about what they can do with their data: how much of it they send to us, how much of it they archive, how much of it they get rid of, how much of it they sample, how much of it they anonymize, because of some of the regulations that they might be subject to.

You know, it really puts all of the control, all of the levers in the hands of the customers, and we think it's a good thing. That's how we think about our platform in general, like what we've done with Logging without Limits, Metrics without Limits, and Tracing without Limits. These are all ideas to put the customers in control. That's one thing. The other point you brushed on was the fact that it really gets us closer to a lot of the data sources our customers might have in their legacy data centers. They have a lot of data already flowing into some, you know, data silos in their data center.

It gives us a way to get deeper into this legacy world and to get some of that data flowing into Datadog, which plays again into what our customers can do later down the road as they have to standardize everything into one management platform that spans both their cloud environments and their existing legacy platforms. It's interesting in both directions.

You know, if you zoom out a little bit more and you think about the evolution of these data streams in the enterprise, the data volumes are exploding so fast that to us it's a given that we will need to give customers more flexibility, and we will need to keep inventing new ways to make data movement easier to manage and more efficient at scale. That plays into that. Alexis?
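The levers described above, sampling, anonymizing, and routing data before it leaves the customer's infrastructure, can be sketched generically. This is a minimal illustration of the concept, not Vector's actual configuration or API; every function and field name here is hypothetical:

```python
# Minimal sketch of the controls an edge pipeline gives customers:
# sample, redact, and route log events before anything leaves their network.
# Names and behavior are illustrative, not Vector's API.

import hashlib
import random

def sample(events, rate, rng=random.random):
    """Keep roughly `rate` fraction of events to control volume."""
    return [e for e in events if rng() < rate]

def anonymize(event, fields=("user_email",)):
    """Hash sensitive fields so raw values never leave the customer's network."""
    out = dict(event)
    for f in fields:
        if f in out:
            out[f] = hashlib.sha256(out[f].encode()).hexdigest()[:12]
    return out

def route(event):
    """Decide per event: forward to the vendor, or archive locally."""
    return "forward" if event.get("level") in ("error", "warn") else "archive"

events = [
    {"level": "error", "user_email": "a@example.com", "msg": "payment failed"},
    {"level": "debug", "user_email": "b@example.com", "msg": "cache miss"},
]

for e in (anonymize(e) for e in events):
    print(route(e), e["msg"])
```

In the real product these decisions would live in pipeline configuration rather than code, but the sketch shows why running the decision point on the customer's own infrastructure is what puts the levers in their hands.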

Alexis Lê-Quôc
CTO and Co-Founder, Datadog

I think on the question about how we grow the platform with so many products being announced, it's a couple of things. One, it's us being pragmatic, which, you know, is a theme that was recurring, and paying a lot of attention to details. You know, pretty much all engineers and product managers at Datadog use the product every single day. They are in conversation with customers every single day. What that means is that they can have a very direct and sort of almost visceral understanding of how the product, you know, feels and how it behaves.

It gives them ideas on, if I work on a particular product, how does it connect to the rest of the platform? The way it usually connects to the rest of the platform is, you know, both from a technical standpoint in terms of API calls and data structures and so on, but also from a user experience standpoint: what you can see, what context we bring in front of you, where, as a user, you can go from one screen of one product to another screen of another product.

I think that's something that we try to keep alive and that attention to detail is extremely important, I think, to deliver an experience that doesn't feel disjointed, which is obviously the risk I think you alluded to as we continue to grow the platform. We'll keep a keen eye on making sure that we never get there.

Olivier Pomel
CEO and Co-Founder, Datadog

Alexis is being modest, but I think his team is doing a fantastic job not only at building software incredibly fast, but also at maintaining empathy with the customer. That's something that we insist on at Datadog. I mean, we use the term dogfooding, which is appropriate for us, I guess, for using the product, you know, day in and day out. I'll quote Alexis again to finish, which is he likes to tell his teams that we build the software, but we sell the service. That's really, you know, how everyone feels about maintaining the user experience there.

Raimo Lenschow
Managing Director, Barclays

Thank you.

Yuka Broderick
Head of Investor Relations, Datadog

All right. Thanks. We'll take another question from the audio line. Chris, next questioner, please.

Operator

Thank you. Next we have Michael Turits of KeyBanc Capital Markets. Your line is open.

Michael Turits
Managing Director, KeyBanc Capital Markets

Hey, guys. Thanks very much. This is an excellent presentation. Very concrete and very broad and strategic as well. I think that it raises the question of where there are or are not boundaries or limits to where you can go or what's most natural for you right now. I wanna ask in two different areas. One is in the CI pipeline product, CI Visibility pipeline. So you're partnering with these two CI people. Where does what you do from a visibility perspective go? Or where does it end, and where does what they do begin? I wanted to ask it in another area too, a very similar question with Session Replay.

Where do you extend what you're doing, and where do the people who are really focused on product management and product analysis begin? Where's the boundary between what you're doing and what they're doing in those two areas?

Olivier Pomel
CEO and Co-Founder, Datadog

Maybe I can take a stab. In terms of the CI/CD side of things, the boundary between what we do and what our partners who specialize in CI/CD do, it is fairly similar to the boundary we have everywhere else in the system, which is we typically don't engage with the runtime. Like, we don't run things directly. We engage with observing, managing, securing everything around it, but not the actual runtime. Our customers will actually combine many different runtimes. I mean, as they combine different clouds, as they combine on-prem and cloud, as they combine, you know, different types of frameworks, different languages of applications, that's typically not what we do. Like we manage and we watch, and we secure. That's for CI/CD.

In terms of the products such as the Session Replay, we already are seeing quite a bit of engagement in Datadog from product teams and business teams, and quite a bit of usage from product teams and business teams. Because when you run a business that is digital, either through transformation or because it was cloud native, your application is running your business. If you understand your application, you understand your business, there's a lot you can do and understand by looking at the application itself. We already have a lot of usage from those teams.

I think what we're doing with Session Replay and, you know, Funnel Analysis, a number of new products we've introduced over the past couple of years in that area is making it easier for those product teams, support teams to actually make use of the data. There, I think, we keep going deeper and deeper. I think there's many more use cases we can go after.

Yuka Broderick
Head of Investor Relations, Datadog

Great. Thank you. Our next question is a webcast submission question. It's from Sterling Auty at JPMorgan. Looks like you are moving into on-premise infrastructure management with some of the announcements. How deep do you plan to go, and how effective do you think these initial announcements will be in competing with traditional on-premise infrastructure solutions?

Olivier Pomel
CEO and Co-Founder, Datadog

Right now we're just getting started there. I think it's an area of investment that's going to, you know, keep us busy for the next few years. We think for now there's only some parts of that that are really relevant to us, starting with the network devices as we announced yesterday. We also think that, you know, at the horizon of, you know, 2, 3, 4, 5 years maybe, when the bulk of the market or a large fraction of our customers are reaching critical mass in the cloud, and they have to think about standardizing the rest of their infrastructure into some common management and monitoring, we think that is going to play a bigger role there.

We'll keep adding to that over that period of time. I will say that we have to be very deliberate about which parts of the legacy stack we support because there's a very large amount of systems we might have to connect to there. We're really following customer demand there to guide where we go. Then Alexis here has to go on eBay and buy some old equipment so we can rack it and we can integrate it.

True story.

Yuka Broderick
Head of Investor Relations, Datadog

Okay. Thank you. Next question from the audio line. Chris, introduce the next questioner, please.

Operator

Yes. Next, we have Kash Rangan of Goldman Sachs. Your line is open.

Kash Rangan
Managing Director and Co-Head of TMT, Goldman Sachs

Thank you so much. Congratulations on a terrific conference and analyst day. Two questions. One is, as you talk about the common platform that can address opportunities in APM, infrastructure monitoring, security, logging, et cetera, what are the trade-offs that you're making? I do believe that you have a common platform that addresses these different markets, as opposed to others that might have put things together. If the other strategy is the wrong strategy, that is, you make acquisitions to develop this suite, how does it manifest in customer value or deployment success, and therefore your ability to win more business?

I guess that is one question, because I think we're all trying to figure out, you know, if the common platform is the right way to do it, how is it gonna show up? Secondly, when you look at the CI initiative for the company, do you foresee yourself doing other things? If you get down that path, then you gotta get into CD, you gotta get into software code management, you gotta get into planning, a whole bunch. You gotta keep moving up that food chain, right? How do you see that playing out for the company, in the midst of a couple of other companies that do cover more of that spectrum? Thank you so much.

Olivier Pomel
CEO and Co-Founder, Datadog

Yes. I can start. What was the first question again? It was the

Yuka Broderick
Head of Investor Relations, Datadog

It's a question about whether our strategy works.

Olivier Pomel
CEO and Co-Founder, Datadog

Oh, yes. On the unified platform. Sorry. Yes.

Yubin Kim
Managing Director, Loop Capital Markets

Yes.

Olivier Pomel
CEO and Co-Founder, Datadog

Yes. What I would say there is, if you look back at what we've discussed during the presentation today, really the big problem that our customers are facing, that we're here to solve, is escalating complexity: an explosion of systems, an explosion of data, an explosion of everything. To us, it's pretty clear that what our customers need us to do is to simplify everything and bring everything under one roof. One platform, one UI, one language really is a core part of this simplification. There's no question to us that this is the path forward.

If you take it more specifically to the question of one platform across DevOps and security, the way we think about it is there are so many more employees at every one of our customers in development and operations than there are in security. Ops and dev people typically outnumber security people by a factor of, you know, 20-to-1, 50-to-1, 100-to-1. To us, it's also pretty clear that if you want to solve the problem of securing those workloads at scale, you need to get everybody involved, otherwise it's just not going to work. To us, it's almost a given that the platform is the right strategy. We hear that from our customers.

They don't want to be integrators. They don't want to spend their time connecting everything and trying to get different teams to talk to each other. Of course, you know, it does come with some, I would say, extra constraints. I mean, one is you actually have to build a unified platform. You know, that sounds easy, but then it's Alexis's job to make sure that we actually build the components and have everything coming together for that. It also means that when we do grow through M&A, which we've done successfully in the past, we have to be very careful about not losing the unified platform aspect of what we do.

In many cases, there are large parts of the companies we buy that we actually completely replatform on the existing Datadog platform. It doesn't have to be everything, though. You know, if we take the example of observability pipelines, they rely on Vector, which is an open source piece of software that lives on our customers' infrastructure. We didn't have to replatform that. Like, we could reuse that completely as is. You know, you'll see a little bit of everything moving forward with the acquisitions we make. The second question was on CI/CD, and we'll see where it takes us. I think, you know, right now it's exciting to us because it's a big problem area for our customers.

They spend an enormous amount of time on it, and so do we internally at Datadog to build all these beautiful services. We spend an enormous amount of time on continuous integration and continuous deployment and testing and validating software and cleaning up our pipelines for that. We know it's a big problem for our customers, but it's a bit more of a nascent category. We'll see where it takes us.

Yuka Broderick
Head of Investor Relations, Datadog

Great. Next question from the audio line again. Chris, please introduce.

Operator

Thank you. Tyler Radke of Citi, your line is open.

Tyler Radke
Managing Director and Senior Equity Research Analyst of Software, Citi

Hey, thanks for taking my question. Wanted to just ask a couple things on security. First, if you could just kinda contrast your approach with security monitoring and analytics with some of the XDR approaches that we see from some of the security vendors and why you think you're differentiated. Then secondly, from a go-to-market perspective, you talked about how security could be as big, if not bigger than the opportunity in observability. From a demand or, you know, bookings perspective, when do you think that kind of-

Gets as big as your core observability business today from a new business perspective. Just give us a sense for the timing of when that market really matures and you think the products are ready. Thank you.

Olivier Pomel
CEO and Co-Founder, Datadog

Yeah. Look, on the differentiation, I think for us, the differentiation is clear: we're going to have very low friction for security because we already have all the observability data, and it's a very big deal. I mean, it's very hard for security teams, as we mentioned earlier, to get instrumentation deployed. It's also costly for enterprises because every time you add new instrumentation, you pay a performance overhead cost. You don't want too much of that. The other differentiation for us is we speak to all the right people. We speak to the larger crowd, which is developers and operations folks and the users, day in and day out.

You know, it's really hard, coming from a pure play security product, to actually get in front of those crowds. We think this is the high ground. Now, of course, we still have a lot of work to do to fully deliver on our promise there. I don't want to sound like we've got it all figured out, but I think we're starting from the right spot. In terms of when it's going to be massive, I can't really tell you that. I think we know we still have a lot of investment to make in the product. We also know that the market itself is still developing for DevSecOps.

The way we think about it is that the market today, in terms of convergence of, you know, DevOps and security, is somewhat similar to the market we saw ten years ago in terms of convergence of dev and ops. We're not exactly sure when it's going to coalesce yet, but we're pretty certain it's going to be the destination.

Yuka Broderick
Head of Investor Relations, Datadog

Okay. Thank you. The next question is from the webcast submission window. It seems like you have many deep technical integrations with Azure products. When it comes to shifting left, how do you think about aligning more closely with GitHub versus keeping integrations open with potential competitors like GitLab?

Olivier Pomel
CEO and Co-Founder, Datadog

Well, the answer is we're doing both, so we integrate with everyone. Alexis, anything to add?

Alexis Lê-Quôc
CTO and Co-Founder, Datadog

Yeah. I mean, I think we do have, you know, deep technical integration with Azure, but you can see the same with the other cloud providers. We try to stay relatively agnostic there. Again, you know, that's true for cloud providers, and that's true for various, say, CI/CD and source code management tools. You know, we're not trying to play favorites. We are largely driven by what our customers use.

Olivier Pomel
CEO and Co-Founder, Datadog

Again, the value we provide is we bring everything under one roof for our customers. Very often, our customers are also going to be using multiple of those solutions. They'll have, you know, some GitHub, they'll have some GitLab, they'll have some Atlassian, they'll have other things. Our job is to make sure we play very nicely with everything. We simplify it for our customers. We make the complexity go away.

Yuka Broderick
Head of Investor Relations, Datadog

Great. Thank you. Let's take the next question from the audio line. Chris, please introduce the next questioner.

Operator

Thank you. Next we have Bhavan Suri of William Blair. Your line is open.

Kamil Mielczarek
Equity Research Analyst, William Blair

Hi, this is Kamil Mielczarek on for Bhavan Suri. I have a question about the Synthetic Monitoring solution and the evolution of that product. As you think about helping customers preemptively identify problems, do you plan to add AI to recommend what should be fixed? Will you ever build a product that does that next step of work of automatically moving the workloads to a place where it doesn't break?

Olivier Pomel
CEO and Co-Founder, Datadog

We're adding AI everywhere. Every single product is looking at ways to automate and get in front of problems and recommend and accelerate resolution by pointing in the right direction. We're doing that across the board. We actually already have quite a few smarts built into the Synthetics products, for example, so that you don't have to modify your Synthetic test every time you modify your application. Like, we automatically are going to understand what's changing and how to adapt the test for that. Of course, there's going to be much more. We don't have any other features to announce on that today, though, but I can tell you that the teams are fairly busy with all this.

Yuka Broderick
Head of Investor Relations, Datadog

Okay. All right. Thank you. Next question from the audio line again. Chris, please introduce the next questioner.

Operator

Thank you. Next we have Gregg Moskowitz of Mizuho. Your line is open.

Gregg Moskowitz
Managing Director and Senior Enterprise Software Analyst, Mizuho

Okay. Thank you for hosting a great event. First of all, I was glad to hear that one enterprise customer that Adam referenced decided a few years back to spend their $2.50 on Datadog instead of a cup of coffee. I think that's worked out pretty well for both the customer and for Datadog. My question is for Olivier or Alexis on unified tagging. Naturally, your broad product line and your ability to correlate data for monitoring purposes is a competitive advantage. You know, if we can put that aside for a moment, it sounds like you also think your tagging technology itself provides deeper contextual awareness.

Given that a number of other companies also employ tagging, wondering if you can expand on why your tech is able to provide customers with deeper insights?

Alexis Lê-Quôc
CTO and Co-Founder, Datadog

Sure. I can take that one. You know, first of all, when we think about how we build our products or where we should build, we obviously focus pretty much on customers and don't spend too much time on competition. You know, I'd say one of the things that has made our tagging approach successful is that it's been baked in from the start.

We haven't needed to go back and add it after the fact, because it's so essential to how we build products that, to everybody who builds product at Datadog, it's completely obvious that it should be built around tags. Then there are the technical pieces around how we scale that, of course, and that's a differentiator too. From a customer standpoint, at the end of the day, our tagging should be completely pervasive, should scale without any limitations, and should be in the service of breaking down those silos. We're here to enable that.

You know, the job of the engineering teams at Datadog is to make the complexity of making that work at scale go away. The mere fact that there's a common language between development, operations, security, and, with some of our products, product teams, is really powerful. Every single new product that we put out has the tagging capability built in from day one, and I think that will continue to compound and build up.

Olivier Pomel
CEO and Co-Founder, Datadog

You know, it goes far beyond, "Hey, and you can also add tags." As Alexis said, it's built from the inside out, with tags everywhere. All of the use cases, all of the workflows, all of the ways people collaborate in Datadog are based on tags. It starts with the right smarts: the right tags show up by default, and it's built into all of our integrations. Then we have all the right interactions between developers and operations folks, as they go and solve issues, deploy new software, and deploy new infrastructure, to make sure that all the tags show up, they're all up to date, and they automatically populate to where they need to be.

It really goes to the way the product and the user interactions are architected end-to-end, and we think it's actually really hard to replicate. We haven't seen anybody replicate that to date.

Yuka Broderick
Head of Investor Relations, Datadog

Great. Thank you. Our next question will come from the webcast submission window from Jack Andrews at Needham. Could you speak to what types of metrics your largest customers are focused on deriving from observability solutions these days? Is it still just about number of critical incidents or mean time to resolution, or are they looking to tie observability into tangible business outcomes like measuring net promoter score or something else?

Olivier Pomel
CEO and Co-Founder, Datadog

Oh, it goes way beyond that. We have customers measuring revenue in real time with Datadog. We have customers measuring the quality of the service they're delivering and their key metrics. For example, customers in mobility that are, you know, getting their executive team directly alerted based on the amount of traffic and the fraud rates and the prices at the city level at any point in time. We have customers operating telecommunication platforms that are tracking the quality, the number, and the duration of calls in real time. We have a number of CEO dashboards built on Datadog, you know, for that reason. We see that happening.

The applicability of these use cases to our customers really depends on how far along they are in their digital transformation. Basically, it becomes really transformative for them when the application runs their business because they can actually get that real-time window into their business through Datadog. That's where we realize the full value of bringing everybody on the same page, you know, because we have the full continuum of information that goes from the CPU usage all the way up to the business outcomes and with everything in between and all of the various teams involved in between.

Yuka Broderick
Head of Investor Relations, Datadog

Great. Okay, next question from the audio line. Chris, please introduce the next questioner.

Operator

Thank you. Next we have Brad Reback of Stifel. Your line is open.

Brad Reback
Managing Director, Stifel

Great. Thanks very much. Maybe one quick one for David and one for Oli, both related to the on-prem side of the business. Any sense of what percent of ARR comes from on-prem today? Oli, as you sort of look at the workloads you're picking up, are those predominantly net new cases, or are you replacing legacy vendors? Thanks.

Olivier Pomel
CEO and Co-Founder, Datadog

I will start with the bit on the replacements. Whenever we start connecting to legacy on-prem equipment, it's always in a way a replacement because there was always something in place to look at these environments to start with. We never start with that replacement. We always start with the net new cloud environments, then we grow into those cloud environments, then our customers grow themselves into these cloud environments, and they expand their footprint with Datadog, and then they bring us into their legacy environment in their network equipment in this case. Yes, it's a replacement, but no, that's not where we start. By the time we get to that replacement, it's a foregone conclusion that customers want to standardize on Datadog.

David, do you want to comment on the other one?

David Obstler
CFO, Datadog

Yeah. We haven't given specific numbers, but what we said is that, on the monitoring side, we're able to monitor the public clouds and private clouds, the latter being the on-prem side. A few years ago, it was more of a 50-50, and it has shifted more towards the public cloud, but there's still a very large percentage of private clouds that we're monitoring. That's on the monitoring side. On the delivery of our product side, as you know, we deliver almost everything through multi-tenant SaaS, so that would be delivered in that way.

Brad Reback
Managing Director, Stifel

Great. Thanks very much.

Yuka Broderick
Head of Investor Relations, Datadog

Next question from the webcast submission window, it's from Itai Kidron at Oppenheimer. How far down DevSecOps do you plan on going? Is SASE, Secure Access Service Edge, something you could enter?

Olivier Pomel
CEO and Co-Founder, Datadog

We don't know how far yet. I think right now the problem is so big, even the observability part of security alone, like observability and response, that, you know, we're very busy with that. As the market develops and as the world gets closer to DevSecOps, we'll see where this takes us.

Yuka Broderick
Head of Investor Relations, Datadog

Okay, great. Next question from the audio line. Chris, please introduce the questioner.

Operator

Thank you. Next, we have Yubin Kim of Loop Capital Markets. Your line is open.

Yubin Kim
Managing Director, Loop Capital Markets

Thank you. I just have a question on the go-to-market. Obviously, you guys are larger and growing faster than your competitors. The question is, how much can you start to leverage the partner ecosystem to offload some of the direct sales and marketing efforts? You also talked about growing your professional services business. How vibrant is that partner ecosystem around professional services today? Is there an opportunity to leverage that, or is this something that you have to build largely yourself? Thanks.

Olivier Pomel
CEO and Co-Founder, Datadog

On both questions, we're still super early in developing those. I mean, we're investing in the partner ecosystem. We're investing in the channel. There's a lot of interest from the partners as well, but it's not a dominant part of our business today. Right now we're still largely direct and growing very fast. Yes, it's important. Yes, we're investing, but we're still super early, and we don't have much more to share today.

David Obstler
CFO, Datadog

Just a point on the professional services: given how easy it is to implement and get up and running, given the platform, we don't expect professional services to ever be a significant percent of the business. We use professional services, or are beginning to, to try to facilitate the education and usage of the product by our clients. We're very early there, and we'll let everybody know as we, you know, develop that model going forward.

Yuka Broderick
Head of Investor Relations, Datadog

Thank you. Our next question is from Jonathan Ruykhaver from Baird. Are there any comments you can offer on what newer products are seeing a lot of success outside of infrastructure, APM and log management? There's still a pretty big delta between customers using 2 and 4 products for you. Wondering if there are any specific products to call out driving customers to 4 today, and what's the path to see more adoption of 4+ products?

Olivier Pomel
CEO and Co-Founder, Datadog

Right now the adoption pretty much follows the order of introduction of the products. You can look at that. One of the slides we showed today was the rapid acceleration of the number of products we've shipped. You've seen that for the first few years of the company, we had one product, then we carefully added APM and logs, and it's only more recently that we started adding many more products on top of this platform. Many of those products are more recent, but we're, I would say, very happy with the uptake of all those products today. Again, if you look at the order of introduction, you'll get the order of adoption, pretty much.

David Obstler
CFO, Datadog

I'd like to add that when we give that metric, it's a good metric to see platform adoption. In a number of cases, or a lot of cases, we're not fully penetrated in the incremental products. They may have adopted, or they're in the process of adopting, but just because they've adopted 4, 5, or 7 products doesn't mean we're fully penetrated. We're seeing that in our land and expand, and in the growth rate and size of the adoption of those newer products by our customers over time.

Yuka Broderick
Head of Investor Relations, Datadog

Okay, thank you. Another question from the webcast submission. Can you talk about the state of your enterprise selling motion and how it has evolved? Has the typical profile of a new enterprise customer changed over the past two years, given how much deeper your portfolio of modules is?

Olivier Pomel
CEO and Co-Founder, Datadog

Well, I would say the profile of our new enterprise customers is pretty much the same as it was two years ago, which is enterprises of any size and in any industry. What differentiates them is how far along they are and whether or not they've started their cloud migration. That's what makes them a good candidate for us, basically. Again, our enterprise motion hasn't changed much over the past few years, but we're very busy scaling it. We're busy growing the teams. We're busy getting into every single geography, every single part of the market. There are still some parts of the market where, you know, we need to get to critical mass. We're busy scaling that.

Yuka Broderick
Head of Investor Relations, Datadog

Okay, great. Thank you. One more question from the web queue. Can you discuss how Datadog is adjusting go-to-market strategy for international markets? Is this a near-term opportunity or more midterm? And is there any reason why international clients would ramp differently?

Olivier Pomel
CEO and Co-Founder, Datadog

Our international clients are growing, and we're growing that side of the business as well. We're growing the teams, I would say, a little bit faster in the rest of the world compared to the U.S. The U.S. is also growing fast. The mix is not changing very fast, you know, in that respect. We see the exact same opportunity in the rest of the world as we've seen and validated in the U.S. The dynamics are fairly similar, actually. We see enterprises, you know, whether they are in Asia, in Europe, or in the U.S., adopting the cloud, and we see them growing very fast in the cloud.

We don't see any drastic differences there, even though, of course, there might be some differences in the way we go to market in specific countries.

Yuka Broderick
Head of Investor Relations, Datadog

Okay, great. Thank you. Another question from Pat Walravens at JMP Securities. Given all the announcements and progress at Datadog, what are Olivier's top two or three strategic imperatives for next year?

Olivier Pomel
CEO and Co-Founder, Datadog

My top two strategic imperatives are to hire more engineers and hire more salespeople. These were the same last year, and this is what we need to do. Again, we're constrained by capacity. There is plenty of demand in the market, and we need to make sure that we keep adding the right products fast enough, we keep developing the existing products fast enough, we keep scaling the team so we can support our customers, and we get enough of the right go-to-market teams so we get into all of the right conversations, in front of every single customer that needs us. That's all I work on, pretty much. Alexis has to make sure we build it all. That's the easy part, right?

David Obstler
CFO, Datadog

Yeah.

Yuka Broderick
Head of Investor Relations, Datadog

All right. Well, that's the end of our Q&A session. Thank you very much for your time. May I toss it to you, Olivier, for any closing comments?

Olivier Pomel
CEO and Co-Founder, Datadog

Well, thank you for attending DASH. I think it's been very exciting for us. It's always a bit special to run a conference online, because you basically record it beforehand and watch yourself, you know, during the conference, which is a bit weird. But this was very exciting for us. Lots of new announcements, and it really shows that the teams at Datadog, be they in product, engineering, or go-to-market, are doing fantastically well, and I want to thank the team for that.
