Good afternoon, everyone. My name is Yuka Broderick. I'm Vice President of Investor Relations and Strategic Finance, and I wanna welcome you to our investor meeting at Dash 2022. Before we dive in, a few reminders. During this presentation, we will make forward-looking statements, including statements related to our strategy, the potential benefits of our products, our investments in R&D and go-to market, and our ability to capitalize on our market opportunity. These statements reflect our views only as of today and are subject to a variety of risks that could cause actual results to differ materially from expectations. For a discussion of the material risks, please refer to our latest quarterly report filed with the SEC on August 8, 2022, and other filings that we may make with the SEC.
Our filings are also available on the investor relations section of our website, along with a replay of this presentation. We also discuss non-GAAP financial measures, which are reconciled to their most directly comparable GAAP financial measures in the appendix to this presentation, which is available at investors.datadoghq.com. Let me briefly run through our agenda for today. First, CFO David Obstler will talk about long-term dynamics in the cloud and next gen technology space, our growth drivers and market opportunities, and our investment in innovation. We'll follow that by a Q&A with David and our CEO and co-founder, Olivier Pomel. Then we wanna show you how our unified platform works to help our customers monitor system health and quickly identify and fix problems. That will be presented by Omri Sass, our Group Product Manager for Application Performance Management.
Omri and President Amit Agarwal will join us for a second Q&A following that demo. We will do another demo focused on our cloud security management capabilities. Prashant Prahlad, who leads product management for our cloud-native security products, will take the lead there. Finally, Prashant and SVP of Product for Security, Pierre Betouin, will join us for Q&A following the security demo. Okay. With that, let me turn it over to David to kick things off. David?
Thanks. Welcome, everybody. It's great to see so many people in person whom I've mainly seen on Zoom, and to provide this opportunity to come to New York. Before I start, I wanna thank Yuka and Mike Yang and Tina. Where are you, Tina? Tina, back there. You there? Thank you for arranging all of this. They worked very hard at it, much appreciation, and thanks also for all the great effort of the DASH team. I hope most of you had an opportunity to go to the keynote. You know, I work at Datadog, and I know a lot of the things that are gonna be said, but I walk away from something like that really impressed by Datadog.
When you think about the product velocity that's demonstrated, and the focus on client problems as evidenced by who was up on stage, the clients coming up, who was speaking, and the range of people that participated, it really makes me appreciate, and I hope it'll come out clearly through this conversation, that Datadog is a very product-focused company that is constantly innovating. Before we get to the demos, where you can see a little more of that from some of our product leaders, I wanna spend a little time talking about Datadog and how our platform helps our customers, a little bit about how we think about our long-term growth opportunities, and how we're executing against them. One second, let me catch up here.
Now I'm caught up. I wanna start with: what are the problems we're trying to solve for our clients, and how are we doing that? As backdrop, all of this is driven by two interrelated, very long-term secular trends that we think are still in very early stages: digital transformation and cloud migration, and they're related. What is digital transformation? It's what we're seeing in the world of all companies becoming software companies: companies investing at higher scale and impact in software applications that enable them to interact with their customers, their supply chain, their employees, et cetera, in a very proactive way. Then cloud migration: the movement of IT workloads from legacy IT to the cloud. Why are these related?
The reason is that cloud migration, because of the benefits of going to the cloud, the scaling benefits, the technology benefits, the ability to change business plans, enables this digital transformation. Those two trends are behind, as I'm sure many of you following this industry know, what is driving Datadog, its growth, and its product innovation. To dive in a little farther, those of you who have been with us on the whole journey know this slide; you've seen it before. It's one of Oli's favorite slides. It's what he's used to explain why Datadog at this time.
The reason is that this combination of digital transformation and cloud, because of the agility and flexibility, also means there's a lot more complexity introduced into the IT environment. Because of that, the legacy tools and the siloed tools have proven ineffective, and thus Datadog. In monitoring what's going on in modern DevOps environments, there's a huge growth in the complexity and number of pieces of hardware and services, and the atomization of that into containers. Clients have found, and Datadog's mission has been, to break down those silos and make this complexity more digestible, more able to be monitored, with more speed and flexibility in deploying. You saw that in the keynotes. That's sort of Datadog.
Datadog was birthed, and is relentlessly innovating, to help solve the complexity that clients experience in this environment. It's on a number of different levels. First of all, at the very core, in the middle here, you see our products, our SKUs. Datadog is a platform that is able to seamlessly integrate metrics that come from the infrastructure, traces that come from the software, and logs, to analyze how everything is working, all in one platform. In addition, and this has gone up, I think when we went public it was 300 or 400, it's now over 600 data sources that we integrate with in order to bring all that information into the platform and allow clients to analyze problems with as much of the data as possible. What they want is to see everything that could be affecting the software and the system.
All of that is enabled by a platform. There was a piece of the keynote about Watchdog, the platform, and workflows; these are elements of how it's all knitted together to provide utility for our clients. All of that is delivered, one, in a single pane of glass, and two, in a very ubiquitous and democratized way, which breaks down silos across lots of constituencies: developers who build the code, people in IT ops who put that code into production, security engineers, support groups that help maintain it, and then all the way to the business owners who rely on Datadog metrics to see what's going on in the business.
This is real-time data on what's going on with clients using the products, and it's increasingly been used by business folks. That's the crux of the Datadog platform. Here is the platform divided more into its products. It's a unified end-to-end platform that can be used seamlessly, with a UI that contains a lot of data and a lot of things prepackaged. Earlier today, there was a discussion of Powerpacks: how clients can just go in, point and click, and get very different metrics and charts, et cetera. All of that, so that it is simple, but not simplistic. These capabilities are offered together. We have it here: infrastructure, application monitoring, digital experience monitoring, logs, security, and the developer suite, and all of the SKUs that we talk about fit underneath this.
On top of that, they operate on a platform that is increasingly enabled with the Watchdog AI we talked about, and then shared platform capabilities: collaboration, dashboards, workflow. A number of these were announced today in the keynote and are things we're developing, and this is how the whole platform fits together. In looking at the platform, we've talked a little over the years about its expansion. Today there will be a whole section on security, so I'm not gonna go over it in great detail right now. I will say that what we're talking about with security is cloud security, the monitoring of the security of cloud-native applications. It isn't endpoint security, and we're not entering the firewall market to compete with the legacy vendors. I'll leave it at that, and Pierre, Prashant, and team will go over it later.
That's sort of the platform, to sum it up. Some of you have been following this for some time; others are newer to the story, so we wanted to go over it. With that as groundwork, what are the growth opportunities and investments we're making, and how do we decide on those? Starting with a summary: there are some real growth drivers that are persistent and long-term that we're investing against. First of all, I talked about the secular tailwind of digital transformation and cloud migration. One of the big growth drivers is that growth itself and Datadog's increasing penetration in providing a platform to monitor the services being delivered.
In addition, we are relentlessly investing in expanding the products and use cases, and we'll go over that, both in the observability space and in new markets beyond observability, some of which are security and CI/CD, in addition to things you can see in the platform around alerting and workflows, et cetera. First, the long-term drivers. As we've said, this is a big market and a rapidly growing market. For this portion of the presentation, we're gonna use publicly available metrics that, in this case, are put out by Gartner.
You can see that the forecast of global cloud spend is expected to grow to over $1 trillion, and it's been growing for a while, and the growth rate continues to be very strong. Like we said, we believe we're still in the early days. If you look at the line on top of that, you'll see that we still have only 11% of IT spend on the cloud. Low penetration, and we think lots of room to grow. Observability itself, also a very large market. This is also Gartner information, and we put that in our financial disclosures. We believe that the observability market today is $41 billion growing to $62 billion in 2026. Large and growing.
We think we're focused on the vast majority of this market, and notably the faster-growing portion of it, which is the DevOps and cloud part. That's a very big TAM that we're investing against. At the very core of it, what you saw today is not possible unless we are investing very rapidly in innovation across the market. One of the metrics we look at is the growth of our non-GAAP R&D expenses, both in total and as a percentage of revenues. You can see from this chart that over many years we've been maintaining and growing this investment. This enables the kind of innovation that you saw today.
Now, it's our efficient business model, in terms of our frictionless go-to-market, the way the platform's designed, et cetera, that creates the efficiency that enables us to invest at this pace, to be a leader in the amount and growth rate of R&D we're spending, and yet still be a profitable, cash-flow-generative business. Everything I showed before was about our organic investment, but we've also complemented, and in fact accelerated, our investment inorganically. If you look at this slide, and I think this is the first time we've put it all on one slide, you can see how we've been investing inorganically over time. It shows nine acquisitions that we've made over the last five, actually six, years.
I wanna highlight the rapid pace here, but also what we're doing. We are not acquiring companies to run as standalone platforms. One of the key competitive advantages of Datadog is that it's on a common platform. What we are doing is largely acquiring talented R&D teams. What you'll find is that we spend the first 12 to 18 months replatforming, putting the acquired technology on our platform in order to continue to deliver that integrated experience. You start to see revenues being recognized after that 12-to-18-month period. We've done that very successfully in the past. The newer ones, from 2022, CoScreen, Hdiv Security, Seekret, et cetera, we're still in the process of putting on our platform.
Finally, I wanna mention one of the great competitive advantages that have come from our acquisitions: we've been able to partner with and bring into Datadog fantastic entrepreneurs. They've stayed with us, been part of the vision, been part of scaling the Datadog R&D organization, and become very important leaders. Many of these acquisitions have turned into very large businesses at Datadog. For example, Renaud Boutet joined as part of the Logmatic acquisition in 2017, and today he's the head of our product organization in APM and Logs. You will meet Pierre later, who came in with the acquisition of Sqreen in AppSec, in 2021 I believe, and today runs the product side of our cloud security business.
This has been a very important way in which we've expanded the talent base. Those two things together, the organic investment and the talent and capabilities brought in through acqui-hires, and you've seen this chart over the years, have really enabled us to go from starting out in 2010 to 2012 as an infrastructure monitoring company to expanding into application, logs, digital experience, et cetera. All of these are individual products knitted together as part of one platform, used by everyone, deployed everywhere. This history of innovation has kept up over many years. This is a little more of a graph, and when you step back, you see the pace. The pace has actually picked up.
As we have been announcing more and more functionality, the platform has expanded, and you see that every year we've been able to increase the velocity of product introductions. Are we effective at this? This is a slide we've shown that looks at a metric of how clients are using the platform: are they buying bigger parts of it? The answer is a resounding yes. Starting from Infrastructure Monitoring, now almost 80% of our customers are using two or more products, 37% are using four or more, and 14% are using six or more. That tells us we have been effective in listening to customers, increasing the functionality, and getting adoption. It also tells us we have a lot of opportunity.
If you look at what's happened and the number of customers using the full platform, there are a lot of cycles still to go through: existing customers landing with more products, and those we have using more products. Finally, to reflect back on where I started with the keynote and Dash, and to sum it up for everybody: everything that was announced today, which is really what we've been working on for the last few quarters, is quite a long and comprehensive list, across a number of ways in which we feel we're providing more value to clients. That's sort of what I wanted to talk about. Now I'm gonna have Yuka come up to the stage.
Oli's gonna come to the stage, and we're gonna basically open it up for questions to both everyone in the room and online. Yuka.
Thank you. Can you see?
Mm-hmm.
Okay. Everybody, can you hear this mic on?
Mm-hmm.
Okay, great.
That's on.
Okay, we're gonna do a bit of Q&A now. You know our CEO and co-founder, Olivier Pomel. You've just heard from David. As a reminder, we are in a quiet period. We will be announcing our next earnings on November 3rd. No questions, please, about near-term financials, near-term demand environments, guidance, what have you. We will address that in a couple of weeks. We are gonna take questions from our in-person audience. My colleague, David Fisher, has a handheld mic, so I'm gonna ask you to raise your hands to ask a question. For those of you joining us on the webcast, there's a chat window there for you to input questions, and we will take questions from the webcast audience as well. We will start with Sanjit. Please go ahead and ask your question.
Yeah. Thank you for taking the question. Sanjit Singh, Morgan Stanley. I was watching the keynote, Olivier and David, and once again, like past years, really impressed with the pace of product innovation. But this year is, like, a little bit of a different environment. I was looking at the keynote through two perspectives. Your users, right, who are probably giddy at some of the stuff that you announced, and then maybe the managers and the business leaders who look at this and say, "Wow, this is a lot of stuff. How do I manage this all?
How do I have to potentially pay for all this?" If you look at all the stuff that you're innovating on, and you look at what the business is trying to solve for in this new regime around, I wanna do things cheaper, faster, more secure, do you feel like that the pace of innovation's way ahead of, like, your customers' ability to consume and ultimately monetize just with, you know, with all the announcements that we've seen today?
Thanks. Can you hear me by the way?
Is the mic working okay?
No.
Are you on? Is it on?
Just taking this.
Okay. Yeah.
All right. Here's the thing. In general, the problem space is still growing faster than the solutions we're providing to go after it. Whatever we're shipping corresponds to real problems our customers have, and it's valuable for them to use our products to solve those problems. That's generally the order in which those things come. Now, you saw a few themes in the keynote this morning. One overarching theme is that we are covering a broader set of use cases from what used to be separate, distinct categories. Part of the particular value we provide to our customers is that we help them consolidate what used to be different areas of spend, different tools they need to manage, different things they need to integrate themselves.
Like, they had to turn themselves into system integrators for that. We help them consolidate all of that under one roof, which is more efficient for them, and in times of crisis, or a perceived or feared crisis, and I think different parts of the world are in different stages when it comes to that, this is something that customers want and ask for. Another thing you saw is that we're also providing more products to help our customers improve their own efficiency. We talked this morning about Cloud Cost Management, which is something pretty much all customers need. You've also seen the Intelligent Test Runner, which helps improve developer productivity and helps them spend fewer CPU cycles by running fewer tests.
There are more initiatives of the sort that we're not even showing in the keynotes, around profiling and helping customers get insights from Watchdog to improve the performance of their programs. These are all things that help our customers get more efficient. At the end of the day, what we want customers to come away with is a situation where what they pay Datadog has an extremely high yield when it comes to the efficiency of their cloud infrastructure and the efficiency of their engineering teams, both of which are by far the largest amounts of spend for them.
Next question from the audience. Great. Brent?
Hi, it's Brent Thill with Jefferies. Olivier, in three years, what are your aspirations in security? What are the milestones you're setting? What do you look like in security? And David, can I ask a quick follow-up?
Yeah.
I know we're not talking about the quarter, and I'll abide by Yuka's rules, but assuming the macro headwinds continue to get stiffer, what steps are you putting in place, if any, to cut through what we're starting to see in the overall environment? Many software companies are seeing things slow. What's your action plan? How are you thinking through this over the next year?
You wanna go first then?
Yeah. I'll start with security. Look, the problem is gigantic in security. It's a problem that's been running away from our customers for a very long time. They all have bought, you know, generation after generation of security tools, and still the problem is getting bigger and is not getting solved any faster. Our ambition is to solve that problem for them, at least when it comes to cloud security, securing their homegrown applications, as well as the infrastructure they're running on and all the processes that surround that. Our strategy for that is to lean on the largest teams, and we'll hear more, you know, from Prashant a little bit later on that topic.
We're leaning on the largest teams, which are developers and DevOps engineers, to really extend the yield of the security specialists, who tend to be few and far between in enterprises. We know we have a long way to go. There's a lot more product we're building internally, and a lot more we know we need to build. What you saw this morning is that we are progressing very fast in giving new capabilities to our customers. Today we have a very good offering for the infrastructure side, which is Cloud Security Management, and a very good offering for the application side, which we talked about this morning and will talk about again in a bit.
We're also getting to a point where we go past just observing what's going on and also help our customers take action directly. Again, we still have a long roadmap, but we think we are in a unique position to solve this problem, and that's why you see the investment you see there.
David?
Yeah, as far as choppiness in the market and what might come ahead: number one, we're really focused on the long term, as we talked about. We think there's a very long, huge opportunity here. There might be choppiness along the way, but we really believe in the long-term opportunity. In terms of how we might be affected, the things we look at are, for instance, our gross retentions, to see how sticky we are. Based on what we've said through our last earnings call, when cost-cutting does happen, it doesn't appear to affect customers keeping Datadog, because if you're delivering products to clients, you need to monitor them, and you need something like a Datadog. That's the stickiness.
As Oli mentioned, it's too early in the cycle to know, but there are some efficiencies in automation and bringing tools together that complement the adoption of a platform like Datadog. Of course, there may be choppiness, and there are a couple things I would note. One is we've always been a company that's lived within our means. We've invested in a very methodical, very consistent way that isn't boom or bust. That's how we've structured the investments in the company, and we intend to continue, given the long-term opportunity.
Because of the efficiencies we have in the model we talked about, the go-to-market, et cetera, we're able to make very strong investments in R&D on a continuous basis, and avoid some of the problems other companies may have who don't have that cost structure or that history of investing rapidly while living within their means.
I'll just add two points. One: remember that we only consumed a bit more than $20 million from the inception of Datadog to going public. That's how we've run things.
Mm-hmm.
As of today, you know, we're also hiring as many engineers as we can, as quickly as we can, so we can keep up this level of investment and make those developer charts that David showed a little bit earlier grow a little bit steeper. We know we're still only scratching the surface of the problem space we're facing, and we're in full investment mode on that side.
I'm gonna read a question from one of our webcast participants. It's Muji from Hypergrowth: please talk about the opportunity with workflows, as you seem to be moving into workflow automation akin to Monday or Asana, but more for infrastructure.
Yeah. This is also one of those themes from this morning's keynote: it's really about taking action. It's fairly new for Datadog. We've been focused on observing and monitoring initially, and now we're starting to close the loop and add the parts where we respond, whether that's running workflows, running tests more efficiently, or blocking users from a security perspective. We're closing the loop there. For the workflow automation product, the two use cases we have in mind right now are DevOps automation and security automation. That's what we started with. In the demo this morning, we showed a DevOps use case, then we had a customer show a security use case. These are the two we're investing in.
We're not in it to replace monday.com and the more general-purpose workflow automation products. We're not here to automate customers' front office or whatever they have that automates their customer data. That's not what we're doing. We are here to work on use cases that are grounded in our data and that are most relevant to developers, DevOps engineers, and security engineers. Those come with some very specific constraints, because they deal with sensitive data and very large data volumes, and they have to run in specific environments: some are going to run server-side in the cloud, some on-premise in secure environments.
There are a number of problems very specific to those workflows that we address with our workflow product.
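The shape of the monitor-driven automation Olivier describes can be sketched in a few lines. This is a toy illustration only, not Datadog's actual workflow engine; all names here (`Alert`, `Workflow`, the remediation steps) are hypothetical:

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Alert:
    """A hypothetical monitoring alert that triggers a workflow."""
    monitor: str
    resource: str
    severity: str

@dataclass
class Workflow:
    """An ordered list of remediation steps, each acting on the alert's data."""
    name: str
    steps: List[Callable[[Alert], str]] = field(default_factory=list)

    def run(self, alert: Alert) -> List[str]:
        # Execute each step in order and record what was done.
        return [step(alert) for step in self.steps]

# Hypothetical DevOps remediation steps, parameterized by observability data.
def restart_service(alert: Alert) -> str:
    return f"restarted {alert.resource}"

def page_oncall(alert: Alert) -> str:
    return f"paged on-call for {alert.monitor} ({alert.severity})"

remediate = Workflow("high-error-rate", steps=[restart_service, page_oncall])
actions = remediate.run(
    Alert(monitor="checkout.error_rate", resource="checkout-svc", severity="critical")
)
print(actions)
```

The point of the sketch is that every step is driven by monitoring data (the alert), which is what distinguishes this kind of automation from general-purpose work-management tools.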
Okay. An in-person question. Can you raise your hand, please? Okay, great.
Hi. Sterling Auty, MoffettNathanson, SVB. You had the slide with all the new products and functionality that you announced today. Can you give us a sense of which of those are gonna be included in what customers are already paying, and which are gonna be incremental opportunities in terms of revenue generation?
Well, it's hard to give a general answer to that. I would say it's about half and half, you know, new capabilities versus new products, from what we have announced. Some of them are in beta, and we haven't finalized packaging for those yet, but we know they're going to be separate SKUs in some form. For some of them, we also started charging some customers already, as we've developed those products hand-in-hand with customers.
Another in-person, please raise your hand. Yeah.
Thanks. Andy Nowinski with Wells Fargo. Last quarter, you talked about cloud migration slowing and the impact on Datadog. Then today you launched a dozen or so new products, some of which are likely new revenue streams, including some of the shift-left modules, the workflows, and cloud cost management. It seems like Datadog's growth is becoming more and more decoupled from cloud migrations. Regardless of the pace of those migrations, I'm just wondering how you're thinking about that decoupling.
Yeah, I just wanna make sure and clarify: we didn't talk about slowing cloud migrations on the last earnings call. What we talked about was a view, more concentrated than in previous quarters, of customers optimizing what was being used. We said a couple of sectors had been affected, like consumer discretionary. I just wanna clear up in advance that we didn't talk about a slowing cloud migration.
In fact, we actually see exactly the same demand from customers in terms of moving to the cloud, if not more demand, in terms of starting new migrations and getting into cloud environments. What we did talk about as slowing at certain levels, which David alluded to, was on certain volume-based products, such as logs, for example, where there can be some optimization without actually changing your workloads.
All right, I'm gonna read another question from our webcast participants. This is from Fatima Boolani of Citi: the portfolio expansion is leaning more and more towards shifting left. How are you aiming to solve the monetization obstacles common with developer tools, given end-user fickleness?
We're shifting left because there are many more developers than there are ops engineers, and many more engineers than there are security engineers; that's the center of gravity of most enterprises. We already have a lot of those developers' time and attention because they use us to deal with their production environment. We're trying to integrate what they do with us in production further upstream, with how they're coding, how they're testing, and how they're shipping their code. We're going up in that order, pretty much: we start with production, then we go up to deployment, then to testing, and that's what you've seen today. In terms of fickleness, I think it depends what you do.
Like, if you look at developer environments and editors, yes, that's very fickle, and it's not necessarily a market we're interested in. Developers are going to use many different kinds of those, and it's a matter of personal preference, and that's fine, but there's also no value in providing a unified platform there. For everything that has to do with integrating with other developers' work across the enterprise, there's value in having a platform, and that's what we're shipping.
Okay. Let's take one more question for this Q&A session. Okay, great.
Okay, great. Thanks. Gray Powell with BTIG. A question on the security side and your go-to-market there. What kind of incremental investments should we expect you to make in sales and marketing as you try to penetrate the security space more deeply? And are there any material changes we should expect as you go after that market?
It's possible there will be changes; we just don't know yet. We're trying a few different things, but right now the vast majority of our security customers and usage are acquired in a bottom-up fashion, very similar to the way the rest of the Datadog platform has been adopted. We're not completely married to that approach. We know we might need to do some things differently. We've tried a few different things, and we're trying a few more, but nothing we're doing is at significant scale yet, so there's nothing to announce.
Okay, great. We are going to stop this Q&A session. Don't worry, there are gonna be two more. I'm gonna invite Oli and David to leave the stage. Thank you very much.
Thank you.
Thanks.
Okay, great. I need to click this. With that, we're going to turn to our next section. We're gonna focus on our observability platform and products, and we really just wanna show you the platform in action. To do that, joining me on stage is Omri Sass, our group product manager for Application Performance Management. Omri.
Thanks, Yuka. Can everyone hear me okay? Is the mic working?
Yeah.
All right. Thank you. Hi, everyone. It's a pleasure to be here. My name is Omri, and I'm one of the product managers who work on our APM suite. I've been with Datadog for about three and a half years, and throughout that time I've focused mostly on APM, which has given me exposure to the broader suite of observability tools. That's what I'm gonna talk to you about today. We're gonna have three different sections. In the first one, we're gonna talk about how we take an issue, however we detect it, in this case a user complaint, and how our users turn it into an action they can take to resolve it. After that, we'll talk a bit about where most Datadog users spend the bulk of their time.
Our most popular features are dashboards. Then finally, we'll talk a bit about how AI and ML can help troubleshoot faster. For the first part of the demo, I wanna introduce two concepts that we have. The first one is monitoring for front-end applications. These would be web applications or mobile applications. In the future, we might talk about things like, you know, smart TVs. That's where users actually engage with their applications. That's monitored with our Digital Experience Monitoring suite. Specifically today, we're gonna talk about RUM, or Real User Monitoring.
That in Datadog is seamlessly integrated with the monitoring tools for our back end: application performance monitoring, or APM, which gives us the application view, and infrastructure monitoring, which gives us the health of the actual machines or containers or anything like that folks rent from the cloud providers. We'll see how APM, infrastructure monitoring, and real user monitoring all play in tandem. Across both front end and back end, we'll see how log management helps our users identify root causes. I think we need to switch mics. There we go. Before we actually jump into Datadog, I wanna start by introducing Shopist. Shopist is a fake e-commerce application that we have. It's available on a website called shopist.io. If you go to that website, you'll be able to see exactly what I'm seeing here.
This looks and behaves like any other e-commerce site. In this case, it's a furniture store where we're able to do things like identify a couch that we might be interested in, add it to our cart, apply a discount code for $100 off, and try to check out. Now, obviously, our expertise is in software, so if you try to do this on our website, we're not gonna ship a sofa to your location. But let's see what happens when we try to check out. In this case, we get an error. Something went wrong, and it's not very informative.
Usually what happens in these cases is the first thing that happens is that users try to click all over the place, and then they click the Return to page, and they try to figure out how to reach out to support. Sometimes there would be a little chat icon that allows you to chat directly with someone, or it would be an email exchange. I don't know when's the last time anyone here had to go through that process. I tried to book travel recently, and when I was trying to type in the three-letter security code for my credit card, it actually led me to a blank page. I reached out to the support. They asked me for a step-by-step reproduction of what I did. In the industry, we call that repro steps. I'm familiar with it, no big deal.
Gave them that. They started asking me for screenshots, and that was slightly odd because I told them, "Hey, I got a blank page." Again, they were very adamant that they wanted a screenshot. I gave them a screenshot of a blank page. It was a fairly frustrating ordeal, and it's very indicative of a lot of exchanges that folks have with kind of support. There's a lot of back and forth. With that in mind, I wanna jump into Datadog, and we will talk about what support personnel and then engineers do in Datadog to make sure that we don't have any of that friction. This is Real User Monitoring. I already mentioned that on the slide. This is an entry point for a lot of investigations and a lot of our users.
You'll see at the top I have this search bar where I'm able to type in anything that I might be looking for. This is the same experience that we have throughout Datadog. It's a very consistent user experience, so anyone who lands on Real User Monitoring or on Log Management or in APM gets the same experience. It's always familiar. They can just jump in and look for anything that they might care about. In this case, for example, in an exchange with support, we can look for an individual user. Alternatively, in this case, I wanna make sure that we look at the entire list. Every one of the rows that we have here is a user session, and if we click into any one of them, we'll be able to see a lot of information about that session.
Where is this user from? How many pages have they visited? How many buttons have they clicked? We have this little frowny face that says that this user was frustrated, and we'll talk a bit about what that means in a second. Underneath all of these, we have the timeline. We know exactly when a user loaded a page, when they clicked anywhere, navigated to another page, and so on and so forth. From our perspective, if we're trying to provide a good service, we don't need to ask our users when they reach out to support for a step-by-step reproduction. We actually have slightly more than that. We have the ability to replay their session. Session Replay is a pixel perfect video reproduction of the user's interaction with our website.
If I click on play, what we'll see is the exact actions that a user has taken through the same application. I'm not gonna do all of this, but we have that capability. We no longer need to ask users for every individual step. We can just keep track of it. On the right-hand side, I have that timeline again that lets me center on exactly what I'm looking for instead of just following the video. In this case, I can see that we have that frowny face that says that a user hit an error. This is a frustration signal, and that's something that Datadog identifies on behalf of our users, so they don't even need to wait for a complaint. Remember how I clicked around on the error button or the error message?
When that happens, Datadog is able to identify a rage click or in what we're looking at here, an error click. What our users do in these cases is they go to the page load that happened right before, and they click into the details. Here we have all sorts of information that's relevant to identify what was going on in the front end. For example, we have what we call Core Web Vitals. These are the same things that our users use to identify how long it takes for a page to load, right? These have an impact on user experience or SEO, but I don't wanna spend too much time here. This part of the experience is tailored for front-end engineers who are trying to troubleshoot their front end.
What I wanna do is I wanna show you how easy it is to pivot into monitoring for your backend. We do that with traces. Traces are part of our APM solution, and if I click on View trace in APM, we're gonna get exactly the same view on a bigger screen, but we've also pivoted from Real User Monitoring into APM. It looks and feels exactly the same. It doesn't matter where we start, what type of user we are, we always get one very consistent user experience. Now, putting on my APM hat, this is what we call the flame graph. The flame graph is a visual representation of how a request propagates through a stack. Not gonna spend too much time on that, but I will say it has some very clear visual signals for backend engineers.
What they do is they look at the bottom part of the graph that has an error associated with it, right? What is the lowest part that has a red marking on it? That's usually the root cause of whatever issue is happening that then gets propagated upwards all the way up to browser.request. This is the user who's experiencing our error, right? We can see all of the information from Real User Monitoring and all of the information from APM in one place, and we have very quick to use troubleshooting tools. Here, I'm able to see information about this particular part of the request. In this case, I know what service on the back end it's coming from, and I have a lot of associated metadata generated in APM.
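The "lowest red span" heuristic Omri describes, where the deepest erroring node in the flame graph usually points at the root cause, can be sketched in a few lines. This is an illustrative toy, not Datadog's implementation; the span names and tree shape are made up:

```python
from dataclasses import dataclass, field

@dataclass
class Span:
    name: str
    error: bool = False
    children: list = field(default_factory=list)

def lowest_error_span(span, depth=0):
    """Return the deepest erroring span (and its depth) in a trace tree.

    Mirrors the heuristic from the demo: the lowest span with an error
    marking is usually closest to the root cause.
    """
    best = (span, depth) if span.error else None
    for child in span.children:
        found = lowest_error_span(child, depth + 1)
        if found and (best is None or found[1] >= best[1]):
            best = found
    return best

# Hypothetical trace shaped like the demo: an error originates in a
# low-level database call and propagates up to browser.request.
trace = Span("browser.request", error=True, children=[
    Span("checkout-service.charge", error=True, children=[
        Span("db.query", error=True),
        Span("cache.get"),
    ]),
])

root_cause, depth = lowest_error_span(trace)
print(root_cause.name)  # db.query
```

The error propagating "upwards all the way up to browser.request" is exactly why the search goes to the bottom of the red chain rather than the top.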
I also have a seamless integration with Infrastructure Monitoring, where I'm able to see what machine is actually running the service, what is its overall health, things like CPU load, memory, or disk usage. The same types of metrics that you might think about when you're looking at your own laptop as it's slowing down. In addition to that, we also bring in every individual log line that was generated during this request. This is a very nifty capability because if you think about the world before Datadog, this used to require multiple tools and several minutes of querying just to identify these individual logs. Then finally, I wanna make sure that I mention code hotspots. Code hotspots are how we represent Continuous Profiler in APM.
I think Ollie mentioned it earlier in the Q&A, where he said that the tools that we generate are here to make sure that users can optimize every part of their technological stack. What we're looking at here, again, I won't show you all of the nitty-gritty, Continuous Profiler lets us see exactly what line of code is slowing the system down. For an experienced backend engineer, this thing on the left, synchronization for 18 seconds, tells us that we have what's known as a deadlock. A deadlock is when two lines of code prevent each other from operating, and what tends to happen in that case is an error after a while. We now know what we need to do. We should probably go into Continuous Profiler and figure out what lines of code we need to fix.
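The deadlock Omri describes, where two threads each hold a lock the other needs, can be reproduced deterministically in a small sketch. This is a generic illustration, not code from the demo; the acquire timeouts stand in for the long synchronization time the profiler surfaces:

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
a_held, b_held = threading.Event(), threading.Event()
done = threading.Barrier(2)  # keeps each lock held until both attempts finish
results = {}

def worker(name, mine, mine_held, other, other_held):
    with mine:                            # grab our own lock first...
        mine_held.set()
        other_held.wait()                 # ...wait until the peer holds theirs
        got = other.acquire(timeout=0.2)  # then try the peer's lock: blocked
        if got:
            other.release()               # defensive; unreachable in this setup
        results[name] = got
        done.wait()                       # hold our lock until both attempts end

t1 = threading.Thread(target=worker, args=("t1", lock_a, a_held, lock_b, b_held))
t2 = threading.Thread(target=worker, args=("t2", lock_b, b_held, lock_a, a_held))
t1.start(); t2.start(); t1.join(); t2.join()
print(results)  # both acquisitions fail: each thread stalls on the other's lock
```

Without the timeouts, both threads would block forever, which is the "synchronization for 18 seconds" signature an experienced engineer reads off the profiler.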
Alternatively, we should check if this is a new piece of code. If it's a new piece of code, what we should do is revert back to the last known good state, the most common thing that happens for a faulty code deployment. Datadog has tools that kind of help us figure out exactly what to do. It gets very technical very quickly. I don't wanna spend too much time on it, but I do want everyone here to leave this part of the demo realizing that Datadog offers our users the ability to seamlessly navigate through Datadog, through all of our products, regardless of what they're looking for. Think back to what we just did. We saw how a user fails to do a checkout. This is a business-critical operation, right?
We are losing money by not allowing a user to check out with the $1,200 couch. We figured out the root cause of it. We can revert back a faulty code deploy within minutes. If we go back to the presentation for a minute. There we go. We'll take a look at the slide that David talked about a bit earlier that has all of the various products from Datadog. Clearly, I'm not gonna go through every one of those. There are quite a few of them today. I wanna make sure that we focus not only on the individual products, right? We talked about Real User Monitoring, Infrastructure Monitoring, Log Management, APM, and Continuous Profiler. I wanna take a moment to focus on the platform.
What are the things that are shared across all of Datadog's products, that all users can access, to make sure that they have the best experience tailored to their own needs? With that, we're gonna talk about two main components. The first one is dashboards, which is, like I said, by far the most popular part of Datadog. It's where most of our users spend most of their time. Then we'll talk a bit about Watchdog, which is our AI/ML engine, and how AI can help troubleshoot faster. If we go back to the demo. There we go. Here we have an example of a dashboard. This is one of the dashboards that we created for the backend and business operation of the same e-commerce application that I showed you earlier. For Couch Cash, we have information like its overall uptime, right?
This is what we call an SLO, a service level objective. This makes sure that we have a big green or red number that tells us whether we should focus on feature work or, alternatively, on reliability. If the situation wasn't as optimistic as it is here, where we have 100% in the past 7 days and 100% in the past 30 days, if it had been lower than our targets, we probably would've diverted our engineering capacity towards reliability. We'd have done what's known as a reliability sprint, or even a quarter of just focusing on our ability to scale our operation. Under that, we have information like active monitors. Do we have anything that's currently alerting? Do we need to take immediate action? Under that, we start to bring in additional business KPIs.
This speaks to the power of Datadog's platform, where we show you data not only from our own products, right? All of the data that we saw in APM or Infrastructure Monitoring. We also bring in data from the 600-plus integrations that I think Yuka mentioned earlier. We bring in, or we allow our users to bring in, custom metrics, anything that they might care about to help run their business. In this case, we see the overall throughput, average requests per second, alongside a business metric like the overall furniture sales. If we dig deeper into those, we'll be able to see how something like an increase in latency for an application is correlated with a decrease in conversion on our checkouts. We can quantify exactly what happens when user experience degrades or improves.
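The latency-versus-conversion correlation described here boils down to comparing two time series. A minimal sketch, with entirely invented hourly numbers, using a hand-rolled Pearson coefficient:

```python
from math import sqrt

# Hypothetical hourly samples: p95 checkout latency (ms) vs. conversion rate (%).
latency_ms = [210, 230, 250, 400, 650, 900, 620, 300]
conversion = [5.1, 5.0, 4.9, 4.2, 3.1, 2.4, 3.3, 4.6]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson(latency_ms, conversion)
print(round(r, 2))  # strongly negative: conversions drop as latency climbs
```

A strongly negative coefficient is what lets you "quantify exactly what happens when user experience degrades or improves."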
Our users are able to do all of that, and they can very easily create any of these nifty visualizations based on any type of data, not only the over time metrics or big numbers that we've been looking at. From APM, we have information like the service map that allows us to identify the status or health of downstream dependencies for our services. If we scroll down, we'll be able to see things like our host map. What is the health of every machine that underlies our services? We can also do things like leverage non-deterministic functions like forecasting to try and look into the future and identify the expected behavior of all sorts of metrics. Everything that we see here on the dashboard is also available as a metric, so users don't even have to go into Datadog.
They can get alerted if any metric goes into a negative state. We're also able to bring in individual log lines here, or even use RUM data to identify what's going on with our user experience, with something like the funnel analysis that helps us run analytics queries and identify where we might have attrition in our checkout process. We can dig into anything here and jump from the funnel through RUM into individual sessions. This dashboard is wonderful, but it was also built by folks here at Datadog, and clearly, we know how to build nice-looking dashboards. I wanna make sure that everyone here understands how easy this is to build for any user, as it takes just a second of dragging one of these widgets into the dashboard in order to create it.
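The funnel analysis mentioned above amounts to counting how many sessions survive each step and finding the biggest drop. A minimal sketch, with made-up step names and counts:

```python
# Hypothetical counts of sessions reaching each step of the checkout funnel.
funnel = [
    ("view_product",  10000),
    ("add_to_cart",    3200),
    ("start_checkout", 1500),
    ("purchase",        900),
]

# Conversion between consecutive steps; the smallest ratio marks the
# attrition point worth investigating.
for (step, n), (nxt, m) in zip(funnel, funnel[1:]):
    print(f"{step} -> {nxt}: {m / n:.0%} continue")

worst = min(zip(funnel, funnel[1:]), key=lambda p: p[1][1] / p[0][1])
print("largest attrition after:", worst[0][0])  # view_product
```

In the product, each step in the funnel stays clickable, which is the "jump from the funnel through RUM into individual sessions" pivot.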
The actual creation flow is super simple, but we wanna take it a step further, and I think we heard some questions earlier about Powerpacks. Powerpacks are how we enable power users to share knowledge with their constituents, right? This allows us to say, "Hey, every time someone rolls out a new microservice, you really should have this service overview set of widgets." That allows us to say, "Okay, there are seven or 10 or 17 widgets that you should always look at when you roll out a new service, and that's unique to our organization." Hey, Stephanie just made this Powerpack for us, and we'll be able to drag that in completely. Then finally, I wanna make sure that I very briefly mention apps.
We now have a third-party marketplace that allows companies like Rookout or PagerDuty or LaunchDarkly to create widgets within Datadog, allowing their users to leverage their own data with their own user experience alongside Datadog's data where most of their users are already. Those are dashboards. This is where most of our users spend their time, and that leads me to one last bit of this demo, Watchdog. I've mentioned it a couple of times. This is where we do not only traditional types of monitoring, we also leverage non-deterministic algorithms, AI and/or ML, if you'd like to go down that route. That lets us do quite a few things. I'm gonna focus on one here, and that is Watchdog alerts.
These are alerts that we're able to generate with no additional setup for our users and no a priori knowledge. No one needs to know what the baseline behavior of a service is. Watchdog establishes that on our users' behalf by looking at all of the billions of data points that we process from metrics and from logs, and then identifies any anomalies in key metrics, right? We're able to say things like latency is up on an individual endpoint for an individual service, and that's still ongoing. It might be relevant. We also try to make sure that we never create noise for our users. For every one of these cases, we correlate them to anything that might be impacted by the same anomaly, right? We're able to say multiple services have a cascading failure, right?
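The no-setup, no-a-priori-knowledge detection described here can be approximated with a rolling z-score: learn a baseline from recent points, then flag points far outside it. This is a toy sketch, not Watchdog's actual algorithm, and the data is invented:

```python
from statistics import mean, stdev

def anomalies(series, window=12, threshold=3.0):
    """Flag indices whose value sits more than `threshold` standard
    deviations above a baseline learned from the preceding `window` points."""
    flagged = []
    for i in range(window, len(series)):
        base = series[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma > 0 and (series[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Hypothetical per-minute latency for one endpoint: steady, then a spike.
latency = [100, 102, 98, 101, 99, 103, 100, 97, 102, 101, 99, 100, 400, 405]
print(anomalies(latency))  # [12, 13]
```

The key property, as in the transcript, is that nobody had to declare what "normal" latency is; the baseline is established from the data itself.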
We saw that in the flame graph, where a failure went up the flame graph. The same thing is represented here, where a single alert represents a failure that happens across multiple places. Watchdog does something even cooler than that. It tries to identify the root cause of an issue. In cases where we have this pink banner for root cause analysis, or RCA, we're able to identify exactly what's happening, down to an action that we can take, right? In this case, we know that there was a new deployment on some address service. The address service is the thing that resolves user addresses to make sure that we know where to send the sofa, right? If you've ever typed in an address and it says, "Hey, wanna use the suggested address?", this is what that service does.
Someone rolled out a new piece of code, a new deployment there, that led to errors and latency in one place, which themselves led to a failure in four other places. Through correlating Real User Monitoring data, we're able to say not only what services are impacted, but who are the individual users that are impacted. We can see that 163 users are impacted by this bad code deployment, and they're impacted on two separate views, on the checkout page and in the cart. This should be enough for a user to go and remediate. At this point, a user knows that they need to go to the address service and revert that code deploy. Then offline, they can figure out what part of the code caused it. First things first, go back to the last known good state.
If users are not content with that, they wanna see additional evidence, they have everything that they need here. When was this code released? Was there a remediation that already happened? What are the critical failures that happened? From which we can always jump into APM or Log Management or infrastructure, and who was impacted. With that, I wanna make sure that we kind of wrap things up. In this part of the demo, what we looked at is what does a dashboard look like, and what are the data sources that our users are able to bring into their dashboard, where they can do things like correlate the health of their business with things like revenue or analytics data, with everything that's happening on their front end or back end, the more technical things or the operational data.
We looked at Watchdog and how AI and ML can troubleshoot or help our users troubleshoot faster. With that, I think I'm done.
Okay. All right. We're going to go to our second Q&A session. This one is going to be focused on our observability platform and products, as it relates to the demo that Omri showed you. Again, just a reminder, and maybe especially this time: we will report on November the third, so we're not talking about near-term financials or near-term demand. We'd love to focus on product and platform in this Q&A. Joining Omri is our president, Amit Agarwal. We'll start with questions from the in-person audience. Can you please raise your hands, and Fisher will come to you with a mic. Anybody with questions? Great.
Great. Thanks. It's Steve Koenig from SMBC. Thanks for the demo and the product talk. You know, we were talking earlier about where you were seeing short-term choppiness, and it had to do with some of your use cases outside of your core. I'm also wondering, and this is both a short-term and a longer-term question: you know, maybe some of your biggest competition is customers trying to do things on the cheap themselves, with open source, with hyperscaler tooling. What arguments are there, from a product perspective, to counter that competition? And maybe this is a related question: to what extent do you think you'll be leading less with infrastructure monitoring in the future, basically? Thanks very much.
Thank you. Thank you. That's a great question. What you saw in the demo, you know, our industry is full of PowerPoint presentations with chevrons pointing at each other and big marketecture diagrams, right? Like, you guys all know that, and everyone gets really jaded by that. That was one of the reasons to bring Omri on to show you exactly what we do in the product. What you saw in the product today is a combination of things. Like, you know, ultimately, what your customers want to solve is the problems of their customers. In this case, if you are an e-commerce client, you want to solve the problem of your customers who are trying to check out, buy furniture, and so on.
You're not thinking, like, you know, at a business level, whether it's an infrastructure problem or a code problem or a user problem or some other issue with something else. What you saw in all of this was actually maybe six or seven or eight different product categories that have been created within observability, where different people put together different individual tools that you run and so on, all in one single pane of glass. Which actually is not like a marketecture diagram, but things that work seamlessly from end to end, allowing you to troubleshoot a problem much faster than looking through one thing, and then another thing, and then another thing.
Think of it like a pivot table in an Excel worksheet, which I'm sure everyone here understands, and being able to drill down, drill up, drill down, filter, et cetera, et cetera, as opposed to looking in 50 different places to try and figure out what's going on. That is the beauty and power of what you saw today. That experience is extremely difficult or impossible, I would venture to say, to build with individual open source type of products. Hope that answered the question.
Okay, great. Thank you. We'll take another in-person. Fisher, maybe Catherine.
Amit, this is a question for you because you've been at Datadog for a while, and I was wondering just the pace of innovation continues to accelerate. I'd love your view on organizationally how you've been able to drive that. Is it you've changed the structure of the organization in terms of who reports to whom? Or is it something where the more users you have, you just have more places where you have problems to solve? Any perspective on that would be great. Thanks.
We learned this lesson very early on at Datadog. Let me give you some background that probably not a lot of people know. When we were a very small company, when we had just started out, we were like 10 or 15 engineers. Total people in the company, maybe 10, 15 people. We were trying to do more integrations, right? Like, we were trying to do more infrastructure integrations. The same engineers, the same product managers that were working on many features were working on these integrations as well. There was always this struggle between, "Hey, what should I work on first and what should I work on next?" We learned from that experience that the only way to grow, and grow into all these adjacent areas, is by cell division. We keep dividing.
Today, to give you a sense, and this is not MNPI. Today, to give you a sense, we have about 120 to 130 product managers. Product managers alone, which are all backed by engineering teams, pods of engineering teams that each work with these product managers. Now, each of these product managers has a different area of focus. Like for example, Cloud Cost Management, different product team. We spin it out. We don't say, "Okay, you're gonna do double duty on Cloud Cost Management and Infrastructure Monitoring." We say, "Okay, Kayla, your job is Cloud Cost Management.
Go research it, talk to customers, figure out what exactly they want to do, and then bring it back into the product and build it with the engineering team that's assigned and works just on this. It's all common sense, but that's been our way to continually expand. We always look for opportunities in adjacent areas. We don't venture out completely into left field, into a completely new foreign area that has no adjacency, that we cannot provide any differentiation in, that has nothing to do with our core strengths. We always look at adjacent areas. Like Olivier and David were saying earlier, it's security because it's right adjacent to observability, and Cloud Cost Management because we also do cloud observability, and joining that data brings our customers additional value.
That's how we kinda think about it.
Great. Thank you, Amit. I'm gonna read a question from our webcast audience. It's from Kingsley Crane at Canaccord Genuity. A question about Cloud Cost Management. How are you communicating the value prop of the Cloud Cost Management offering to buyers? Is it most useful to deploy across all cloud hosts, or would it be feasible to apply to a few target hosts?
Well, if you ask my biased opinion, you would deploy it everywhere all at once. But the answer here is more nuanced. Like, you know, with Cloud Cost Management, there have been lots of tools, a cottage industry of things that have come about in this area. The value proposition that Datadog provides, the true differentiation, is not just in the ability to attribute costs and do, I don't know, activity-based costing and this and that to figure out who gets the blame for the cost, but to really be able to take action on these costs, action in real ways where developers can act on it and save money immediately by doing something. So that is the value proposition of our product, right?
Like, that allows you to look at cost data and marry it to observability data to go down to the level of the container, the host, the whatever that is the single biggest contributor to the cost, by marrying these two things together. Now, you may have a small environment where you're just monitoring a very small thing, and you look at the cost of that one thing, and you could use our Cloud Cost Management for that. You could get started really small, or you could have a massive environment with hundreds of thousands of hosts and various different applications, thousands of applications, millions of microservices, containers, et cetera, et cetera, and you could derive a lot of value in those type of organizations as well.
Ultimately, what this product provides is the ability for a developer or an operations engineer to have direct control and agency over the things that they control, and to understand their costs, as opposed to giving aggregated views that are mostly relevant to finance teams.
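The container-level attribution described here amounts to joining cost records with observability tags and aggregating. A minimal sketch with hypothetical records (the team names, container IDs, and dollar amounts are all made up):

```python
from collections import defaultdict

# Hypothetical hourly cost records already joined with observability tags,
# the kind of container-level attribution described above.
records = [
    {"team": "checkout", "container": "cart-7f2", "usd": 14.0},
    {"team": "checkout", "container": "cart-9a1", "usd": 11.5},
    {"team": "search",   "container": "idx-3b8",  "usd": 42.0},
    {"team": "search",   "container": "idx-5c2",  "usd": 6.5},
]

# Roll up spend per team for the aggregate view...
by_team = defaultdict(float)
for r in records:
    by_team[r["team"]] += r["usd"]

# ...but keep the ability to drill down to the single biggest contributor,
# which is what an individual engineer can actually act on.
top_container = max(records, key=lambda r: r["usd"])
print(dict(by_team))               # {'checkout': 25.5, 'search': 48.5}
print(top_container["container"])  # idx-3b8
```

The point of the design, per the answer above, is that the same data serves both the aggregated finance view and the per-container view a developer can act on.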
Great. Thank you. Let's do another in-person question. Fisher?
This is Peter Weed from Bernstein. Super impressive, the detail on the product management cells that you have. You know, the thing that's come to mind as I see the exploding portfolio of opportunity, not just within APM but across the organization, is how do we not overwhelm customers with choice, right? Particularly new customers as they're getting going. How do you help provide direction and a thread among everything that's going on, so that we don't wander into an environment where customers actually almost stop adopting because they're overwhelmed with what they can do, and they don't have a common path?
That's an excellent question, and this is an important part of our DNA. Like, the way we look at each different product, whether it's Real User Monitoring, Cloud Cost Management, et cetera, we think of each product as standing on its own and being a compelling product for customers, even if they're not adopting everything else from Datadog. When you look at the cloud adoption journey that Yuka talks about, or the numbers we present there, you see an adoption curve where customers are adding more and more products. When we go and sell to customers, we don't compel them to buy an entire solution with 20 or 30 products, the way traditional enterprise software sales work, right? Like, where you go and say, "Hey, here's like 30 things.
You need to buy all 30 SKUs in order for you to get value." You can start your journey wherever you want with Datadog, and you could have other things in your ecosystem where you have products from other vendors, things that you already bought from other places, without it diminishing the value that Datadog gives. Of course, our hope is that we can show you by connecting these things together, by adding more things, you get incremental value that you would not get from just doing all of these things independently on their own.
Thanks, Amit. We'll take another in-person question.
Hey, guys. Kash Rangan from Goldman Sachs. Thanks for doing this event. I had a question with APM. The market's seen multiple generations of technologies come and go. How would you characterize your structural advantage that could ensure the longevity? Maybe it's related to microservices, that is the pivot, maybe it's something else. Secondly, I was curious to get your thoughts. When you look at the market and look at customers that have deployed legacy APM or are looking at new solutions, what are the problems that they face that you're uniquely qualified to solve for them that others cannot? Thank you so much.
Yeah. One of the things that you saw in the demo, one of the things that you saw Omri do so expertly that I would never be able to do, is how seamlessly we can pivot from one view to another. Let me step back. When you think of application performance monitoring, some of these things are categories created by vendors and this and that. Like, ultimately, what do customers want? They want their applications to be performant and available for their customers, right? Like, that's what they want. Now, in the past, the way the world organized itself was siloed teams of network engineers, DBAs, developers, operations people, so on and so forth, each with their own set of tooling, right?
Like, each with their own set of tooling. A ticket would make the rounds from one team to the next team, to the next team, to the next team. Now, as we move into the cloud, many of these silos are breaking, and teams are reorganizing into single teams that have to look at every facet to see where the problem is. Application performance monitoring today, at least the way we see it with our customers, is a process of elimination on one hand, where you are looking at data in various different ways. You're collecting data from your infrastructure, from your application, from your logs, from front ends, and so on and so forth.
You are doing a process of elimination by starting at one end, no matter which end you start at, and getting to the other end to figure out like where the problem resides. That's one way to think about it. To get to a root cause of a problem as quickly as possible, the interconnectivity between these things can't just be a PowerPoint with chevrons pointing at each other. It needs to be a tightly integrated data model where everything allows users to right-click on things and drill down and drill up and move from one part of the product to another. That's one key differentiation that Datadog provides.
The other one is, the more data you collect from all of these different areas, the better you get at predicting and understanding problems before they occur. Because, you know, when you look at modern environments and large scale applications, typically problems start out with like a little innocuous-looking thing that happens in one corner, and then it spreads out, and then there's cascading catastrophic failure everywhere. The fact that we gather all of these things suddenly gives us the ability to use, like Omri was saying, nondeterministic ways to figure out, hey, this is where the problem's happening, and maybe address it before anyone else can. That is also a differentiation that comes from bringing all this data in one place.
Great. Thanks, Amit. I'm gonna take a question from the webcast audience. It looks like the cloud skill shortage is becoming more evident, so how do you deal with it as Datadog grows? How does the management team see it, and will turning it into net positive opportunities be possible through high productivity from new products that can ease the pains of skill shortage?
Trying to digest the question.
I know. Yeah. I think cloud skills shortage at the customers.
Yes. Look, you know, ultimately, as Olivier was saying earlier, our products allow customers to improve the productivity of their engineers. The way we improve this productivity is by making our products simple to use, easy to deploy, easy to adopt. This is part of our M.O. internally, right? Like, making products easy to deploy, adopt, use, and grow into. That directly ties into the skills shortage, because if the products are simpler and easier to deploy, you can have people ramp up on them much faster without requiring a Ph.D. in understanding what observability should be. So that kind of goes to the core of how the products are built, what they're used for, et cetera.
Okay, let's do another in-person question. Can you raise your hand, please? Fisher? Kamil.
Thank you. Kamil Mielczarek from William Blair. I wanna follow up on pricing. You're accelerating the pace of new product introductions, and if I look at your website today, I counted 18 modules on the pricing page. How do you think about the pricing strategy evolving long term? Do you think things like AI functionality could provide an opportunity to raise prices over time? And on the other side, where do you see the biggest threats from commoditization and competition on price?
Look, you know, we think along a completely different axis when we're thinking about pricing products, et cetera, which is mostly around the value we provide to customers and giving them the ability to choose how much of our products they use. When you look at all of our pricing, it's all based on customers' ability to make the choice of using certain products and not using other products. I could have very easily bundled it all into one big gigantic package, which would go back to the other speaker's original question, which was, you know, how do you deal with the complexity of so many different products and so on and so forth.
What you see instead provides customers the ability to start their journey with Datadog wherever they choose, and then grow into other things if we do indeed provide compelling solutions in all those other adjacent areas. It's on us to show differentiation and for customers to grow into those, as opposed to a gigantic bundle that's shoved down your throat and so on.
Thank you. Another in-person. Can you raise your hand? Thank you.
Thank you for taking the question. Datadog has always been sort of right at the leading edge, particularly in the core observability space. I wanted to get your take on eBPF and how that might be disruptive to how customers implement observability and potentially make it better across networking, APM, and infrastructure. Where are you on sort of the eBPF movement?
We've made a tremendous amount of investment in eBPF ourselves, including a number of acquisition-related things and internal teams that we have built specifically around it. The promise of eBPF is specifically around low friction in the deployment and collection of observability data, without customers having to instrument their code, discovering what is talking to what, and so on and so forth. What you will see in Datadog is that we don't sell eBPF as a technology. We sell products that solve a problem for the customers.
What you will see over the next short period of time is eBPF manifesting itself in various parts of our stack, our agents, and so on, to make it easy, where it makes sense, for customers to collect data without doing extra instrumentation, and easy for customers to understand what's talking to what and so on and so forth. You will see technologies like eBPF, and maybe something else that hasn't even arrived on the scene yet, appear in Datadog in a variety of ways, and us incorporating these things. Because our goal is always to make it as low friction as possible for customers to deploy and use Datadog and get the most value out of it without having to do a lot of work.
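The "what's talking to what" discovery eBPF enables can be sketched in miniature. Real eBPF programs run inside the kernel (via BCC or libbpf) and need no application changes; here the kernel-level connection events are simulated so only the aggregation step is shown. The service names and event shape are invented for illustration.

```python
# Sketch of the kind of data an eBPF-instrumented agent can surface:
# kernel-level connection events observed at the socket layer, collected with
# zero code changes to the application. Actual eBPF collection runs in the
# kernel; the event stream below is simulated, and all names are hypothetical.

from collections import defaultdict

# Each event: (source service, destination service) for one observed connection.
events = [
    ("frontend", "checkout"),
    ("checkout", "postgres"),
    ("frontend", "checkout"),
    ("checkout", "redis"),
]

def dependency_map(events):
    """Aggregate raw connection events into a 'what talks to what' map."""
    deps = defaultdict(set)
    for src, dst in events:
        deps[src].add(dst)
    return {svc: sorted(dsts) for svc, dsts in deps.items()}

print(dependency_map(events))
# {'frontend': ['checkout'], 'checkout': ['postgres', 'redis']}
```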
Okay, let's take one more question from the in-person audience. Anybody else have any questions for Amit and Omri? You're all satisfied? Okay. We'll let you go. Thank you very much, Amit and Omri. You can leave the stage.
Thank you, guys. Great questions.
Okay.
Okay.
With that, we are going to move on to our security offerings, and we're gonna focus today with you on cloud security management. To do that, let me introduce Prashant Prahlad, our VP of product for cloud-native security products. Prashant.
Thanks, Yuka. Can folks hear me okay?
Yeah.
Thank you. All right. Hi, everyone. My name is Prashant Prahlad, and I'm responsible for cloud security products here at Datadog. Just in terms of introduction, I joined Datadog about five months ago. Really excited about the opportunity ahead of us, and also excited about the focus this team has on customer and execution. Spent nearly a decade prior to this at AWS building and operating what became foundational DevOps and security services, and I was also at VMware during the very early days of virtualization, before we had cloud. As I mentioned, I'm excited about the opportunity because it's a really hard problem to solve, right? As you heard in the keynote today, security in cloud-native applications is quite different from traditional security.
As cloud environments grow, dev and ops teams naturally grow along with them, and this increases the pace of change and the chances of misconfigurations. Misconfigurations are responsible for about 90% of security breaches, and they're all avoidable. You can see how a catastrophic event that could potentially have been avoided just doesn't get caught. Securing cloud environments, by definition, brings a broader attack surface than what you had in traditional deployments, right? Lastly, what you should understand is that it was normal earlier for specialized tools to be built by specialized experts in the area. Security tools focused on the network and alerted security teams with a wide range of alerts. Security engineers would then decide which ones were important and dive into those with minimal information. This is just not possible today.
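The misconfigurations Prashant refers to are the kind a posture-management rule catches mechanically. A minimal sketch, with an invented resource shape and rule names (not Datadog's actual CSPM rules), might look like this:

```python
# Hypothetical CSPM-style check: flag cloud storage buckets that allow public
# access or lack encryption -- the avoidable misconfiguration class described
# above. Resource fields and finding names are illustrative only.

def check_bucket(resource):
    """Return the list of open findings for one bucket resource."""
    findings = []
    if resource.get("public_access"):
        findings.append("bucket-public-access")
    if not resource.get("encryption_enabled"):
        findings.append("bucket-unencrypted")
    return findings

buckets = [
    {"name": "billing-exports", "public_access": True,  "encryption_enabled": False},
    {"name": "app-assets",      "public_access": False, "encryption_enabled": True},
]

# A continuous scan over the inventory catches the bad bucket before an
# attacker does, instead of the event "just not getting caught."
report = {b["name"]: check_bucket(b) for b in buckets}
print(report)
```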
When you have an incident or investigate an issue, you have to work across many different teams, so you can't afford these overheads. Let's focus on the state we wanna get to, right? Let's start with the most complex and expensive part of it for a business: people and their productivity, right? Developers use a bunch of old and new software to develop something quickly, but may not be aware of the code-level vulnerabilities in it. Operations teams get deep observability data to help them monitor and debug issues, but rarely have security context. Security teams, well, they actually can't hire or scale their teams, yet they're responsible for raising the bar in security and securing everything, right? This setup just cannot work, given the scale and size of the teams, the responsibilities they have, and the tools they use.
DevOps teams, the intersection of dev and ops in this slide, use the products you heard about today to develop and operate cloud infrastructure. They're often responsible for securing and remediating security issues, right? The security teams identify these issues, and the DevOps teams investigate and fix them. They are becoming the natural owners for cloud applications, with help from central security teams. This intersection of DevOps and security, or what people like to call DevSecOps, is quite real. We're seeing that happen in organizations.
DevOps teams typically nominate a person, a role, a security ambassador or security guardian role, to help them manage risks proactively or interface with that central security team when there's an issue. Others recognize the shift too, and security companies will sell this persona a bunch of traditional security tools that were designed for top-down security and ask them to go remediate issues. They'll sell these tools at a premium and overwhelm you with the volume of alerts. Remember, that's what they did traditionally, but they rarely provide the context that helps these DevSecOps engineers go investigate the issue. What if CISOs and security teams could bring about that security awareness they want in their organizations and enable teams to proactively handle security issues using tools they already know, understand, and like, right? Now, this is the part where Datadog comes in.
This is why I'm excited about this opportunity, right? We bring several strengths as we transition to DevSecOps, for lack of a better term. We unify what DevOps and security teams are seeing and acting on, right? A unified platform shares insights and helps them collaborate better together. Fostering collaboration is nothing new. We've done that with Datadog for DevOps teams in the past, and we need that more than ever now for the security area. Second, we are leveraging the richest context we already have from this observability platform, right? This is not magical. Our investment in the platform allows us to leverage this data for security issues. Lastly, we are reducing that technical friction or even organizational friction.
Like, you don't have to convince another team to install an agent, you don't have performance penalties, and you don't have to deploy various other tools or move data around to get this product to actually work, right? Datadog has 600+ integrations, which means that if you have a workflow where you use something else, Datadog would seamlessly fit into that, right? As I said before, we've been hard at work at this. We launched Cloud Security Posture Management that helps DevOps teams proactively mitigate their findings. We launched Cloud Workload Security, which is an eBPF-based product that allows you to monitor threats and attacks. Today, we announced the availability of another product called Resource Catalog. Now, Resource Catalog is a very foundational element that allows you to get an inventory of all your cloud assets and their security posture.
CWS and CSPM, the acronyms of those products out there, and Resource Catalog are very important tools that solve very specific problems for our customers. As Amit mentioned, this is what we do: we find a problem, we build a product to help solve the problem. Now, earlier today, we announced that we're bringing it all together under Cloud Security Management. Cloud Security Management is what our analyst friends call a Cloud Native Application Protection Platform, or CNAPP for short. Now, if you've dealt with security, we put an acronym to everything, and CNAPP is probably the closest one that fits this. What's unique about what we're doing at Datadog is that we have a rich, context-aware CNAPP solution, without the friction to install and use it, with over 20,000 existing DevOps customers, right?
Now, this unified solution brings specific characteristics that allow for collaboration between DevOps and security teams. Let me contextualize this overall and where we stand with Cloud Security Management. Now, Cloud Security Management is our newest pillar of cloud security. As a customer, if you're already monitoring your infrastructure with Datadog, you're able to get Cloud Security Management without having to install any new agents, so zero overhead. If you're an APM customer, you're able to correlate application traces with security insights, threats, and vulnerabilities with Application Security Management, which is our second pillar. If you're a Log Management customer, Cloud SIEM will extend that to gather important security activity from those logs, and that's our third pillar. Now, as you also heard, we're not a slide company, so I'm gonna jump right into the demo here and show you what we built.
This is going to be a live demo to make things exciting. If things don't work, that's all part of the plan. This is Cloud Security Management. You can see that if I'm a security engineer, I see an active threat in my system that I can analyze, which is very useful. I can look at misconfigured resources in my account, and a posture score that tells me how I'm doing, and more importantly, how I'm doing relative to the last 30 days, to tell me if I'm improving or getting worse. Now, this is a demo environment, so please don't freak out. Things are getting worse. I can also see how much of my cloud infrastructure this product is covering, right? If I were a security analyst, I would be very focused on active threats in the system.
If I were a DevOps user, I have this product called Resource Catalog, which, again, we launched in beta today, and I'm gonna dive deeper into it. If I were an administrator, I get to see how I'm doing over time. I also get to see, you know, how I'm doing against specific compliance reports. Here I'm showing you PCI, SOC 2, and CIS, and I can dive into these, look at a report, download it as a PDF, and give it to my auditor. Like, it's really that simple. I can also look at, you know, what this report looked like three months ago, download that, and give my auditor some peace of mind that we are focused on it and reducing compliance risk. Now, I wanna show you what a security analyst would do with this page, right?
A security analyst who's interested in active threats in the system would naturally gravitate towards attacks and risks that they need to do something about. There's an action I need to take from this page. Now, I scroll down and see various, you know, again, demo things that we're doing. Let's look at "Password Utility Executed." Now, the password utility is just a command that you can run on a machine. It's just passwd, and it asks you for your current password, and you can change your password. Now, if that happens on a regular basis, either someone's trying to lock a user out, because typically IT policy will say if someone changes their password more than N times, they get locked out, or a script is trying to crack a user's password. Either way, as a security person, healthy paranoia requires you to investigate these types of issues. You see, I can see now this is running on a regular basis. I just click into Password Utility Executed, and then I can see the host on which that execution is taking place. I can get a full view of what command was executed on that host, what the process tree associated with it looks like, and context on the environment, who the users are, and all the relevant tags. Guess what? I didn't install an agent to get this to happen. This is the context Datadog already had, because this host was monitored by one of our observability products.
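The detection logic behind a rule like "Password Utility Executed" can be sketched as a simple sliding-window count over process-execution events. The thresholds, event shape, and function name below are invented for illustration, not Datadog's actual rule engine:

```python
# Toy version of the detection described in the demo: flag any host where the
# `passwd` utility runs repeatedly inside a short window, which may indicate
# a lockout attempt or a password-cracking script. All thresholds and event
# shapes here are hypothetical.

from datetime import datetime, timedelta

def repeated_exec_hosts(events, command="passwd", threshold=3,
                        window=timedelta(minutes=5)):
    """Return hosts where `command` executed >= threshold times within `window`."""
    flagged = set()
    by_host = {}
    for ts, host, cmd in sorted(events):
        if cmd != command:
            continue
        times = by_host.setdefault(host, [])
        times.append(ts)
        # keep only executions still inside the sliding window
        times[:] = [t for t in times if ts - t <= window]
        if len(times) >= threshold:
            flagged.add(host)
    return flagged

t0 = datetime(2022, 10, 18, 12, 0)
events = [
    (t0, "web-1", "passwd"),
    (t0 + timedelta(minutes=1), "web-1", "passwd"),
    (t0 + timedelta(minutes=2), "web-1", "passwd"),  # third hit in 5 min -> flag
    (t0 + timedelta(minutes=2), "db-1", "passwd"),   # single occurrence, benign
]
print(repeated_exec_hosts(events))  # {'web-1'}
```

The point of the demo is that the flagged event arrives already joined to the process tree, user, and tags the observability agent had collected anyway.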
Now, I'm using one of the pillars of Cloud Security Management, one of the capabilities, with Cloud Workload Security, and this is that eBPF agent that gives you this level of detail into process integrity monitoring and file integrity monitoring. Now, as a good security analyst, I don't want others to get alerted by this, so I mark this as being under review. At this point, I want to collaborate. I wanna take this to the DevOps team because, you know, I don't really know exactly what this workload or this host is doing. I need to find a different team to help me out with this. Now, I can declare this as an incident from this incident page, and you can see I have various sets of context that are already pre-populated here, right? I can add various folks on the team.
I don't wanna page these people, but I know the team members who are involved in this. Even that process tree you saw, that detailed context of who ran this, what commands were running, what the process tree looked like, all of that is packaged together into this incident page, and I can quickly declare this as an incident, right? That creates this incident timeline that tells you what happened from the time I declared the incident, and it will also automatically create a Slack channel for me. Slack is one of the 600 integrations we have, but you could use Jira, ServiceNow, or whatever you use internally to collaborate. None of these is just a checkbox integration. These are deep integrations that customers actually use.
If I was collaborating with my team and I had made some comments on the Slack channel, you can see how those quickly show up on this detail page without me having to manually transfer that context back into the incident page. Now, let's say I mark this incident as being resolved. Once I resolve it, as you saw today, it generates a postmortem. I can go back and analyze what happened. Typically, the best practice in this case is for the security person to sit down with the DevOps engineer and make sure that we dive deeper into what exactly happened, so it never happens again. This is a very important part of cloud security that we enable.
You know, a lot of this, as I mentioned earlier today, can be completely automated through the workflows and notifications that you heard about today. This can happen quite seamlessly. The idea behind this product is to allow anyone to be able to use it. If you wanna try it right now, you can do that too. It's also meant for investors, not just DevOps people. All right. As a DevOps engineer, I can also use this page to dive into the new product that I wanna showcase, called Resource Catalog. Resource Catalog is in beta; we just launched it. It gives you an overview of your infrastructure, storage, compute, network, and so on, overlaid with risks. Now, this is something that customers typically struggle to understand as their cloud sprawl increases.
They wanna know who's creating instances, who's creating S3 buckets, and what the security risk is. Getting this working was trivial for me, right? Even within Datadog, we had all this context already. Building this view took quite a lot of hard work, but it was very easy for us to use our platform to surface this kind of information. If I was working on a specific team, we are able to identify who those team members are and look at the infrastructure that specific team owns. If there's an issue, like a database encryption issue or whatever, I can share those findings with the specific team members, right, through this interface. Again, as you heard today, Jira, ServiceNow, Slack, whatever you use internally to communicate within your team, we support that.
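The kind of rollup a resource catalog presents, assets grouped by type, overlaid with open findings, and sliceable by owning team, can be sketched with a couple of aggregations. Field names and helper functions here are hypothetical, not Datadog's Resource Catalog API:

```python
# Illustrative sketch of what a resource-catalog view aggregates: cloud assets
# grouped by type, overlaid with how many open security findings each group
# carries, plus the per-team slice a security ambassador would triage.
# All field names are invented for the example.

from collections import defaultdict

resources = [
    {"id": "i-01", "type": "compute", "team": "payments", "findings": ["no-imdsv2"]},
    {"id": "i-02", "type": "compute", "team": "search",   "findings": []},
    {"id": "b-01", "type": "storage", "team": "payments",
     "findings": ["public-access", "unencrypted"]},
]

def catalog_summary(resources):
    """Roll resources up into type -> {asset count, open finding count}."""
    summary = defaultdict(lambda: {"assets": 0, "open_findings": 0})
    for r in resources:
        row = summary[r["type"]]
        row["assets"] += 1
        row["open_findings"] += len(r["findings"])
    return dict(summary)

def team_view(resources, team):
    """IDs of a team's own at-risk assets -- fixable without the security team."""
    return [r["id"] for r in resources if r["team"] == team and r["findings"]]

print(catalog_summary(resources))
print(team_view(resources, "payments"))  # ['i-01', 'b-01']
```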
You can proactively mitigate these kinds of issues without involving the security team. This is what a security ambassador or a cloud ambassador does in a DevOps team. They work within their team to go mitigate issues, and we are trying to enable that. That's it for the demo. Now, if we can flip back to the slides: what we learned today, and hopefully this was easy for you to follow, is that security and DevOps teams can resolve these kinds of incidents in minutes because we enable collaboration, right? We use that deep observability context that makes cloud security manageable and easy to use for everyone, right? All of this is built on reducing friction and enabling collaboration between those using our products, and customers already love and use this.
Second, DevOps teams are able to get visibility into what they need to fix, which they did not have prior to this, and they're able to do it proactively without involving the security team. Now, I showed you one pillar, Cloud Security Management, today. We have also advanced our other pillar, Application Security Management, here at Dash, with vulnerability management and the beta of native protection. You can check out our comments and demo from the keynote earlier this morning to learn more. With that, I will hand it back to Yuka for the Q&A session.
Fantastic. Thanks, Prashant. Okay, we are gonna do our third and final Q&A session. Prashant, please take a seat. Joining Prashant will be our SVP of product who leads all of our security offerings, Pierre Betouin. Okay, same format as before, everybody. In person, please raise your hands. If you are on the webcast, please feel free to type in questions into the chat window, and we'll start in person. Please go ahead.
Hey, thanks for doing this. This is Eric Heath with KeyBanc Capital Markets. I wanted to ask two questions. One quick one, just should we think of resource manager essentially as a beginning or actually a CMDB, if you will? Is that the right way to think about it? Two, appreciate DevOps is a much bigger kind of user base within an organization, and they're increasingly doing DevSecOps, and security falls on their shoulders. Do you still find within your customers that security still rests with the security team, like the budget, and it's more of a push onto the DevOps team, and that's really where the budget sits with the security folks? How do you kinda sell into that motion, if you will?
That's a lot of questions. For the first one, the Resource Catalog is the central place where you will see all your cloud resources, wherever they are, so across the different cloud accounts that you have. You'll have the overlay of the security posture, the potential misconfigurations, the vulnerabilities, or the threats on those assets. We are really focused on the cloud use cases here; we are not doing any IT corp stuff. The example that you gave is much more weighted towards the IT corp use cases. That's the central piece, the foundation, of Cloud Security Management. This is something that you will find in most of the CNAPP solutions. You know, you will always find that source of truth, that repository where you can find all the assets.
It's not really playing in another category; it's part of the CNAPP category. The second question was about the DevOps budgets, right, on security. Cloud Security Management is probably one of the security categories most adjacent to the customer base that we have today. Datadog is a DevOps platform in the first instance, and we're expanding the capabilities of that platform. If you look at the spectrum of security maturity, Cloud Security Management is very adjacent to the DevOps customers that we have today. If you think about vulnerabilities and misconfigurations, most of those topics are in DevOps' hands. They are the ones who can mitigate. These are not problems that security teams can actually mitigate. You need to redeploy an instance.
You need to change a cloud configuration. That's not an action that the security team could take. Those topics are really between DevOps teams and security teams, and we see many customers who don't really tackle that from a security point of view, and who want budgets on the DevOps side.
Okay, great. Another in-person question?
Yeah.
Okay.
Adam Tindle, Raymond James. Prashant, I just wanted to kinda double-click on cloud security, broadly speaking, because a lot of what you're talking about here is, you know, frictionless installation, which sounds really interesting. It sounds like you need to be an existing Datadog customer to really get the benefits of a lot of this. Do you envision a world where you can kinda decouple and potentially lead with the security platform at some point versus having to be an existing Datadog customer? Maybe lead us into that vision. What would it take to achieve something like that? Thank you.
Yeah. Currently, we are targeting existing Datadog customers, and that is the easiest way for customers to start using Cloud Security Management. Would we ever target customers who are not existing Datadog customers with Cloud Security Management? Potentially, but we're only getting started on that journey right now, and we will discover a lot of things based on customer feedback, and hopefully build that product someday.
You know, the same way we often land with one of our three observability pillars and then expand on those use cases, we have the same approach with security. It's very simple for us to extend usage from observability towards security. To start with new customers on the security side, you need to build a stronger brand, and that's what we are working on right now, but it will take some time. We're at the very beginning of that journey today. The easiest path for us to grow is really to focus on the expand motion right now.
Okay. I'm gonna take a question from the webcast audience. Actually, there's two questions here from Fatima Boolani at Citi. I'm gonna just break it apart, actually. First question, what portion of the aggregate telemetry the Datadog platform collects can be repurposed for cyber defense outcomes or use cases?
Um-
I think from the observability.
Yeah, yeah. Absolutely. How much can we leverage from the observability side? What you've seen here, in Prashant's demo, all of this would have required probably 10 different tools to be put together to get that level of value. You would have needed a Cloud Workload Security product, maybe an APM product, many different solutions like this. Most of the traffic that you will see today, most of the malicious traffic, is not harmful to workloads. It will be generated by bots. Without the observability insights, it's impossible to know if it's relevant, if it's actually harmful for the application or not, and if you need to trigger any remediation, if you need to take any action on the DevOps side.
The observability context will help us reduce the noise, focus on what's important, and prioritize the work that we have here. It's basically the central piece of the security solution. You know, one of the first problems in that space is noise. The way to reduce noise is usually to get more data, leverage more data, more depth. It's not necessarily volume, like you can see in the SIEM space, where people have a lot of volume, a lot of data, but they still struggle to reduce the noise. It's usually by having more depth and more quality in the insights that you gather. Observability is the pillar of everything we build on the security side. It helps us reduce the noise and provide very deep investigation capabilities to our users.
That's also the way we can bridge DevOps with security teams by bringing a unified shared data platform where we serve the information through different lenses, a DevOps lens or a security lens.
Great. Fatima, I'm gonna restructure your question a bit for our purposes. So this new CSM, Cloud Security Management umbrella, can you talk a little bit about why that's one SKU, and then you've got different SKUs for SIEM or Cloud SIEM and Application Security Management, and why are we structuring it that way?
We learn every day from our customers, and what we learned is that our customers are interested in a broad set of security capabilities. They are not that interested in a very narrow problem. They want to solve more things than buying a small solution that is focused on one set of challenges. What we learned, from the DevOps perspective, so the customer base today, is that one unified solution, one dashboard showing the threats, so the attacks and the attackers, and showing the potential vulnerabilities and weaknesses, is a much more compelling story than a very specific product focused on bringing visibility into lateral movement or FIM, which is what CWS provides, for instance.
The sum of those products makes it very compelling.
Great. Okay, let's take another in-person question. Fisher?
Thank you for hosting us. This is Yi Fu Lee in for Jonathan Ho at Canaccord Genuity. Just a small philosophical question. We just wanna get the panel's opinion: we obviously have seen the convergence of observability and security happening. What's the benefit, do you think, of starting with observability first and going to security, versus the other way around? The second part of the question is, I guess, why should a customer purchase the security portion, you know, such as CloudSec, from an observability-first provider, versus the other way around?
One of the biggest frictions in the security industry is the deployment of technologies to monitor or to actually protect, and they need to be vetted by DevOps teams. Usually you ask DevOps teams to deploy another agent, which could mean a loss of performance, which could impact the reliability of their scope without bringing a lot of upside. When you sell to security teams and you try to deploy security technologies, you face a lot of friction. We are tackling the problem the opposite way, and this is very unique to observability vendors.
We have a lot of DevOps customers who are using the platform every day for their own purposes, who have deployed those agents at scale on every instance in the infrastructure, and we provide security capabilities in those same agents. We are basically removing the number one friction that we have seen in the industry. The second challenge that you have when you approach the problem that way is to provide a continuous experience, starting from the left, starting from the DevOps, towards more security-minded personas, potentially more mature security teams. This needs to be done in a continuous way. What you see with Cloud Security Management, and what you could see this morning in the keynote, is that we invest a lot in all the initiatives on the left.
Datadog started in production, so we have shined in, you know, monitoring cloud instances in production. Basically, we are shifting left towards DevOps teams and providing visibility into misconfigurations and vulnerabilities. In addition to that, we also provide some remediation capabilities with protection. The number one topic for us is to bridge our customer base with the security use cases.
Okay, thank you. Another question from the audience. Thank you.
Thanks for doing this. My question is, this is frictionless to install to the install base, and you have unparalleled context in terms of what's going on. I guess, what is the friction that you see in adopting security? I mean, the way you present it, it sounds like just the obvious choice. That's my first question. Then my second question is, and this could be totally off-base, but when I listen to you talk, it sounds like security is actually a people problem. In other words, it's really about just, you have to collaborate, you've got too few people. My question here is, is the collaboration component going to be a much bigger part of this solution than typically you see in observability? Like, we saw the CoScreen solution.
You know, is that going to be perhaps a driver of adoption? Curious on that. Thanks.
If there is a friction, or at least a challenge that we're facing, it is, again, to connect the DevOps customer base with security solutions. It's much more about what you provide. You know, to Amit's point earlier, we grew Datadog very organically. We basically solved a set of problems for our customers, and we started to expand by solving additional problems that were adjacent to the ones we solved earlier. That's exactly the same for security. One of the risks would be to create a set of products that would be disconnected from our customer base. If you provide security solutions for very mature security teams who are not yet Datadog customers, you will create a disconnection with the customer base, and you will have to bridge that disconnection with humans, go-to-market teams, or whatever.
This is not the way we have done things at Datadog before. It's not a friction; it's a challenge. What is very important for us is to create a continuum between the DevOps solutions that we have today and the security solutions that we are adding right now. This is really one of the big challenges that we are facing right now, and we are putting a lot of focus on it, and we are listening to our customers. We have, you know, insights on the traction, whether there is a pull from our customer base for specific features. What we see is that the Cloud Security Management platform is much more compelling for them than the sum of those smaller features that we shipped earlier.
All right. In person. Thank you.
Thank you. What are most of your customers using for their cloud security ahead of adopting your product suite? Or what are they doing, if anything?
What's your question exactly?
Well, are you typically replacing some type of cloud security suite when you come in and sell your upsell product in the security, like, space broadly? Or have people just not secured their cloud or have anything actively monitoring it?
We don't face a lot of competition right now coming from that DevOps angle, but competition is real, and CNAPP is a very dynamic security category. What we see regularly are modern security companies like Lacework or Orca. These are modern security vendors, usually offering agentless CNAPP solutions. They are the ones that we see on a regular basis. It's not something that we face on every deal. The more traditional vendors would be Palo Alto with Prisma Cloud. CrowdStrike has also invested a lot on that front. Those would be the competitors that we see on a regular basis.
No, sure, Pierre. I did a bad job with my question. Do you typically have one of those competitors in place already when you're upselling your suite, or is it usually a green field?
No, it's greenfield.
Okay.
It's greenfield most of the time. We rarely replace at that point.
Great. Thanks.
Okay, another question, please.
Yeah, hi. Can you maybe just double-click on some of the things that you're doing to get the more traditional security folks to be comfortable with adopting your security tools instead of going with the traditional security vendor?
In terms of roadmap, you mean?
Just in general, maybe being comfortable with the Datadog brand from a security perspective instead of going with a Palo Alto.
No, I think it's a question of depth. You know, you have the breadth, and I think it's very important for us to provide that Cloud Security Management solution. If you want to address more mature security teams, they usually have a lot of requests around depth: the way you are going to mitigate, take specific actions, automate. These are the things that we are working on. There is obviously also a brand dimension, establishing a security brand. Before investing there, we want to make sure that we can support more maturity on that front.
Okay, great. Another question, please.
Thanks. Hey, Mike Cikos from Needham. I'm thinking about the security products that you guys have, and I know you've spoken about Watchdog AI, right? I'm curious, you're able to tap into these 600+ data sources you talk about. Can you help us think about Watchdog AI maybe more broadly? Is it one cohesive AI model, or is it a number of separate AI models under the Watchdog umbrella? The follow-up is on all that data you're ingesting: is that all being piped through the security AI model? I'm trying to think about how you guys prove that efficacy, given the volume of data, and whether that's what differentiates you.
We have a lot to do on the AI front. The reason is we have a lot of data, and we can start to surface more anomalies, so the unknowns. What we do right now is really leverage the observability context that we have to provide continuous end-to-end experiences and enable that collaboration and so on. We don't currently invest massively in AI on the security front. There will be a lot of different opportunities for us to surface what we don't know yet. We already have a lot of work to do to surface what can be known from the existing datasets.
Okay, let's take one more question.
Thank you for the last question. On the topic of what gives Datadog the license to move into security and win in security, you've had a number of slides that spoke to all the data that's relevant. On the endpoint side, which seems like a pretty crucial data source in terms of, you know, doing the correlation across all the possible threat vectors, what's the strategy around endpoint? It doesn't sound like you're going into endpoint security, but is that a partnership thing? How do you get access to that data to drive that improved threat detection, vulnerability scanning, that type of posture?
Yeah.
You can see the portfolio. We have those three products: Application Security Management at the application level, Cloud Security Management at the cloud and infrastructure level, and Cloud SIEM. We really focused, and we still do, on cloud use cases, and we are not that much into IT/OT use cases, endpoints and so on. We have a lot of different integrations, so the easiest way for us to start to correlate those different activities is to go through integrations and basically use the Cloud SIEM. If you want to correlate some EDR activity with cloud security activity, our Cloud SIEM is probably the easiest way to do that.
You would use one of the EDR or XDR integrations to do that.
Okay. With that, we are going to conclude our investor meeting. Thank you all so much for spending time with us today. Thank you, Pierre and Prashant. We'll talk to you again on November 3rd.
Thanks.