Dynatrace, Inc. (DT)

45th Annual William Blair Growth Stock Conference

Jun 4, 2025

Jake Roberge
Research Analyst, William Blair

Did he kick off? All right. Thanks, everyone, for joining here in person or listening over the webcast. Before we begin, my name is Jake Roberge. I am the research analyst here at William Blair who covers Dynatrace. For a full list of our research disclosures, please visit our website at williamblair.com. With that, really excited to have Rick McConnell, the Chief Executive Officer of Dynatrace, and Jim Benson, the Chief Financial Officer of Dynatrace, here with us today. Thanks for coming. Appreciate it.

Jim Benson
CFO, Dynatrace

Thanks for having us.

Jake Roberge
Research Analyst, William Blair

Before we jump into the fireside chat, Rick is actually going to start off with a little bit of a presentation just to level set the room for those that might be newer to the story of what Dynatrace does, the market they're attacking, and then we'll jump into the fireside chat. Rick, I'll turn it over to you.

Rick McConnell
CEO, Dynatrace

All right. Thanks very much, Jake. Good morning, everybody. To begin, it will come as no surprise to any of you that the world runs on software. Software these days, as a result of that, has to be always available, reliable, has to be secure, has to deliver an exceptional user experience. Observability is essentially the category of software that enables this to occur. Now, observability has really gone through multiple levels, multiple phases. The first phase was really what we refer to as monitoring. Monitoring is really largely about dashboards. Dashboards, you might imagine, give you a status code as to whether your software is working or not. Red, yellow, green. Is it working great? Is it not working at all, etc.? The challenge with dashboards is that they tell you that it's not working, but they do not tell you what's wrong with it.

By not knowing what's wrong with it, you don't know actually how to fix it. It turns out that many of our competitors, and really the state of the art today in much of the observability community, is still dashboards. It still is red, yellow, green. Observability has really moved to the next phase. That next phase is really oriented at a much more intelligent set of systems that are constantly evaluating your software environment and using AI, in the case of Dynatrace not just for the last year and a half but for more than a decade, to analyze billions of interconnected data points to tell you not just that something is wrong or where something might be wrong, but rather where it is wrong, precisely why it is wrong, and therefore how you fix it.

Maybe what's so exciting to me about observability as we look ahead is this notion that the next stage is really moving us into a world of autonomous systems using agentic AI to not only give you very precise insights, very precise answers, but rather taking you to the next level of actually fixing those issues on its own. Essentially going through a step of autoremediation where agentic AI can evaluate what happened and then actually solve the problem for you. This is the evolution from monitoring to observability to agentic AI that is the journey that we're on in observability. Now, the result of this is a very large and rapidly growing market. We see the observability space itself at more than $50 billion and the application security component at around $14 billion, for a combined total of around $65 billion. Very large market.

You can imagine this because, as I said at the outset, the world runs on software. That software has to be operational. Now, why is this getting harder? Why is it getting harder to manage software? I walked into a large oil and gas company down in Houston a while ago. The principal there took me into their network operations center. Hundreds of people staring at hundreds of screens monitoring their software. His comment to me was, "Rick, this is what you need to help me get rid of." Why? It's because hundreds of people staring at hundreds of screens to run thousands of applications was unsustainable. You simply can't take an organization that is, in this case, hundreds of billions of dollars of revenue and be staring at screens, seeing something go red, and then evaluating, "Okay, it's red.

Who do I call? What do I do next?" You see a very lengthy triage pattern or problem that you're trying to solve where you first have to figure out who to call, then they have to get on it, then they have to evaluate root cause, and then once they get to root cause, they have to figure out what to do about it next. Very complicated, very lengthy, and in the meantime, you can have systems that are down. If you look at all of our usage of software as individuals, as end users, we expect a user experience that works perfectly. We need to find the gate to our airplane. We need to be able to buy product online through e-commerce. We need to be able to watch media streams.

Whether we need to do financial banking or anything else, we expect it to work perfectly then and there every single time. If that system is down, we have a major issue, or I should say the organization has a major issue: their software is not working and they are having a challenge delivering the user experience their end users expect. This problem is made much worse by the cloud. Now, clearly, hyperscalers are exploding, about $250 billion of overall annualized revenue through AWS, Azure, and GCP these days, growing in the mid-20s. It is accelerating the delivery of software. This is a good thing because for organizations, it is making it easier to deliver software.

The problem is it is creating a massive explosion of data and an incredible increase in software's complexity. The result of that is more and more, you have these billions of data points that you need to be able to arbitrate and arbitrate rapidly through insights coming out of a sophisticated observability system. The cloud is creating fragmented data at enormous scale that needs to be processed, and that is what observability is all about. Now, Dynatrace, we think of as the leading AI-powered observability platform. This is important because of all of what I've said heretofore. You cannot process this data manually. You cannot get the number of people needed to address these kinds of issues. What you need is really two things.

You need, number one, a sophisticated system that is analyzing the data to create insights, and then secondly, to enable those insights to be actionable so that you can then immediately resolve these issues as you look ahead. Moreover, it is not just about technical understanding of your business. It is also about what we refer to increasingly as business observability. Business observability is not just how your software is running, but how your business is running. I was in the Middle East meeting with the largest customer we have there, a very large bank in Saudi Arabia. The CTO with whom I was meeting said, "Rick, the CEO wants Dynatrace on his desktop." This was a major evolution because usually we sell to AI Ops, we sell to IT, we sell to developers, we will sell to platform engineering.

What we're seeing is whether it's airlines, financial services, healthcare, travel, you name it, we are seeing a migration toward organizations really wanting to better understand their businesses themselves. It is this business observability that I think is the next foray of observability as well. Now, the other evolution of observability is toward completely integrated platforms and systems. At Dynatrace, this is specifically what we've done. We look at not just application monitoring, but also infrastructure monitoring, real user monitoring, and log management. There are multiple different segments which we put together, which then provide a completely integrated perspective of your overall monitoring environment. In those early days that I talked about, the early days of monitoring and dashboards, what happened was that you might have multiple different vendors.

You might have a vendor that would handle applications, a vendor that would handle your infrastructure, another one for end user monitoring, another one for logs, etc. That is disappearing. One of the reasons for that is because it is insight across all of those components that gives you the most comprehensive view as to what's happening in your environment. If you have to piece together all of those insights independently, it makes it much, much more difficult to be able to provide those insights needed. Having all of those insights together and combined is a very sensible approach because then you have them all in one place. That is what we do with this platform. As we look at the Dynatrace difference, we see it as these bottom three elements on this chart.

Number one, we have a completely integrated data store, which we call Grail. What Grail does is it stores all observability data types, in our vernacular: logs, traces, metrics, real user data, etc. All of these data types in a single data store. This is important because we are able to manage all of those data types and keep them together in context of one another, which provides the best insights associated with your business overall. Secondly, we have a very sophisticated AI system that, as I say, is not something that we conjured up in the last year or 18 months or even two years since generative AI became so prevalent in our society. It is something that we've been evolving for over a decade. And it consists of multiple different techniques of AI. One is causal AI.

Causal AI is focused on root cause analysis, precisely what happened in your environment based upon that data, based on those insights. Secondly, predictive AI. Predictive AI is oriented to then applying anomaly detection, machine learning on top of causal AI to anticipate where issues are going to occur so that you can then address them before issues begin. I talked about this notion of software working perfectly. Software can't work perfectly if it breaks, and then you have to fix it. Software can only work perfectly if you anticipate it in advance and fix it before something happens. Third and finally, it is about generative AI. Generative AI provides a natural language interface into the overall platform to be able to bring the Dynatrace platform and its insights to a much wider array of individuals.

By doing so, you can then accelerate the insights that come out of the Dynatrace observability platform. Finally, it's about automation. As I said at the outset, as we head into a world of agentic AI, what our companies, what our customers want is they want a fully autonomous system that can auto-remediate. Let me give you an example. We have a large customer in British Telecom. They began with on the order of 16 observability tools, none of them particularly connected. The result of it was a set of insights that were hard to piece together. They bring in Dynatrace, consolidate down these tools, integrate the data stores, accelerate the insights that they get out of those systems, and then can begin to automate the results based on those insights.

The result of this was a 50% reduction, five-zero, in incidents and a 90% reduction in mean time to respond or recover. Those stats are enormous. Imagine huge companies, huge organizations that can reduce the number of incidents by 50% and then reduce the overall amount of time it takes to resolve incidents by 90%. This is huge. They estimated the cost savings associated with this at GBP 28 million over a three-year span. This is precisely what we see from our largest customers. We see them reducing incidents, reducing MTTR, as it's called, and saving a substantial amount of money, not to mention the fact that they are able to deliver a much better user experience as a result of having software that works better, that is more available, more reliable, and that is more performant, generating therefore a better user experience.

If you look at the various analyst reports, Gartner, Forrester, GigaOm, ISG, and others, we almost always, I'd say always, I think, are in the upper right quadrant or equivalent of leaders in the space. The reason is because we deliver these kinds of answers, not just data, that enable software to work better. We target the Global 15,000. We do this because we do have a sophisticated system, and it enables the best analytics based on the broadest array of data. You get the most data out of the largest companies. We tend to focus there. We do sell to a wide array of personas, not just AI Ops, but we sell to executives. We sell to platform engineering. We'll sell to SRE teams. We'll sell to developers. A multitude of different personas are interested in observability data and its insights.

Finally, before we go into our Q&A with Jake, this is a bit of Dynatrace in terms of our financials at a glance. About $1.7 billion in overall ARR. We do not lose customers very often; it is very rare. We operate at gross retention in the mid-90s. Last quarter, we grew this business at 20% in subscription revenue with a 29% operating margin and 32% in pre-tax free cash flow. It is a very, very healthy business, growing rapidly with customers who very much are on board with leveraging the value of Dynatrace and observability. To sum up, a very large and growing TAM and market, an exceptional set of financials, a very healthy enterprise spinning off about $0.5 billion or more of free cash flow on an annualized basis. We have an incredible observability platform that delights and delivers extraordinary value to customers.

I'm biased, but we have a great team, a great leadership team, and a great company of people delivering it. We are motivated, passionate, and focused on delivering extraordinary customer value as the world moves ahead with its criticality of software as we look to the future. Jake, back to you.

Jake Roberge
Research Analyst, William Blair

Thanks, Rick. Really appreciate that. A really helpful overview to kind of set the stage here. I guess just to kick things off, a common feedback point that I get is: Dynatrace, great company, operating in a large market, but there are other large players in there. Maybe, Jim, I'll throw this over to you since Rick just did a presentation. Let him take a breath. I guess first of all, how do you look to compete in that type of market where there's other large players?

There has been a lot of acquisitions in the observability space with Splunk, New Relic, Sumo Logic. Has competition changed at all over the last few years as a result of those acquisitions?

Jim Benson
CFO, Dynatrace

Yeah. I'd start with the fact that you have a lot of players. Given a lot of the things that Rick talked about, this is a very large spend area, so it is not surprising that you have multiple players. I think it actually is to our benefit. Some of the things that Rick talked about are really playing out as a kind of continuing trend. We talked about it maybe 18 months ago, this notion of companies dealing with tool sprawl for the very reasons that you outlined, that they have different divisions, different departments, all using their own tools, very difficult to manage.

Rick talked a little bit about using the BT example. That is becoming more a theme where customers just cannot deal with that anymore. We tend to focus on very large, complex environments because that is where we thrive. We are in a great position, one, with the architecture of the platform. All the things that Rick talked about, the platform being unified, the platform being AI-powered and enabled, these are all things that allow tools to be consolidated. You consolidate tools, you have capabilities now that you can save a customer money on software costs, and you can save a customer money in the way they are running their IT operations. I would say the environment, yes, there is a bunch of players.

I'd say we are in a really good position because what's happening in the broader market is a theme towards consolidation, simplification, and vendors that can integrate things and allow customers to have a better experience overall. I think what you're seeing is we're benefiting from that. We'll get into it a little bit. We've done some things on the go-to-market side to better go on the offensive to capitalize on that. We've done some things on the product and packaging side to allow customers to better leverage the platform than they have in the past.

Jake Roberge
Research Analyst, William Blair

Yeah, that's helpful. Rick, back over to you, thinking about GenAI, obviously a big topic in software land these days. How do you see GenAI impacting Dynatrace?

Maybe bifurcate it between both the workload perspective, where obviously GenAI is just another large workload moving to the cloud, as well as what you can do with GenAI in the platform from an agentic perspective and a product monetization perspective.

Rick McConnell
CEO, Dynatrace

Yeah, I think that's, Jake, precisely how I would bifurcate it. On the one hand, you have AI observability workloads. AI is being used increasingly by organizations. As they use AI, that's generating actually more software. Yeah. I talked about explosion of data, increasing complexity, and more and more software being developed more rapidly in the cloud. AI is further accelerating the rate of development of software, which is making the problem even worse and making the resource constraints even that much more difficult as well.

AI, from an AI observability workload perspective, is actually generating an increased need for observability and is further heightening the evolution of the market. Our solution works to oversee and manage those AI observability workloads in the same way that we would oversee any other software workloads. Secondly, it is about using AI in our platform and extending and evolving that platform to not just use causal, predictive, generative AI, as I discussed earlier, but also evolving it to use agentic AI to then take the insights and resolve those issues using agentic AI. One thing that is critical about this is really the differentiation of Dynatrace in that in order to take action on insights, in order to take action on your observability data, you actually have to be sure you know what the problem is.

As I mentioned earlier, a lot of other organizations will provide correlations. They provide, I'd say, educated guesses as to where issues are. Because we have Grail, a common data store, fully fleshed out, our AI engine can deliver insights that are deterministic. You can count on them. By being trustworthy, they can then be acted on through agentic AI. That's really the evolution of how we're using AI in our platform.

Jake Roberge
Research Analyst, William Blair

Okay, that's helpful. Just shifting over to the macro environment, it's obviously been a volatile macro over the past few months and even over the past year or so. Be curious what you're hearing from customers on the ground and then maybe how that potentially impacts some of those deals.

You talked about a lot of deals are trending towards these platform consolidation deals where you might be consolidating 10 or 15 different point solutions onto one platform. How does this more variable macro environment impact those larger transformative deals?

Jim Benson
CFO, Dynatrace

I think I can take that. Certainly there is no denying that the environment is dynamic. It seems to be dynamic daily and weekly. Having said that, the observability market is pretty resilient. It's a resilient area for all the reasons that Rick outlined. When you think about the underpinnings of pretty much any industry, even industries that you do not think of as being technology industries, software is kind of the core of what operates a lot of these industries. It is critical to have observability tools in place that allow you to manage your environments.

Now, the benefit in dynamic environments is for companies that can help you save money. I think that's why there's a theme also of tool sprawl, but also if I can consolidate tools, I can likely save money. I can also have my environment run more effectively. Even though we're in a "control what you can control" world, we actually think the area that we're in with observability is pretty resilient, number one. Number two, we actually offer something that's differentiated that's going to allow customers to save costs, which is important in the environment that we're in. People are looking for areas where they can drive more cost out.

Jake Roberge
Research Analyst, William Blair

Okay, that's helpful. Sticking with Jim, could you talk about your guidance philosophy for this year?

I mean, obviously last year, if we flash back a year ago, you set kind of the expectation that because you were going through a go-to-market transition, you would not be raising the guidance until after the first half of the year, when you got more visibility. Given the variable macro environment, but on the other side, maybe a more stable go-to-market this time around, how are you thinking about guidance and the pace of that throughout the year?

Jim Benson
CFO, Dynatrace

It's a good question. I mean, as you know, I'll start with this: we manage the business in a very measured way. That has not changed. That has kind of been a basic that we have done all along.

We want to ensure with guidance that we're factoring in what we know and that we're delivering a level of what I term prudence, which is conviction that we can execute against kind of the parameters that we've set. You're absolutely right that in a dynamic environment, there's tailwinds and there's headwinds, right? The tailwinds are, one, we have a sales model that is now 12 months into its maturity around what we put in place for fiscal 2025. Tailwinds are new product areas that we're getting accelerated traction in, logs probably most notably. Tailwinds are traction in the partner community, where more deals are actually being influenced by partners than ever before. There's a lot of tailwinds in the business. Then you have the headwind side, which is customers in an environment that is a bit dynamic. They are cautious.

They're still spending money, but sometimes deals take longer. Relative to guidance, what we've done is we've built in a level of thought process that says deals will get done, but it might take a little bit longer. We factor into the guidance an expectation that deal cycles might be somewhat elongated. To end with, I can tell you the pipeline trends very, very strong. You say, what kind of data points from a leading indicator do you see right now? Since Liberation Day, I'd say we have seen no change in our pipeline. Pipeline growth and health is unchanged. Close rates really are unchanged here in the near term. Having said that, a lot can change. We've tried to make sure that we can evaluate that.

Relative to increasing or changing guidance, as I think I said last year, 20% of our year happens in the first quarter. You get 80% thereafter. We're not going to know a lot more after Q1, so we'll evaluate then, but it is more likely we'll provide a more fulsome update after the first half.

Jake Roberge
Research Analyst, William Blair

Okay, that's really helpful. Shifting over to DPS, you've obviously seen really good adoption of DPS over the last year or two, now with 60% of ARR on that new pricing model. Can you talk about the early benefits that you're seeing with that transition and how you see it progressing over the next few years?

Jim Benson
CFO, Dynatrace

From the get-go, we talked about DPS when we first launched it, and we've basically been at it for two years now.

Q1 2024 is when we launched the GA. You're right. The stats are 40% of our customers, 60% of our ARR. The whole thesis was that a SKU-based model required a sale every time. If you sold, say, application performance monitoring or full stack monitoring or some suite of offerings to a customer and they wanted to try something new, they wanted to try logs, they wanted to try application security, it was a sales cycle. It was a pain point for customers. They loved the products. They didn't like the buying experience. The whole premise of DPS was to give them full access to the platform with a rate card. You commit to a term; most of them are three years, though it could be one year.

Commit to a term, commit to more dollars, and you get a better unit price; commit to fewer dollars, a lesser one. The premise of that was always that, hey, if we do this and customers are getting value, they'll consume more of the platform. They'll add more workloads. They'll trial new things. We've seen that. We provided some statistics in our Q4 call: customers that are on a DPS contract versus SKU-based leverage on average 12 capabilities on the platform versus five for SKU-based customers. They consume at 2x the rate of the SKU-based customer. They have much higher NRR. Early in the journey, I was very honest about, hey, maybe there's some sampling bias because you have large customers that maybe would have purchased anyway. When you have 60% of your ARR, there's no longer sampling bias.

There is just a behavior where you get them on the platform and now you can drive more adoption to accelerate your penetration within a customer. We have seen great traction with that. We expect we will continue to see more of that.

Jake Roberge
Research Analyst, William Blair

Yeah, those are good stats there. 60% of ARR, 12 products versus five products, expanding at twice the rate. That is probably why logs is benefiting so much as well. We are up on time, so I will stop it there. Thanks, Rick. Thanks, Jim. For those that want to dig deeper into the story, we are going to have a 30-minute breakout session up in [Mayher], and that starts in about 10 minutes.

Rick McConnell
CEO, Dynatrace

Thanks.
