Okay. Very good.
Okay. Great. Thank you, everybody. I'm happy to be up here on stage with Rick McConnell, the CEO of Dynatrace. Maybe some of you noticed that I launched coverage on the company just a week or two ago. So it was an effort. Probably took a little longer than I wanted, but we're finally out right ahead of the conference. So I'm happy to be part of your coverage crew, Rick.
Thanks for picking it up. It was an excellent report. If you haven't read it, I recommend you do.
Good.
Great, great piece of coverage.
I love when CEOs say that.
Thank you.
I can't get enough of that.
All right. That's good.
Okay, Rick. So let's start maybe high level. And maybe you could describe the environment that you're seeing out there. I think one observation that I have is relative to the first half of this year, when it felt tough for a lot of software companies. It seemed to me that this past quarter, companies with both September and October quarter ends put up results that were maybe a little bit better than expected. We didn't have a lot of blow-ups, thankfully. The beats were a little bit higher than normal. Tone was a little bit better. So it feels that things are at a minimum stable, if not slightly better. But I'm not sure whether that's how you would describe it. So I'd love to hear your perspectives.
Our perspective has been unchanged, which is essentially that macro environment isn't changing in any radical way.
Okay.
That said, I'm not sure what we gain by saying the macro environment is back, let's really lean in here. I think there's still just too much uncertainty in the world. Now, having said that, we certainly see some areas of potential strength. Hyperscaler numbers, now at more than $210 billion in annualized revenue for AWS, Azure, and GCP, and growing at 27% with accelerating growth, are a pretty darn good indicator of cloud opportunity to come. Obviously, we follow that closely as our customers consider moving from on-prem to cloud or increasing cloud adoption. That's a key metric. Consumption metrics we see from various different organizations are continuing to strengthen. And we're seeing those as well. Consumption of our platform is growing.
Especially for our, we'll probably get to it, but our DPS or our Dynatrace Platform Subscription customers are growing even faster than our core legacy pricing model type customers.
Okay.
So this does provide a good indicator, I think, of maybe some potential to come.
Okay. Let's double-click a little bit on your comment that we saw pretty robust hyperscaler growth. I've certainly noticed too, so did everybody in the room. I don't know if you look at it this way, but if you look at the sequential dollar adds for all of the hyperscalers, it was actually the largest ever, so something good is going on at the cloud infra layer. Maybe, Rick, the question I want to ask you is, how can we think about the correlation between Dynatrace and hyperscaler growth? Not all of your revenues are cloud, but a significant portion is.
Yep.
How would you qualitatively describe that correlation?
Maybe I can start with a few metrics. First of all, the majority of our revenue or majority of our ARR today is SaaS.
Yep.
So it is not on-prem. It's SaaS. I would say the vast majority of new logo adds are SaaS.
Yep.
So they're all cloud-based by definition. And north of 90% of our customers have at least one, if not more, cloud-native workload on Dynatrace. And so cloud is eminently critical for us.
Yeah.
And growth in the cloud portends well for the opportunity that we see at Dynatrace. And so it continues to provide a fuel for growth. We obviously host our capabilities in the three hyperscalers as well. And so we follow it very closely.
Okay. So let's maybe talk a little bit about some numbers from this quarter, Rick. So with that as a context, you put up some good ARR beats in the last couple of quarters. But one of the key debates around the stock right now is that by reaffirming the full year guidance given your last couple quarter performance, it implies obviously a deceleration in ARR growth for the next few. So obviously, a debate can take hold as to whether that's an indication of some emerging stress or whether it's something a little bit more company-specific in terms of seasonality. Do you want to address for a moment why we're seeing that ARR decel?
Sure. It does. We came out at the beginning of the year with a 15%-16% ARR constant currency guide, and we maintained it through the last quarter. We view it as a prudent guide.
Yeah.
At the beginning of the fiscal year back in the April, May time frame, we made a series of go-to-market changes.
Yeah.
And it resulted in about 30% breakage of accounts relative to our existing account executives. That is always a process that you just have to get through. We made those changes back in May. So at the point where we're finishing our second quarter at the end of September, you're sort of five months in.
Yeah.
As a result of it, a pretty sizable percentage of our reps, about 30%, were also at less than one year in tenure.
Yeah.
So we had these couple of factors to pay attention to: one, coming off a pretty material go-to-market reorganization in sales, combined with the addition of new reps, which, I would add, is ahead of plan, resulting in a higher percentage of reps with less than one year of tenure, and obviously lower productivity in that first year as a result. We just felt it was appropriate to be cautious.
Okay.
And prudent going into the second half. As I say, and we can get into this, Carl, but there are numerous tailwinds to the business as well that we are very excited about. But nonetheless, when you have that degree of reorganization in the sales force, we felt that was important. One last point on this, I would say, is that Dan Zugelder is our CRO. And he began back in July of last year.
Yeah.
His first real opportunity to make discernible change in his sales force was at the beginning of the fiscal year.
Yeah.
So he certainly isn't going to make those changes in November or December, coming into our hottest selling season of the year and when quotas have already been assigned. He's going to do it at the beginning of the fiscal year. So he and I spent a lot of time together, along with Jim Benson, our CFO, evaluating the kinds of changes we wanted to make. And we made those in May. We are delighted by the fact that we really saw no disruption during the first fiscal half and that instead we delivered a very strong fiscal first half. So we're optimistic, but those are some of the characterizations of how I would put our stance on the second half of the year.
And for those that might be less familiar with the Dynatrace story, Rick, if we could just back up, what was the primary impetus to make this go-to-market change? What's the opportunity that once we come out of this structural change that could provide potentially a growth acceleration for Dynatrace? What's the end state that you made these changes in advance of?
Several go-to-market changes. Probably the biggest was simply the notion that the amount of ARR coming from larger customers, sort of the IT 500, is much greater than what's coming from the customers that are doing, let's say, $100K in ARR.
Yeah.
That's where we're seeing it. I mean, as we look to Q4 of last year, we closed 18 accounts at $1 million-plus ACV, including our first nine-digit TCV deal. The result of it is, upon analysis, we sort of realized that, especially for Dynatrace, the money comes from the larger accounts. The allocation of our account executives coming out of the last fiscal year was not commensurate with that.
Okay.
As a result of it, we had to make some changes. We reduced some number of people that we didn't think were ready to become strategic account executives. We've since added many strategic account executives who are ready for prime time in the IT 500 and have promoted others internally. It's been a mixture. That's the biggest change. Another change is an orientation toward partners, which now account for well more than 70% of our overall ARR in terms of fulfillment. This includes hyperscalers, GSIs, and others.
This is an organization that has radically strengthened under the leadership of a person we added, I learned, actually just a year ago yesterday. Jay Snyder runs our partner organization. He's super, super strong and has built a great team. I'm very much personally involved in the GSI work we're doing, which could grow. And then the last piece is just a realization of an expansion of sales motions.
Yeah.
It used to be the case that our sales motion was pretty simple. We sold to ops, so if there was an AIOps team, we would sell to the AIOps team.
Yeah.
But that's beginning to change as you sell to more strategic accounts. The CxO has a lot more power. Many of those 18 deals I talked about in the fourth quarter of last year were approved by the CxO. Why? Because they need to drive tool consolidation. They recognize that in some cases they're using 16 observability tools and they need to have one or two or three.
Yeah.
They're paying too much money. They're not delivering productivity. The end user experience is terrible.
Yeah.
So the CxO is becoming more relevant. So that sales motion is a second one, added to the AIOps motion. And third and finally, developers are having much more say, especially in cloud-native environments. And so the developer is becoming more critical in our sales process. So those are sort of the things we've changed: increased segmentation, partners, and expansion of sales motions.
So that's good color on the sales and partner side. But Rick, what about on the product side?
Yep.
Because if you are going to move upstream and presumably displace some incumbent vendors, is there also a story where Dynatrace has made some significant feature improvements to the product that put you in a better position to displace some of those incumbents? And if the answer is yes, what are a couple of the feature improvements that are enabling you to win upstream?
The biggest by far is an expansion into log management.
Okay.
Log management has traditionally been a very small part of our business.
Yeah.
Our view is quite simply that logs should be an integral part of overall observability. It's interesting that our market has developed in some ways in a bizarre form, which is that observability is over here and logs are over there. You had the Splunks of the world managing logs, but then you had us and other observability vendors managing the observability framework of traces and metrics and real user data and others.
Yeah.
But you don't really get the maximum benefit from your observability environment without logs.
Yeah.
You sure as heck can't deliver world-class observability if you only have logs.
Yeah.
And so our view is that those worlds converge.
Okay.
We are at this point very much leaned into and focused on log management as an evolution of our business.
Yep.
We have delivered that solution deeply integrated into the observability framework. Logs are integrated into Grail, which is our underlying, completely integrated data store. That's another core component: it provides access for our AI engine to then process all of those data types in context together to deliver a more compelling answer. And the last thing that we've done with regard to logs specifically is change the pricing model. We believe that this world is very ripe for disruption. There is a pricing umbrella here, the likes of which I've really almost never seen before. At a customer of ours, an Australian bank, the CTO told me that the prices of, let's say, a major log provider were meteoric.
Yeah.
And unconstrained, meaning that to the extent logs are driven by consumption, you're consuming more and more and more.
Yeah.
The pricing is very high. They're just consuming IT budgets.
We've heard this about Splunk for five plus years.
Exactly. But the difference is they're now getting some competitors.
Okay.
This is an area that we at Dynatrace can attack. One of the things we've done is we've introduced an unlimited query pricing model for a fixed duration of time.
Yeah.
This is going to fix budgets.
Okay.
Around logs. So these are some of the elements along with pretty significant upgrades in our user experience. This is another major product development that we're making.
That's exciting. Rick, are you able to—I know it's early, but are you able to offer any proof points of success on the log management side? Any Splunk displacements? Anything you can share with us to get comfortable that it's working so far?
We already have 25% of our installed base of customers on log management with us. That's a pretty good proof point. In fact, from Q1 to Q2, the number of customers using log management with us grew at 25% quarter-over-quarter. The number is increasing. We absolutely have, as part of that, Splunk displacements and displacements of other vendors in the space.
Yeah.
The biggest catalyst looking forward is not only the addition of yet more log management customers, but the acceleration of consumption growth among the ones already using log management with us.
Okay. That's super exciting. I wish you the best of luck on that effort.
Yep. It's a major one.
Can we talk a little bit about AI, Rick? How you would [crosstalk]
I would be disappointed if you didn't raise it.
Let's talk about what Dynatrace's AI strategy is and how you can win.
Okay. So from our point of view, you have to sort of parse it into two buckets. I think it's the easiest way to think about it. The first bucket is the series of AI techniques and capabilities that we use to deliver observability to any of our customers. And the second is what you are doing for AI-native workloads themselves. In the first camp, we've been in the AI business, as I say, for well more than a decade. In fact, one of the biggest differentiators of Dynatrace, in my view, is what we would refer to as our Power of Three AI. It isn't just about generative AI. It's also about AI techniques, in particular causal AI and predictive AI, that we've been using for more than a decade. Now, causal AI is designed to generate root cause analysis.
It is the capability of finding the needle in the haystack, so to speak. It used to be so simple to write software and debug software. You know, in my early days, in fact, I was a programmer. I was programming in Fortran and C on IBM mainframes, and I was pretty sure that if the program didn't work, if the application didn't work, I knew exactly what the problem was: it was my fault.
Yeah.
Nothing else changed. Compare that to the environments of today, where you have a hyperscaler environment with all sorts of containers and libraries compiled in. You may not have even changed a line of code and something may have broken. It is getting harder and harder and harder to find the needle in the haystack, to figure out what is wrong with your piece of software. And this is where causal AI comes in, to enable you to very rapidly assess billions of interconnected data points and immediately get to the heart of the problem, to deliver answers and not just data and not just dashboards. And that really is our differentiator. Now, you extend that to predictive AI, which takes causal AI one step further and applies machine learning and anomaly detection to anticipate issues and eliminate incidents before they occur.
This enables us to reduce the number of incidents and radically improve MTTR, mean-time-to-resolve. So we have cases like British Telecom, where they deployed us. They radically reduced the number of tools that they were using. They came back and said they had reduced incidents by 50% and improved MTTR by 90%, and, by the way, they estimated that that would save them GBP 28 million over a three-year span. That's what observability is all about: elimination or reduction of incidents, improvement in MTTR, and resultant cost savings. And this comes not just from generative AI. This comes from causal AI and predictive AI, and then generative AI is essentially a natural language translator into the causal and predictive AI base, to be able to bring the platform to a broader audience.
So this is all on the first side, if you will, of how we do our job for all customers.
Yeah.
The second piece, which I'll try to be more brief about, is AI-native workloads, which are accelerating the number of workloads that need to be observed. And so from our standpoint, first of all, our platform is ready-made to handle AI-native workloads today. We can analyze all those workloads, and we have customers coming to us. In fact, we looked at a metric the other day: among our top 50 customers, over 75% have announced GenAI initiatives. And so they're all doing it, and they're all going to be, presumably, hopefully, using Dynatrace to observe these workloads, which are expanding faster because of the inherent productivity that AI brings.
Yeah.
And so in both sides of the AI equation, Dynatrace participates.
Is it fair, Rick, that investors monitoring this potential pull-through, as AI workload growth accelerates and pulls along Dynatrace, can follow the pace at which enterprises are taking their AI-native workloads into production? And if that's a good proxy, maybe it'll be a delayed lift to Dynatrace. But what's your perspective, Rick, on where the Fortune 500 is on taking those AI-native workloads out of POCs and beginning to roll them out? Early, I'm sure, is your answer. But how early? Is it happening?
I would say that in the case of AI, we've, of course, seen the early victors here: NVIDIA on the chip side, Microsoft and OpenAI on the LLM model creation side. In the enterprise, for actual use, I think we're in the top of the second inning.
Okay.
You know, I mean, we are just beginning to scratch the surface. And interestingly, I don't say that in a negative way. I say it in a positive way: AI is one of the biggest game changers in productivity that we've seen in at least my career and my personal history. It is going to be explosive, and we've barely scratched the surface.
Rick, this might be a little.
So there's a lot to come.
This might be a little bit too nuanced, but are there certain types of AI-native workloads that have more of a natural pull-through to Dynatrace? And I say that because a lot of the AI investments we're hearing about are things like intelligent bots, where there are agents doing very, very discrete functions. I don't know whether there's a natural need to observe a single agent. So are there types of AI workloads that you're watching more carefully because the pull-through effect is more profound?
I wish I had sufficient clairvoyance to answer the question to some extent.
Okay.
To say, wow, you know, we only pick these AI workloads. What I would say, to answer it from my point of view, is that the two most pervasive AI use cases I hear about from our customers globally are, number one, customer service, and number two, app construction.
Okay.
Software development.
Yeah. CodeGen.
On the customer service front, we would tend to provide observability capabilities to the provider of the software that is using AI to do customer service management. We have customers in that area. They're going to need to be the ones working their inference models and their LLM insights to evaluate whether or not the bots are actually giving data that isn't erroneous, for whatever sets of reasons. And we're not in the business of doing that. Observability vendors generally, it isn't just Dynatrace, wouldn't be in the business of saying, oh, the data that you're providing on a chatbot is wrong relative to what a customer success manager would provide. We can tell you if your software's working. Is it up? Is it operational? Is it performant? You know, all of those elements. And so we'll provide those capabilities.
On the application software development side, you bet. You bet, in terms of observability becoming mission critical, because now you're relying upon a generative AI engine to actually write code. And in that case, do you really know, at least in the early days, whether or not that code is operational? And to the extent you believe that that is probably going to result in a little bit more challenge, at least initially, in code being of high efficacy and high quality, you may have more issues. And if you have more issues, you're going to find them through an observability framework. And so, especially on that side of the equation, I think that observability becomes even more mission critical than it has been heretofore.
Let's move off AI in our last five minutes and talk about a few other topics, one of which is the competitive front. Many in the audience joined me for the session just prior to this one, with Datadog. So let's hit right on Datadog. So Rick, when you are up against them head-to-head, I guess the simple question is, why do you win? I can imagine, based on at least the initial customer checks that we did, that it's some combination of APM being a power alley for Dynatrace, and that's pretty critical, and also the fact that you can be deployed on-prem and cloud, that you're hybrid. Are those the two key ones, or are there others that you would add to that list?
Yeah, I definitely would add to the list. As you might imagine, I would. The hybrid piece is absolutely key.
Yeah.
The vast majority of financial institutions globally use Dynatrace, and the reason they do is because they are all going to have on-prem workloads for quite some time to come, and many of them want a single observability tool, and so the result of it is they use us for SaaS and us for on-prem, and as they move from on-prem to SaaS, they have the same observability tool so they can do it at the speed that they wish.
Yeah.
I would say the bigger elements of differentiation for Dynatrace are really threefold. Number one is contextual analytics provided by a single underlying data store, Grail in our case. Having all data in context: logs, traces, metrics, real user data, behavioral analytics, all of these elements, plus metadata, that can be analyzed not at the UX level but at the underlying core platform level enables analytics to be done in context with causation, not correlation. And so you're not making predictive guesses. The answers are deterministic. You know this is the issue. And based on that data, you can apply a solution. Now, if you're a single-app company that's a startup and you just need a dashboard, it doesn't matter much.
But if you're the largest airline, one of the largest airlines, one of the largest hospitality vendors, one of the largest financial institutions, you name the sector, the largest organizations are back to that needle in the haystack problem.
Got it.
And they need to find it. So that's one. The second one is AI. Everybody's talking about generative AI. That's table stakes, or I think at the moment is rapidly becoming table stakes. Causal and predictive AI are unique in reducing incidents and improving MTTR. That's a big differentiator. And third and finally is automation. By having the first two components, you can actually automate the outcome.
Yeah.
This results in much better software workloads and software performance for customers than you would otherwise get.
Helpful answer. Thank you. We've got time for a question or two from the audience if you'd like. No? I'll ask a final one then. So Rick, on the growth/margin trade-off, maybe you could articulate how you as the CEO think about that: delivering steady margin gains, but at the same time investing to take on Datadog and go after this opportunity that we talked about a little earlier.
Yep. The short form is we grew operating margins by 300 basis points in the last fiscal year. We're not planning to do that again this fiscal year, to be clear, but we have no plans to reduce operating margins. We expect to continue to be able to deliver modest leverage in the model as we look ahead.
Okay.
We believe that at our growth rate, that leaves sufficient OpEx to plow back into the business to make sure that we're making the product and go-to-market investments needed for ongoing growth.
Okay. Awesome. Why don't we end it there? I'm thrilled to be covering your name. Thank you for coming to the event.
Thank you.
I'll be at your big conference in February to try to learn a little bit more.
That sounds great.
Okay.
Thanks, Carl. Appreciate your comments.
My pleasure. Thank you.
Thank you all for coming.