Good afternoon, everyone. My name is Stefan Schwartz. I'm on Andy Nowinski's team here at Wells Fargo, covering cybersecurity. I have here Rick McConnell, CEO of Dynatrace. Welcome.
Hi, Stefan. Thanks much. Thanks for having us.
So, you know, now let's start with talking about how your September quarter went. It was a solid quarter. I want to ask you about cost optimization trends. It's been an ongoing discussion for a while now. I know you guys have more of a committed spend model, but how have cost optimization trends impacted Dynatrace this year?
Well, generally, we've been pretty consistent, Stefan, in saying that we actually believe cost optimization trends in some ways benefit Dynatrace. Clearly, it impacts consumption trends. But from the standpoint of optimization specifically, our view is that observability really assists the optimization process. So we view that our customers can use Dynatrace as a mechanism of improving optimization.
Yeah, that, that makes a lot of sense. But have you had to adapt your sales motion at all, with the change in macro, kind of reoriented around a value-based sale, or has that kind of always been part of your pitch?
Well, I think there are two answers to that question. The first is related to value-based selling. I would say that's what we've done from the beginning. The notion of Dynatrace is really one around delivering software that works perfectly. So if we look at what we aspire to deliver, it is precisely that on behalf of our customers: we enable them to deliver software that works much better with our observability solution than it otherwise would, and that this is really mission critical. So that's really, I'd say, most of the answer.
Got it. You know, anything that you can tell us in terms of how things are looking on that cost optimization front or any trends going into the December quarter?
I guess my view of cost optimization is that we are probably past the most challenging part of that curve. But my view is that cost optimization is not magically going to cease to be an issue. It's going to persist at some level for probably quite some time to come. So I think this is going to be top of mind, and I think that's consistent with the macro environment we look at today. For example, you asked about macro and how we sell. Well, yes, we sell value into a macro environment, but the other element to macro is that we, of course, like pretty much every software company selling to the enterprise, particularly at this point, have had to adjust our selling motion, expectations for forecast closure, and pipeline coverage ratios for the macro environment.
This really happened for us in the middle of 2022. We had to make those adjustments, we've made them, and we've now adjusted our forecast and pipeline models accordingly.
Got it. I want to ask you more about those adjustments. Just real quick, if anyone in the audience has any questions, please feel free to raise your hand and we can work them in. Why don't I ask you about some go-to-market changes that you guys have made? You talked about implementing investments in GSIs and hyperscalers last quarter, in addition to increasing sales capacity and marketing. I guess starting with GSIs, you previously had 10 strategic partners in place. Is this a new investment phase designed to increase your reach with the GSIs?
Well, with respect to GSIs, we've been talking about them as a core investment area for some time, so nothing new. We had the top 10 GSIs as partners previously, which we've announced in prior quarters. This is an acceleration of our level of investment with the top partners, such as Accenture, Kyndryl, and others. So this is really very consistent with what we've been talking about. Our thesis around the GSIs, and why they're important to us, is quite simply that traditionally, observability has been sold as a delayed purchase relative to the initial digital business transformation purchase. So you might migrate to the cloud or do cloud modernization, and then 18 or 24 months later, you might decide, "Wow, I need this environment to work better." And then you would go through the evaluation process for observability.
With GSIs, it is our expectation that the observability decision should happen at the same time as that cloud modernization project. If it does, then that radically accelerates the deployment of observability, and it should also render that cloud modernization project much more effective. So by accelerating observability into the process, we obviously pull a potential order for Dynatrace forward, but we'd also like to believe that we're delivering a much better instantiation of the cloud environment they're constructing, and therefore, among other things, assisting them in cloud optimization from the outset.
Right. I would imagine those are larger deal sizes then?
Exactly. Yeah, they're going to be larger deal sizes as well, because many cloud modernization programs or initiatives are going to be at least eight-, if not in some cases nine-digit deals. We had a deal that we actually talked about on one of our calls a few quarters ago, down in LATAM. It was a $100 million-plus deal for the GSI. Now, we were a few million dollars of ARR within that, but this is an example of a deal that we probably never would have seen otherwise, or at least wouldn't have seen for a couple of years. Instead, the GSI brought us in at the outset.
They get a better deployment, and the deal happens at the same time the cloud modernization deal happens, so it moves it well forward in the process.
How does that impact sales cycles? Is it, is it actually moving faster because you have the help of the GSIs, or?
I would say yes, but maybe for a different reason than you just described.
Okay.
And part of the reason is quite simply that, as I mentioned in the example I just gave you, that deal we wouldn't have had any visibility to at all.
Right.
So let's say it was a large bank in LATAM. We eventually might have been brought in, but our sales cycle with that bank, in that example, wouldn't even have begun for probably two years. Then we might go through a 12-month sales cycle, assuming we would win it, and we'd be three years out. Versus being brought in at the outset of the digital transformation deal, where you get deployed from the start. And by the way, that deal is already in the pipeline for an upgrade. So you then begin the process of expanding NRR on that particular customer as a result of being part of that initial deployment.
Got it. So it sounds like it's accelerated your visibility in the market, and you're-
Yes. I mean, the one thing that I wouldn't want to overstate is that GSI deals for cloud modernization aren't fast. They are not rapid deals; they take a while. But they're worth it: they're larger, and they have the opportunity to start our sales cycle earlier in the process than what we might otherwise normally see.
Understood. How long are your average deal cycles? Is there a range that you've given?
It's in the 6- to 12-month range for a typical sales cycle, depending upon, frankly, the customer's knowledge of observability.
Got it.
If a customer is already using a number of observability tools and they're looking to consolidate, those deals may go faster because they already understand where they wanna go, what they wanna be doing, and they'll bring us in more rapidly. The larger the deal, oftentimes the longer it takes.
Moving on to sales capacity. Can you talk about the kinds of investments you've made there and what you're targeting after this investment phase?
We've gotten asked this question a number of times, even today here at the conference.
"Okay, you're adding some sales reps. Does that mean you think macro is magically fixed?" And the answer is no. We don't believe that's true. In fact, we've said that through the balance of our fiscal year, we're not making any changes to our model's assumptions on macro. So then the question is, okay, well, what gives you the confidence to be investing in more people? And I would say the answer is twofold. The first is that we believe there is more capacity for spend in our installed base than we currently have visibility into, and we see this from the amount of spend coming out of our major accounts.
But we believe that through things like territory management, coming into our new fiscal year, which begins in April, we want to be well set up to deploy our reps in a way that optimizes the opportunity even within our installed base of customers, which doesn't have as much to do with macro. The second piece is, of course, that it takes three to six months to ramp a sales rep even to a rudimentary level. So we do want to make sure we get ahead of that curve and are ready at the beginning of our fiscal year for that kind of ramp.
So, is it fair to say that these investments are both in response to demand that you're seeing and also in anticipation of future demand?
If I were making the investment exclusively based on an assumption of some improvement in macro, I'm not sure we would do it. But the opportunity that we see in pipeline creation in our installed base suggests that there's a there there, even without macro improvement. Layer onto that the notion that someday macro is going to improve, and we will be better set up to take advantage of it.
I see.
So it's really a combination of those factors.
Okay, and then same question for marketing. Why, why are you investing there?
Well, we need pipeline generation.
We still want to be generating top-of-funnel pipeline, and it is really oriented toward that. All of these investments are really designed to drive the combination of pipeline generation plus flow-through deal velocity. And we believe, based on the pipeline generation we're seeing, that there's an enormous opportunity to make these investments and have them pay off. So that's why we're making them. Some will take longer to generate a return; GSIs are not going to convert overnight. But there is no reason in principle that we can't build pretty sizable practices with some of these major GSIs over the course of time, and that's the kind of investment we want to be making now.
One more question on the sales force. I think you mentioned three to six months for them to ramp. If you look at the sales force now, where would you say they are in terms of being fully ramped?
If you look at our cohorts today, we've actually seen a lengthening of tenure. About six to nine months ago, we cut back on the number of salespeople we were adding, or at least the rate and pace of adding them. Meanwhile, attrition fell. The combination of those two factors resulted in average tenure actually increasing, so our two-plus-year cohort of salespeople actually grew in number. And we see, of course, higher productivity levels out of a two-plus-year cohort than a brand-new one, so the good news is that sales productivity actually increases as that tenure lengthens. That's great, and that's where we sit today.
Now, of course, we want to be adding more capacity, and we want to get those people onboarded, so that'll create some adjustment, as we discussed.
Okay. I want to move on to AI. Obviously, a very popular topic. You've had AI embedded-
We have 20 minutes to go, so that's the last topic you're asking me about, right?
Of course. So you've got the launch of Davis CoPilot coming, I think at the end of this calendar year. What should we expect from it? And as you look at the competition, how does it stack up?
Well, the topic of AI is a very germane and important one for Dynatrace, because we've been delivering a solution differentiated by AI for over a decade. So when we talk about AI in the context of Dynatrace, it's not new to us. This is something we've been doing for a long time. Until now, in fact, it hasn't been about generative AI; it's been about causal AI and predictive AI, two alternative AI techniques. Our view is that generative AI, in the context of a natural language interface, is going to be table stakes. Everybody's going to have it. So everybody in observability is going to say, "I have generative AI." They may shorten that to, "I have AI," and that is going to provide a natural language interface into their underlying data set.
The biggest differentiators for Dynatrace in our market are threefold. Number one, a completely unified platform, from the data store at the core all the way through the platform architecture. This isn't integrated at the UI layer; it's integrated at the core of the platform. The second is AI. AI enables us to deliver very precise answers and automation from data, not just dashboards. And the third piece is automation itself: once you trust the answers, you can automate your response, and this results in a lower number of incidents and a faster mean time to recover from issues. You lower your incidents because you can predict them better, and when they do happen, you have better insights that allow you to reduce the impact or duration of those incidents. Those are the three differentiators, and the middle one is AI.
That is a differentiator. Now, why is that? Well, causal AI enables us to get to root-cause analysis very quickly using a very precise, deterministic brand of AI. It gives you answers based on very precise data. Predictive AI then takes causal AI and applies machine learning to anticipate where issues might occur next. Both of those AI techniques have been used in the platform for a long time, and what's critical is that they are not oriented just at productivity improvement; they are the elements of AI giving you the precise answers. Generative AI, in turn, is oriented toward productivity. Whether it's writing code or improving an organization's customer support approach or whatever it might be, it is oriented at accelerating productivity. But generative AI is only as good as the underlying data set.
In our case, the underlying data set for Davis CoPilot, which is our generative AI solution, comprises the outputs of causal and predictive AI. The result is that you know with certainty that the answers you're getting from generative AI, from CoPilot, are deterministic, empirical, and actionable. And it is the combination of these three AI techniques, causal, predictive, and generative, that enables us to use AI in a way that is highly differentiated in our market.
That's a very comprehensive answer.
Yeah. It is a critical topic, because I would say not all AI answers, or AI platforms, are created equal. And AI is a special differentiator for Dynatrace.
Understood. And I guess to your point, the natural language aspect of it is table stakes, but your differentiators are the Causal and Predictive AI. Is that fair?
That's a good start.
Right.
But I also wouldn't stop just there.
Because, yes, CoPilot can use natural language to provide the connectivity to causal and predictive AI, but it isn't a unidirectional system. So, for those of you who've used ChatGPT or Bing or whatever you might have used for generative AI: you ask a question, you get an answer, and then you adjust your question based on that answer, and maybe ask another question. And so it goes until you get what you think is the best answer. In our case, we've designed our hypermodal Davis AI solution to be bidirectional. So it turns out that the causal and predictive AI elements of the Davis AI engine are providing feedback to CoPilot to then ask the next question.
So you're actually getting the benefit of the first response feeding into the next question. It is, again, this interwoven connectivity of these three elements, or techniques, of AI that provides the power. They each have, in many ways, their unique attributes, but together, the whole is absolutely much greater than the sum of the parts.
Right. And you talked about the importance of data in powering that AI. Maybe you could talk about, well, first, Grail, the new underlying data store that you implemented, and how that improves the product overall.
Well, the first differentiator that I talked about of the three I mentioned was the unified platform. So: unified platform, AI, automation, these three. In the case of the unified platform, the place to start is with a unified data store; that's really what you need. And we simply didn't find what we wanted in a unified data store on the market. So about four and a half years ago now, we started designing Grail to do this. Grail is a massively parallel processing, indexless data store that keeps all data types together in context. To do observability right, you actually want to be accessing and analyzing all data types: traces, metrics, logs, behavioral analytics, metadata, all of these elements. And if you have to manually tag all these pieces together, it becomes impossible to manage over time.
So we constructed Grail as a common data store for all of these data types while maintaining context, so that if something goes wrong and you need to get to root-cause analysis, and you know where it started to degrade, you can analyze the traces, metrics, logs, et cetera, all at once, knowing that all of those elements are maintained in a single data store with context. And that gives you incredible power in the AI process. That's why Grail is so critical, so foundational to our architecture, to deliver the answers that we deliver.
What's customer feedback been like on that Grail back end so far?
Well, in some ways, Grail is transparent to end users, our customers, because they might come back and say, "Wait a minute, what do you mean I'm running Grail?" We would have changed the underlying data store at AWS, for example, to Grail, and they may never have noticed. Our AI engine may be giving them better answers, but it would be hard to know because we've already made that change in the background. The place where a customer would see the difference is with Logs on Grail specifically. Grail itself is simply the underlying data store; that's the technology we use. Logs on Grail is an incremental capability that you would buy from us to deploy a log management use case.
And so that's where we now have a few hundred customers already, in very short order, using Logs on Grail, effectively managing their logs on top of the Grail architecture.
I wanna ask you more about Logs on Grail, but just real quick: I know you've got that Grail back end for AWS, and I know you're looking at implementing it for Azure. Is that still on track? And can you maybe size that opportunity for us?
Well, those are two different questions. The first part is, is it on track? And the answer is yes, absolutely. We plan to deliver Grail on Azure by the end of the calendar year in limited availability and preview, and by the end of our fiscal year, end of March, for broader production deployment, just as we announced. So all on schedule. That's the first piece. In terms of sizing the opportunity for Azure, I would simply say again, Grail is a data store. Getting Grail on Azure will enable us to replace the data store we've got with Grail, and again, most customers wouldn't know. In terms of Logs on Grail, you can deploy it today on AWS.
In the future, you'll be able to do it on Azure as well.
Understood. Okay, so going back to Logs on Grail, I think you have about 300 customers using it?
Yep.
Is that correct?
That's right.
And so were these customers previously using your, your older version of Dynatrace Logs, or were they new to Dynatrace?
I would say the vast majority of those customers are existing Dynatrace customers, not completely new logos. New customers coming on board are certainly going to be deploying Logs on Grail for the first time, but they're probably not, in general, deploying Logs on Grail alone. They're probably deploying our broad-based, unified platform, because on the order of two-thirds of our new customer deployments are three or more modules. So if a customer deployed Dynatrace anew, it could include Logs on Grail, but it would also include other elements as well. So that's how to think about it: the vast majority of those 300 would be existing Dynatrace customers adding Logs on Grail to their existing portfolio of modules.
And then, in select cases, there could also be new logos that have deployed our broad-based, unified platform to include Logs as well.
Would you expect Logs on Grail to become more of a SIEM replacement for customers in the near future, or is that further out?
It's a bit further out, but we do anticipate that the SIEM market overall will go through a logical evolution over the course of the next, let's say, couple of years. It is certainly our view that the SIEM space has really grown up around analyzing logs. We believe, though, that in the same way observability is going to combine the log data type with these other data types, traces, metrics, et cetera, so too will SIEM use cases. And we think you can actually deliver a better SIEM over the course of time by having a lens into more data types. So we think this is a natural evolution.
Okay. Staying on the topic, there's been some M&A in the space of late, in particular the acquisition of Splunk. I guess from your perspective, have you seen any disruption to that Splunk installed base, and maybe some of those customers taking a closer look at Dynatrace?
I would say it's too early to tell. It's very, very early in this process, but anytime there is M&A activity like this, there's a bit of disruption. So, I certainly don't want to overstate it, but at the same time, I think it's very fair to say that this kind of acquisition, especially given the enormity of scale of the acquisition of Splunk by Cisco, creates some entropy, and that disruption absolutely could lead to some opportunity.
In terms of competitive dynamics and your win rates, have those been relatively stable in the last three to six months?
Yes, no fundamental change in win rates. This past quarter, for example, we talked about closing 10 new logos that were greater-than-$1 million TCV deals and were competitive, direct competitor takeouts. And I think the dynamic around that is typically that customers ultimately want to move from dashboards to answers. When they get to the point that dashboards are not delivering sufficient value to enable them to reduce incidents, or improve MTTR, or achieve some other business objective, then they oftentimes will bring in Dynatrace. Another element is just tool consolidation.
I was on a call recently with the SVP of a very large Fortune 50 type company, and his response was, "We have one of everything." And yet they want to save a dramatic amount of money and get much more efficient about their approach to a variety of areas, including observability. And so this is a great example of where we can add real value there.
You've made a number of references to having a platform and to tool consolidation. But as you look out, do you see this becoming an accelerating trend, with customers moving more toward a platform versus different point products? And I guess the second part of that question: how do you see your platform stacking up against the competition?
Well, on the first point, I absolutely do believe that companies are looking for platform deployments that work well together. That is a core element of our thesis. Now, we are happy to deploy Dynatrace for whatever specific use case you want. You want application performance, we can do that. You want log management, we can do that. But as you see, based upon the buying trend, which is to have two-thirds or so of even our new customers deploying Dynatrace for three or more modules, the trend is clearly: give me a platform that effectively solves multiple problems, or, maybe better put, addresses multiple use cases. So that's where customers are taking us, and in fact, where we're taking them.
As I mentioned earlier, a unified platform is certainly one of the areas where we believe that Dynatrace has the biggest differentiation in our market. So if we find a customer or prospect that wants a broad-based platform, then sign me up for that POC all day long, because we can easily go head-to-head and win well more than our fair share of those kinds of deals.
Got it. I think we had a question over there.
Yeah. Did you ever see Lightstep competitively before they were bought by ServiceNow? When you do talk to ServiceNow competitors, [Inaudible]
Great. You want to repeat the question, or you want me to?
Yeah, I think... Sorry, there's a little noise from back there, but I think the question was: did you see Lightstep before they were acquired by ServiceNow?
Yeah. So on the question of seeing Lightstep before ServiceNow: no, we really didn't see them at all. And for what it's worth, we don't see very much of them still today. While I know that ServiceNow, in their call comments and their general posture, talks a lot about observability and its criticality to their overall strategic direction, frankly, at the customer level, we're asked to partner with ServiceNow more than compete against them. And it is the comprehensiveness of our observability platform integrated into the ServiceNow platform that many customers, I would say most customers, really seek.
Got it. I guess with the last minute or so that we have left, I want to ask you about your thoughts on 2024. You've got more products, you've got the Grail back end, which is expanding, you're making more go-to-market investments, and maybe cost optimization is coming to an end at some point. How do you think about the setup heading into next year? What are you excited about? Any thoughts looking ahead?
Sure. I mean, I am most excited in many ways about the overall observability market. One need look no further than the growth of companies in it, such as Dynatrace, which last quarter grew ARR 24% in constant currency and subscription revenue 26%, with a 30% operating margin. This is really strong financial performance, and I think it is indicative of a combination of a very strong market for observability and, I'd like to think, solid execution on our part to deliver real, meaningful value to customers. That's what gets me excited. We have a great market: a $50 billion market for observability and application security into which we're selling. We have a terrific set of products and very high gross retention; it's rare that we lose large customers.
We have a terrific opportunity for ongoing growth in our product portfolio, with not just our existing core products and platform but also additional elements like Logs on Grail, application security, et cetera. And we're investing on the go-to-market side to take advantage of that, in areas that, Stefan, you and I just talked about, like GSI and hyperscaler investments and additional sales capacity. So we're very excited about the opportunity ahead. This is why we were able to beat and raise last quarter in an ongoing difficult macro environment. So, one step at a time.
Great. Thank you so much, Rick, for joining us.
Thanks very much. Thanks for coming, all. Bye.