So welcome to Scotiabank's inaugural tech conference. I'm Patrick Colville, a lead software analyst here at Scotiabank. And for those of you to whom Scotiabank's a new entity, it's a 90,000-employee organization, $35 billion revenue, one of the 10 largest banks in the Americas, so, you know, pretty meaty. The company has been in U.S. capital markets for a long time, but not in tech, and I'm here to partly help spearhead that. So excited to be here, and thank you for supporting us. Okay.
Thank you for having us.
Yeah.
Thank you. Yeah.
So, on stage with me today, we've got the pleasure of hosting observability OGs, but also the fountain of youth, Dynatrace. So we've got Rick McConnell, Dynatrace CEO, and Jim Benson, Dynatrace CFO. Thank you so much.
Thank you.
Rick, could you start off by just introducing the company, and, you know, what is observability?
Sure. Observability, in my view, is about making software, or enabling organizations to deliver software that, in our words, "works perfectly."
Mm.
Because after all, as end users, we all expect software to work all the time.
Yeah.
At least in my observation, when it doesn't work, we get more and more and more frustrated about that. Companies need to deliver software workloads that work perfectly, and that's what observability is really all about.
Mm-hmm.
Now, the way we do that is to participate in what is really about a $50 billion market for observability and application security that essentially analyzes all data types, logs, traces, metrics, events, real user data, behavioral analytics, in real time around the clock to provide insights and analytics as to where software is working and where it isn't. And then provide a predictive element to that, to actually anticipate issues before they occur, so we can reduce incidents and reduce the amount of time it takes to repair incidents once they happen. So that's what we do. In Dynatrace's case, the secret sauce, if you will, is that we differentiate in three areas in particular. One of them is by providing a completely unified platform that analyzes all these data types in unison.
We have built a platform that is specifically designed to analyze and integrate all elements together, and therefore provide better insights and answers, rather than just dashboards and alerts. So answers versus dashboards. Second is AI. We didn't start delivering AI six months ago or a year ago. We've been working on our Davis AI engine for more than a decade, and we can talk maybe more about that as we get into the discussion. The big differentiator is using AI to get to this level of precision of answers that otherwise you can't get to. And then third is automation. In this day and age, you simply cannot have an army of people staring at a sea of screens, looking for the needle in a haystack. You've got to find ways to allow for automation of those workloads.
It is through this process that automated observability can really deliver against that promise of software that works better than it otherwise would.
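[Editor's note] As a toy illustration of the general idea, not Dynatrace's actual implementation, automated observability boils down to continuously scoring incoming telemetry against a learned baseline and surfacing answers instead of raw dashboards. A minimal sketch, with all thresholds and numbers invented:

```python
from collections import deque
from statistics import mean, stdev

class AnomalyDetector:
    """Toy rolling-baseline detector: flags points far from recent history."""
    def __init__(self, window=30, threshold=3.0):
        self.history = deque(maxlen=window)
        self.threshold = threshold  # number of standard deviations

    def observe(self, value):
        anomalous = False
        if len(self.history) >= 10:  # need some baseline before judging
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) > self.threshold * sigma:
                anomalous = True
        if not anomalous:
            self.history.append(value)  # only learn from normal points
        return anomalous

detector = AnomalyDetector()
# Steady latency around 100 ms, then a spike to 450 ms
readings = [100 + (i % 5) for i in range(30)] + [450]
flags = [detector.observe(r) for r in readings]
print(flags[-1])  # the 450 ms spike is flagged: True
```

A real platform would do this across millions of series with far richer models, but the shift it illustrates is the one described above: the system raises the answer, rather than a human scanning screens for it.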
Thank you so much, and, you know, really exciting. Jim, do you wanna quickly recap 3Q results? 'Cause we had that a couple of weeks ago, you know, very healthy set of results. Net new ARR was strong, lifted guidance, if I'm not mistaken.
Yep.
Do you mind just, I guess, recapping and pulling out some of the highlights?
Sure. So it was our second quarter. I know it's kind of strange because our year ends in March. So, so we had a very strong second quarter and a really good half. And it was highlighted by a continued theme that I think investors have come to like about Dynatrace, which is this theme of balanced growth and profitability, and that's what we delivered in the second quarter, where we delivered kind of mid-20s ARR growth. And actually, in the case of our second quarter, we actually delivered 30% operating margin, so actually even a little bit above than what we had expected. So you get a very strong, you know, rule of 50+ company. And the quarter was highlighted by, you know... Yes, the macro environment continues to be challenging and uncertain, but it's just continued strong execution.
We had particularly strong execution in the second quarter with new logos. The weighting of new logos in our net new ARR was the highest it's been in quite some time. I think it speaks to the growing opportunity that customers have where they're trying to consolidate multiple solutions, could be DIY tools that they're using, and we've been able to capitalize on that. So a very strong, balanced quarter on growth and profitability, kind of a rule of 50+ company, particular strength in new logo acquisitions with relatively large land sizes, and we were very pleased. And as you indicated, we increased our guidance for ARR, revenue, operating margins, and free cash flow for the full year.
And so we saw enough signals from not just the quarter, but our pipeline health, to increase guidance. You know, we increased it 125 basis points on the bottom line. And so just a continued very healthy story. And we talked a bit about-
... making investments in the back half of the year focused on go-to-market, customer success coverage, in particular. We can talk about that maybe in a little bit. And so I'd say, while the market is uncertain, we are delivering very well and are quite optimistic about the longer-term opportunity for Dynatrace in the ecosystem.
Yeah, I mean, let's definitely talk about profitability. Let's come back to that and investments. I guess, just to kind of take a step back, so observability is about 3% of infrastructure spend in the enterprise. What is the incremental opportunity for the market and then for Dynatrace specifically?
Well, from my point of view, observability has barely started. In many ways, using the baseball analogy, I think we're in the bottom of the second inning. You know, it's really early days.
Mm.
Yet what I hear from customers over and over and over again at the CXO level is that Dynatrace, for example, is mission-critical, that they couldn't run their businesses without the assurance that their software is delivering as they expect. So where we do have deployments, we're usually, I guess, maybe 20% to 25% deployed. We've only just begun. And if you look at the cloud ecosystem, just the three hyperscalers, AWS, GCP, and Azure, are doing on the order of $200 billion in annualized revenue right now, and the observability market is a fraction of that. So we believe that there is enormous runway in new customers, and even expansion within existing customers, for much more significant deployment.
You know, big opportunity. Why would a customer choose Dynatrace? I mean, what makes Dynatrace unique? You kind of touched on this earlier, briefly-
Yeah.
But, you know, I want to double down. Like, you know, the firms that many will be familiar with, Datadog-
Yep.
Cisco, Splunk, New Relic. Why choose Dynatrace versus those peers?
As I covered earlier, and maybe I'll go a little bit deeper this time: unified platform, AI engine at our core, and automation. That's how we look at our three differentiators. Now, let me talk a little bit more about AI in particular. So why is AI important for observability? The primary reason is that in today's model, it is often the case that you have a network operations center. In a network operations center, you have a lot of people looking for alerts. Those alerts fire, they see something go red, and then they start to troubleshoot. How bad is it? Where's the problem? Who do I call? Once I call them, how long does it take to get it fixed? Very imprecise.
What we try to do is use AI to constantly look at that infrastructure and application set to define precisely where the issue is, which we can do, to then radically reduce the number of incidents that you have and the amount of time it takes to fix them when they happen. So, one example that we've used in the past periodically is British Telecom. British Telecom deployed Dynatrace, a large deployment. As a result of Dynatrace, they reduced their number of incidents by 50% and reduced the MTTR, mean time to repair, of the incidents that still occurred by 90%. Imagine the productivity gain that results from that, and as such, they expected they would save on the order of GBP 28 million over three years.
Radical cost savings, much more productivity, developers working on innovation, not maintenance and break-fix, much better user experience as a result of software that works better, and all delivered based on a very sophisticated AI engine that is looking for any anomalies in your system to be able to predict issues before they occur.
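[Editor's note] The compounding effect of those two reductions is worth spelling out with a quick back-of-the-envelope calculation. The 50% and 90% figures are from the example above; the baseline incident count and MTTR below are made up purely for illustration:

```python
# Hypothetical baseline: 200 incidents/year at 100 minutes mean time to repair.
incidents_before, mttr_before = 200, 100.0
downtime_before = incidents_before * mttr_before   # 20,000 incident-minutes/year

# Apply the reductions cited for British Telecom:
# 50% fewer incidents, 90% lower MTTR on those that remain.
incidents_after = incidents_before * (1 - 0.50)    # 100 incidents
mttr_after = mttr_before * (1 - 0.90)              # 10 minutes
downtime_after = incidents_after * mttr_after      # 1,000 incident-minutes/year

reduction = 1 - downtime_after / downtime_before
print(f"{reduction:.0%}")  # 95% — the two reductions multiply, they don't just add
```

The point of the sketch: halving incidents and cutting MTTR by 90% compound to a 95% reduction in total incident-minutes, which is how savings of that magnitude become plausible.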
Yeah, okay. Well, Jim, I wanna ask you about logs. Because, you know, what we're hearing is the platform message is kind of one of the key points of differentiation and, you know, something that seems to be a North Star. So logs: you've publicly stated you want to get to $100 million ARR for logs. Why would a customer choose Dynatrace for logs versus Splunk? And, you know, could logs be as big as APM one day?
It already is, in many ways. I mean, look at Splunk and their revenue. The log management market is sizable because, frankly, it's been around for longer.
But for you guys?
For us, there's no reason that it couldn't be as big as observability is today, sure. The way that we suggest customers look at logs is that logs have traditionally been isolated, independent. You have logs on one side, and you have observability on the other. And observability consists of data types like logs, traces, metrics, and other elements that are used by us and by organizations to manage their software environments. That makes no sense. It really doesn't. A log is a data type, not a use case, and as a data type, our AI engine, Davis, can actually deliver better answers to how your IT ecosystem is working by looking at all the data types in unison, inclusive of logs. So integrating logs in your overall observability framework is extremely logical, and integrating, therefore, log management and log-related use cases is also very logical.
This is why we're investing in the space, and we believe that we have a very, very substantial opportunity to come in a market that, candidly, we see as being ripe for disruption.
One comment that I'd make in addition to that is, our approach to it is a little bit different than what you're seeing from competitors, where competitors are charging a lot for ingest and storage. So our model's a bit different: we're gonna charge much less for ingest and storage, 'cause there's not a lot of value in ingesting logs and not a lot of value in storing logs. The value is in the querying. So our model is to charge less for ingest and storage, and we're gonna charge more on the query, 'cause query is where the value add is.
And, oh, by the way, what we've seen with customers that have actually gone with us on our early journey with logs is that the amount of querying that's required is actually less, because of the precision that we provide. So we think it's a better solution. It's a better value proposition for customers because they're paying for the value that they're getting. And given that fewer queries are required than with their incumbent solutions, it means that they can actually have more people leveraging it. And it's just a better way of approaching...
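[Editor's note] To make the pricing contrast concrete, here is a sketch with entirely hypothetical rates (neither vendor's actual price list): the same log workload billed under an ingest-weighted rate card versus a query-weighted one:

```python
def log_bill(gb_ingested, gb_stored, queries, rates):
    """Monthly bill for a log workload under a given rate card (all rates hypothetical)."""
    return (gb_ingested * rates["ingest"]
            + gb_stored * rates["storage"]
            + queries * rates["query"])

# Invented rate cards: one loads cost onto ingest/storage, one onto queries.
ingest_weighted = {"ingest": 2.50, "storage": 0.10, "query": 0.001}
query_weighted  = {"ingest": 0.20, "storage": 0.02, "query": 0.010}

# Same hypothetical workload under both models.
workload = dict(gb_ingested=1000, gb_stored=5000, queries=50_000)
print(log_bill(**workload, rates=ingest_weighted))  # ~$3,050
print(log_bill(**workload, rates=query_weighted))   # ~$800
```

With these made-up numbers, the ingest-weighted bill is dominated by data volume regardless of how much value is extracted, while the query-weighted bill tracks actual usage, which is the alignment the speaker is describing.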
Yeah, now, we're early days in it, and you're right, we kinda threw out a number—we said, "We think we can get logs to $100 million in two years," and we've been at it for, I don't know, six to nine months, and I'd say we're progressing well on that journey.
Okay. Okay. So, have you given a number to where you are now?
We haven't. You know, it's one of those things every quarter... I think what we try to do is share where we wanna be, and then what the trajectory is. What I would say is that we think annually is probably the right time to talk about where we are on different opportunities, whether it be logs or application security. That's another one we talked about, $100 million over three years. You know, probably an investor day is better for sharing some of those stats than doing it quarterly. But I can tell you that, based on where we're at with the journey, we had several hundred customers that were just doing POCs on logs in our first quarter.
We now have several hundred customers that are buying logs. So we're making a lot of traction, and the way it works is you start a POC. POCs hopefully turn into some level of use case, and more often than not, it'll be new workloads that they're leveraging our log solution for, not existing workloads. And then over time, once they're comfortable with the solution, our expectation is they'll migrate existing workloads over. So it's a bit of a journey, and I think we're early days in it, to Rick's point, but quite optimistic.
Okay, I mean, let's repeat the same exercise for application security because, you know, from my understanding, those are both pillars that are equally critical. So I guess, why application security? You know, why is that a natural adjacency for Dynatrace? And then, as a customer, you know, what's my thinking as to choosing Dynatrace versus, you know, an AppSec pure play?
Well, to begin, application security is in every company's budget. It's top of mind for most organizations. The question then is, who do you buy from, what's differentiated, and what adds more defense in depth, if you will? In the case of Dynatrace, we've been very consistent about making sure that our application security investments were in areas where observability data matters. Because at the end of the day, we don't wanna be a be-all, end-all solution play in application security, competing against the big security players. Where we believe we can differentiate is in areas where the observability data adds differentiated value to the security outcome that you wish for. So vulnerability management, great example. Runtime application protection, great example.
Ultimately, SIEM, which we're not in today—but SIEM use cases could be a good example, where today, SIEM use cases are generally based on logs and log management. In the future, we believe they should be based on all of the data types that I mentioned earlier. So it is the collection of those elements that defines and drives our application security strategy, and we saw this when Log4j happened. For example, we were able to give very, very precise indications at runtime of precisely where organizations were making calls to the Log4j library, which had a vulnerability, and therefore enabled them to fix it much more rapidly than they otherwise would have. It's a good example.
Do you wanna add anything or?
No, I think he said it very well.
So do I. So do I.
That's the first time he's said that all day today, so, you know, I've been waiting-
I mean, what I-
Waiting for that.
What I would also say, though, you know, to your earlier question around pillars, is that application security is another one where we laid out a goal: we said we thought we could get application security to $100 million. Again, similar to logs, we're tracking well on that journey.
Okay. Yeah, thank you for that additional color.
Yeah.
Okay, Jim, I'll stay with you. DPS pricing, so I guess, you know, what is this acronym?
Yep.
... and, why is DPS pricing important for, you know, I guess, cross-sell and upsell-
Yep
... and reducing friction?
So for those that don't know what DPS is, it stands for the Dynatrace Platform Subscription. And the way to think about it is to talk about what we did before. Prior to this Dynatrace Platform Subscription, which is kind of like an ELA, we would sell based on SKUs. So, our different offerings, whether it be infrastructure monitoring, full-stack monitoring, application security, or logs, we would go to customers, and we would sell them specific SKUs based on their needs. And as you can imagine, customers' needs change over time. And so one of the frustrations that customers had was they would buy something from us, and then when they wanted to try something else, it was another sales cycle. So there was friction in the process. Actually, a good example is the one that Rick just gave with the Log4j solution.
The Log4j solution required a sales engagement back then because we didn't have a vehicle to be able to do that. What the Dynatrace Platform Subscription allows you to do is commit to a dollar amount, very similar to the hyperscaler model. You commit to a dollar amount, either for a year or multi-year. You commit to more, you get a higher discount, so you're at lower unit prices. You commit to less, you get a higher unit price. And you get a rate card, a rate card for access to the platform and all of our capabilities. And so the benefit is that you're not necessarily buying, you know, full-stack monitoring or infrastructure monitoring or application security or logs. You get access to the platform.
You can use it any way you choose. So the benefit of it is the customer is able to leverage the platform in the way they want. It's a much lower-friction model for the customer, because they get access to the platform. We have a customer success team that can work with them on adoption. So it's just a better vehicle for customers because it gives them flexibility. It's a better vehicle for us, too. We've had DPS available on a limited basis for a little over 18 months, I think, now. And what we have found is customers that have leveraged this vehicle expand faster. So the expansion rates are higher for customers that leverage that vehicle than a non-DPS vehicle.
Now, I'll admit that it's a bit of a biased sample because it's large customers that have been on limited availability, but we believe the thesis is gonna hold true even for customers that aren't your largest, because it's just a better vehicle. And from a financial accounting treatment, for us, it's still a subscription model, still ratable recognition. And the customer does a consumption drawdown, and the way it'll work is, if they end up consuming faster than that ratable model, they'll end up recontracting with us and extending the period. It's just a good model.
There's also another benefit of the model. Today, we have competitors where, when you exceed your commitment, you get charged an overage, and it's an overage rate which is greater than, call it, your subscription rate. We don't charge any overage rates for this. It's the same price point. Now, you're better off recommitting because it means your spending is higher, and if your spending is higher, you're gonna get a lower unit price. But we don't charge hidden fees. We don't charge overages. It's a very transparent model, and we announced general availability in April. And early indications are... You know, we're learning from it, but early indications are our customers are quite interested in it.
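[Editor's note] Under stated assumptions, the commit, discount, and no-overage mechanics described above can be sketched as follows. The tiers, rates, and 1.5x overage multiplier are invented for illustration; they are not Dynatrace's or any competitor's actual rate card:

```python
def unit_price(commit_dollars):
    """Bigger annual commit -> bigger discount off a hypothetical $1.00 list unit price."""
    if commit_dollars >= 1_000_000:
        return 0.70
    if commit_dollars >= 250_000:
        return 0.85
    return 1.00

def dps_style_bill(commit_dollars, units_consumed):
    """No overage premium: consumption beyond the commit is billed at the same unit price."""
    price = unit_price(commit_dollars)
    return max(commit_dollars, units_consumed * price)

def overage_style_bill(commit_dollars, units_consumed, overage_multiplier=1.5):
    """Competitor-style model: units beyond the committed amount cost a premium."""
    price = unit_price(commit_dollars)
    committed_units = commit_dollars / price
    extra_units = max(0.0, units_consumed - committed_units)
    return commit_dollars + extra_units * price * overage_multiplier

# Hypothetical customer: commits $250k, then consumes 400,000 units.
print(dps_style_bill(250_000, 400_000))      # ~$340,000 — same unit price throughout
print(overage_style_bill(250_000, 400_000))  # ~$385,000 — premium on the excess
```

The sketch also shows why recommitting beats running an overage: in the commit-based model, a customer consuming past their commitment simply steps up to a larger commit, and a larger commit lowers the unit price.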
Okay, so the impact on the financial model, still subscription, I presume NRR would be impacted. So I guess, could you comment on that? And then what do you think in terms of likely penetration? I mean, is this gonna be something that is a subset of customers will use, or is it likely to be used by most customers?
So, still subscription, still ratable revenue recognition. The current model, nothing changes, still ARR. As I said, we believe it will be accretive to expansion rates because, ultimately, assuming that our thesis holds true, where customers access more of the platform, they'll end up burning through their commitment earlier, they'll expand earlier, they'll expand faster. So the treatment is all positive. Again, we're still early days with it, but our expectation is that over the next three to four years, 80%+ of our customers will leverage it as a contracting vehicle.
Wow! Okay. I'll ask one more and then open up the floor-
Yeah
... to, to questions. So Dynatrace operating margins are very impressive, north of 25%. It's good to see that some of it's being spent on socks.
Yeah, yeah, you can see that?
Yeah, I mean, if we'd known you were gonna initiate coverage today, we would have brought you a pair of socks.
Yeah.
You know.
I'm gonna push for Scotiabank scarf.
Oh, scarf.
Okay, well, maybe we can trade.
We could trade.
Sounds good.
Yeah. So, over 25% on margins. You know, you made this comment earlier that you feel like you wanna invest more aggressively in the second half of the fiscal year, so over the next six months. I guess, why the need to invest now, and, you know, how does that impact the bottom line?
Yeah. So I'll start with the first one. We have pretty good visibility into pipeline. The pipeline has been growing at a rate faster than our ARR growth. Our ARR has been growing in the mid-20s; pipeline's growing faster than that. So we're getting signal, and this is our rolling four-quarter pipeline, so we have visibility, call it, over the next year. That's growing at a pretty rapid clip, and it's been accelerating. So we're getting some demand signals that suggest that there's an opportunity. I would say we were gonna make investments anyways, to be quite frank. We have a new CRO. That CRO has been in the seat now 120 days. We were a bit cautious about where we wanted to make GTM investments without him having a point of view on it.
He has a point of view on it now. So we are gonna make more investments in sales reps. We're gonna make more investments in GSI partnerships. We're gonna make more investments in customer success coverage to drive more adoption of the vehicles that I mentioned, like DPS. And, you know, when you look at it in its entirety, you say, "Oh, you're gonna do all these things?" And we're increasing our margin profile. So we increased our guidance 125 basis points for the full year to 27%. Now, if you do the math on it, it suggests that margins will dip a bit in the back half of the year, to, call it, 26%. But you should expect 27% as the launching point for fiscal 2025.
Even though we're probably gonna be operating a little bit lower than that, based on the math that I just suggested, you know, what you get with Dynatrace, and what you've always had with Dynatrace, is what I started with: this balance of growth and profitability. We're not one or the other; we're mindful of both. We think now is an opportunity. I'm not suggesting that we're expecting the macro environment to improve, but we think these investments are necessary. Customers are prioritizing observability. They're still doing it with a fair amount of budget scrutiny, but it is an area of spend that they continue to make, and it's an area where we think we can make investments and improve leverage in the model at the same time.
Yeah. An investor said to me, "Dynatrace was focused on profitability before it was cool to focus on profitability."
Yes.
No, this is, this is definitely true.
It's true, yeah.
Yeah.
Do we have any questions for the team?
You were mentioning your differentiation early on around, you know, AI and automation, and that you've been working on this for a decade. Are you guys building on top of foundation models, or are you building your own proprietary models? If you're using foundation models, are you, kinda like, fine-tuning them, you know, for your solution? Tell us more about that.
Do I need to repeat the question, or is it good? So the question is, are we using foundation models, and if so, how does that play into Dynatrace? Well, first of all, great question. So we're partnering with existing LLMs and foundation models; we're not designing our own. The biggest differentiation for Dynatrace is not trying to come up with a new LLM that somehow uniquely targets GenAI approaches. The unique approach for Dynatrace in AI is actually combining three different AI techniques. The first technique is Causal AI, then Predictive AI is the second, and then GenAI is the third. Now, we've been doing Causal AI and Predictive AI for more than a decade.
GenAI is new, but let me sort of try to explain rapidly how all these interact. Causal AI is about root cause analysis. This is: something went wrong, why did it go wrong? What changed? Where's the issue? And pinpointing precisely where that issue is to reduce the amount of time to fix something. Predictive AI then takes that one step further, leverages Causal AI, applies machine learning, and tries to anticipate issues to come, so you can actually fix them before they happen. Simple example: oh, my gosh, look at that! You're about to run out of storage in a server farm in AWS in Virginia. You should provision more capacity, rather than run out, have an incident, and have to fix it after the fact. So that would be Predictive AI. We do all of that today.
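[Editor's note] That storage example can be sketched as a simple linear extrapolation, a toy stand-in for what a predictive engine does (Davis's actual models are not public, and all numbers below are invented):

```python
def days_until_full(usage_history_gb, capacity_gb):
    """Fit a least-squares line to daily usage; return days until capacity, or None if not growing."""
    n = len(usage_history_gb)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(usage_history_gb) / n
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, usage_history_gb))
             / sum((x - x_mean) ** 2 for x in xs))  # GB of growth per day
    if slope <= 0:
        return None
    return (capacity_gb - usage_history_gb[-1]) / slope

# Ten days of storage usage growing ~20 GB/day toward a 1,000 GB volume
history = [700 + 20 * d for d in range(10)]
remaining = days_until_full(history, capacity_gb=1000)
if remaining is not None and remaining < 14:
    print(f"Provision more capacity: ~{remaining:.0f} days until full")
```

With this made-up history, the extrapolation says the volume fills in about 6 days, so the system can raise a provisioning recommendation well before any incident occurs, which is the shift from reactive to predictive that the speaker describes.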
GenAI, then, thrown into that, provides essentially a natural language interface to the other elements, to be able to provide that query ability over the data source. So you can essentially ask questions, through GenAI, of our Causal and Predictive AI techniques. The benefit of this, as we all know in spades at this point, especially of late, is that GenAI is only as good as the underlying data set. In our case, the data set that our generative AI, through Davis CoPilot, is querying is deterministic. It is known with certainty, based on real-time data and analytics coming out of the platform. So therefore, there's no concern that that data may be erroneous. So the combination of GenAI, in our case, accessing deterministic data stores from the other AI techniques, is really advantageous.
The combination of these three techniques together is incredibly powerful, and GenAI just brings these other AI techniques to a broader set of end users by enabling more people in an organization to actually query those data sources.
Is that GA now? If not, when is it expected?
It's LA before the end of the calendar year, and anticipated to be GA by the end of our fiscal year. So by the end of March, the GenAI piece is out. And as I mentioned, Davis itself, in Predictive and Causal AI, is the key to the kingdom. I mean, that is the heart of our differentiation in the Dynatrace platform to begin with. So that's already there.
I mean, it's gonna be—I mean, I'm very excited about it. It's gonna be super transformative, 'cause lowering—
Yeah
... the barriers to usage is something that, you know, will substantively increase your addressable market. Pricing. Have you talked about pricing for Davis CoPilot, or is it gonna be part of the service?
We haven't decided that, Patrick, as it turns out. So we're still formulating that. What is clear is that we do expect that having Davis CoPilot will result in more queries; more queries result in more usage of the platform, which results in more consumption of the platform one way or the other. Whether we charge incrementally, we haven't decided.
Okay.
We haven't finalized that. To come back, though, to a second addendum, which I think is really cool about the AI techniques: you know, for those of you who use ChatGPT and integrated GenAI, it isn't "I ask a question, I get an answer, I'm done." It's iterative. So usually, you ask a question, you get an answer. You're like, "Okay, well, I really wanted this answer," and so you refine the question, you ask it again, and get a new answer, a better answer. You refine it again, you get another answer. Our GenAI effort through Davis CoPilot is not unidirectional, so it doesn't just query the data. The Predictive and Causal engines actually provide feedback to the generative AI engine in CoPilot to provide faster and more thoughtful iteration. "Okay, what about this question?
You know, did you really mean this?" And then it provides you the ability to have a more bidirectional interaction across these AI techniques. And as you can tell, I get super excited about it.
Yeah. Do we have any more questions? Well, one that's on the tip of my tongue is around, I guess, corporate transactions. You know, we've seen some pretty significant corporate transactions in observability this year. You know, probably the most significant corporate transactions in the whole of software have been in observability: New Relic's acquisition by Francisco Partners, Sumo Logic's acquisition by Francisco Partners, and then Cisco buying Splunk. So what does that mean for you guys? What does that mean for the market? Why are those deals happening, and how will competitive dynamics be different now versus in a year's time?
I can start. You can,
Okay.
... you can add in. You know, my perspective is, first and foremost, it is an enormous signal of the increasing criticality of observability. Any way you look at it, when Cisco is willing to spend that much money on Splunk, when private equity is willing to spend that amount of money on New Relic after having already done the Sumo Logic deal, it's pretty clear that observability is becoming increasingly relevant to the community at large, and that CXOs are increasingly trying to figure out how to utilize observability to their benefit to deliver better business outcomes. And, you know, I think this was pervasive in so many of the customer meetings I did.
I had a CIO of a major bank in APAC tell me, "We are going to compete in our market based on software that works better than our competitors' software." Unbelievable! I mean, that's an amazing statement. And I love the statement that followed, which is: "And by the way, we're gonna do that by using Dynatrace." And like, fantastic. That is amazing. So I think, generally, my biggest takeaway is that observability is becoming more and more mainstream, more and more critical, and more and more critical to do right. And off-the-shelf, open source, internal dashboarding, and internal alerting that may have been good enough in the past is losing the battle against an inability to get IT resources and an inability to manage increasingly complex workloads. The move to the cloud has made things immensely more complicated.
Containers, microservices, way more difficult than the workloads of old, running on a mainframe that was completely isolated. You just can't do it anymore. Observability is moving from optional to mandatory rapidly, and as that happens, you need automated capabilities, and I think this is evident in these transactions. Let me answer the second part of your question more rapidly, if I can. And that is simply that I believe this presents a substantial opportunity for Dynatrace, because we have a solution in the market that is very well respected by customers. Our gross retention is extraordinarily high. We don't tend to lose customers, and they continue to use the platform more and more.
I think that this market opportunity of observability becoming more critical, combined with the Dynatrace solution in the market, is a great combination.
Yeah, it's an exciting time for you guys, an exciting time for the market. Thank you so much for sharing your time with us today. CEO of Dynatrace, Rick McConnell, and CFO, Jim Benson, thank you.
Yeah.
Thank you.
Thank you, Patrick. Thanks a lot for having me. Appreciate it.