All right, folks, welcome. Thanks for joining us here. And for those of you who made the trek here to Vegas to see us in person, and then also those of you who have joined us online, thank you very much. My name is Noelle Faris. I am the Vice President of Investor Relations here at Dynatrace. And we've got a packed one-hour investor session, and I'll just set the expectations around lay of the land and what you can expect.
So we're going to start it off with Rick and Bernd. They're going to give a brief overview just to sort of level set folks, especially for those who weren't here in person, just on some of the things we've been talking about in the last sort of day and a half.
And then we're going to invite the members of the leadership team on stage, and we're going to do a moderated Q&A session. I did solicit questions for that, and we consolidated them. I think we've got some really great topics to cover, but we are going to give you guys in the audience an opportunity to ask questions at the tail end of the session.
And just really quick, too, the intention of this session is not to be an extension of last week's earnings call. We really want to try to keep the dialogue focused on things like market opportunity, go-to-market strategy, and the R&D roadmap. So, with that in mind, I will quickly, before I turn it over to Rick, do the needful. And I promise I'm not going to read that. I couldn't if I tried.
But you guys know the drill. We may make some forward-looking statements during today's session, and we disclaim any obligation to update those statements. So, they are considered to be made as of today, February 5th. And with that, I will turn it over to Rick McConnell.
Thank you. Good afternoon. Oh, come on. Come on. I know lunch just happened. Good afternoon. All right, very good. You have socks, so you're ready to go. I'm going to spend just about five minutes, seven minutes up front here. And as I was looking at this presentation, it occurred to me that virtually every one of you in this room could, at this point, give this presentation for me.
But I'll do it anyway. And it is to get us all in the right mindset of what are the core messages. What do we really want you all thinking about in terms of our opportunity at Dynatrace? So let's kick things off. As we have said many times, the cloud is a huge factor. Why? It's a huge factor because it results in enormous quantities of data, massive increases in complexity.
Foundationally, we believe it is impossible to manage that amount of data manually, just impossible. Of course, AI and other factors are contributing to that complexity, to that amount of data, and to the difficulty of managing environments the way organizations used to. Now, one of the elements of progression that we see is the move away from data and dashboards, red, yellow, green indicators.
I can't tell you the number of customers that I've met with across the planet that have basically said, "Let me show you my network operations center." You walk into a network operations center, it's filled with dashboards. My question is always the same, which is, what do you do when something goes red? The answer is, well, we call a big meeting, and we try to figure out what to do next.
And that process kicks off a long array of activities. Dashboards in the current world, we would say, are not enough. Obviously, you want dashboards to provide indicators, but what you really need is insights and answers. And at Dynatrace, as we say, we want answers and intelligent automation from data. Because ultimately, it is those insights and answers, leading to automation, that provide for auto-remediation and enable you to manage your environment.
Our vision, as we've stated it since really the day I arrived more than a few years ago at Dynatrace, is to help create a world in which software works perfectly. Well, if something breaks and you have to fix it, obviously, it didn't work perfectly. So, auto-remediation and getting in front of issues before they happen is critical to being able to manage these environments.
We start with Grail, massively parallel processing, data lakehouse, all data types. We extend that to include Davis, multiple AI techniques. We are the only observability company on the planet that has and utilizes all of these techniques to try to create automated response. We add to that Automation Engine. This enables us to action those insights.
Finally, increasingly, here at Perform this week, what I hope you've really seen is the shift left, or as Bernd would say, expand left or extend left, expansion to other personas, not just central IT Ops, but also to include SREs, to include platform engineering, to include development teams.
We believe that holistically, it can't just be about end-to-end observability because we capture all data types. It's got to be all data types, but for all teams, using a consistent solution that enables them to instantiate or execute their observability strategy.
And that's what we're really after, all data for all teams, common platform that ultimately yields auto remediation capabilities through automation because you trust the answers that come out of that environment. And then I'll end with just this notion of, so where does growth come from for Dynatrace? It clearly comes from the market factors that we talked about in our earnings call last week.
Of course AI and cloud, as we discussed earlier, and tool consolidation, because there are too many tools. For those of you walking around here at Perform who've talked to customers and gone to breakout sessions, I'm hoping that one of the trends you heard was a desire to consolidate more and more onto Dynatrace.
I certainly heard that. I hope you did as well. But there are also several key growth drivers for Dynatrace ourselves.
AIOps, but not just AIOps, which we've been using for more than a decade, also AI observability. Make no mistake about it, we want to be the observability solution for AI Observability workloads. Because of the magnitude of data, we believe we can do that better than anybody else, and we are investing aggressively to go after that. Log management, absolutely critical, enormous market. We believe it is incredibly ripe for disruption.
Companies are worried about how much they're spending on it and the value they're getting from it. We believe that at a better price point, based on our included queries pricing model, plus the ability to integrate logs into your overall observability framework, you can get to a better answer. Talked a lot about our go-to-market strategy, expansion of the number of reps, focus on segmentation.
Dan will be up in a minute along with Laura, and they can talk more about that. And finally, DPS, a pretty hot topic these days, is really driving enormous consumption growth, almost double the rate of consumption that we're seeing elsewhere with our legacy pricing models. 35% of customers are on DPS, 55% of ARR, and continuing to grow. A huge catalyst for the opportunity ahead.
We're excited to take your questions. Before we do that, I'm going to have Bernd talk for just a few minutes, and then we'll open up the panel, and we look forward to your questions. Thanks so much.
Thanks, Rick. Hello, good afternoon. So, Dynatrace has always grown on the heels of growing complexity. And actually, this is the number one reason why I'm so psyched about the whole AI hype. Think about it: AI is just bringing the next exponential growth in complexity. We have all discussed containers, microservices, Kubernetes, and thought, this is complicated and complex with 50,000 pods or whatever per cluster.
We are now planning to support the millions out there. Because as AI moves from simple chatbots to RAG interfaces, to fine-tuned models, to multimodal, to agentic AI, the complexity just piles up. And without observability and traceability into what's going on there, you can't control AI, for technical reasons, cost reasons, experience reasons, and also for responsibility reasons. On top of that, the data volumes are higher, the data you have is heterogeneous, and the systems are larger. So, how do you tame this? This is something I don't just think, I know it: we are in the best position to solve the challenges of today and even more those of tomorrow.
Because we are the only ones who don't just surface data of different types. We actually put all the data into context, and only data in context enables a smart AI, which we call Davis AI. We are also the only vendor who puts three types of AI together: Predictive, Causal, and you should know that Causal AI actually learns instantaneously. And then there is Generative AI, which everyone else is doing as well. So, by having that context, and by looking at this from infrastructure, from applications, from the AI stack, to end-user experience.
That is the differentiator, even at the business level, because we have business events that customers really love. Put that into context, not only for observability use cases, because workloads also need to be secured. By the way, the whole area of security is in the stone age of automation. We believe that this is a huge opportunity for us.
Having that context and bringing vulnerabilities, security posture, hyperscaler posture, as well as all the data for detection and response into proper context allows our customers to automate like never before, including for the exponentially growing number of workloads that they are running. This sets up a future for us that I believe is very bright. And it all rests on key components.
Because, as Rick brought up, with Grail, this massively parallel processing data lakehouse, we are the only ones who provide true contextual analytics. And this gives answers that aren't just "oh, here is an outlier in the data", but rather "oh, here is an impact, because here is a change in a service that calls there, that calls there, that calls there". We put the actual symptom and the root cause in context like no one else. This is also why we are the only ones who can help with reliable automation, because only Davis Causal AI gets a real-time view of the IT infrastructure and systems. And because it establishes causation, not just correlation, and understands the dependencies in real time, it can give you precise answers so you can automate further. That is the Davis AI, the artificial intelligence part, I already explained.
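To make the causal root-cause idea concrete: following the call chain from a symptom to the dependency that actually changed can be sketched roughly as a graph walk. This is a simplified illustration only, not Dynatrace's actual implementation; the service names, the graph, and the change set below are all invented.

```python
# Simplified sketch of causal root-cause analysis on a service
# dependency graph: follow the call chain outward from the symptom
# and report the deepest dependency with a recent change event.
# (Illustrative only: names, graph, and change data are invented.)

calls = {                      # service -> services it calls
    "frontend": ["checkout"],
    "checkout": ["payments"],
    "payments": ["db"],
    "db": [],
}
recent_changes = {"payments"}  # services with a fresh deployment or config change

def root_cause(symptom):
    """Walk the call chain; deeper changed dependencies override shallower ones."""
    cause = None
    stack, seen = [symptom], set()
    while stack:
        svc = stack.pop()
        if svc in seen:
            continue
        seen.add(svc)
        if svc in recent_changes:
            cause = svc
        stack.extend(calls.get(svc, []))
    return cause

print(root_cause("frontend"))  # -> payments
```

The point of the sketch is only to show why a dependency graph plus change events turns "an outlier" into "a root cause": the symptom at the frontend is traced through the chain to the changed payments service, rather than merely correlated with it.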
The last part: I mentioned the importance, for security, of driving automation forward, because today the whole security area is way too siloed and far too manual. This is a key point. This is also where Dynatrace as a platform has the advantage of not only bringing the data together, bringing the different data types together, but also desiloing the approaches.
On top of that, the platform, with its Automation Engine and Predictive AI, enables you to automate in a way that remediates faster or prevents issues, but also to collaborate. That's an important point, because we don't care about just one audience; we care about multiple audiences. And that is the cue for the next slide, because we care a lot about figuring out where the future budgets of our customers are.
Obviously, you all know this by now: everyone invests in AI. But how do you build AI apps? It's actually the cloud-native teams and the AI-native teams who hook up these services, because there is no standalone AI app, and all cloud-native apps want to have AI. So what this means is that we have realized the best potential for our growth is to extend our audiences, as Rick already alluded to, and provide Dynatrace to an additional type of audience that we have never catered to before: particularly the developers, particularly the cloud-native and AI-native developers.
And this is why we have created new experiences in Dynatrace and continue to do so. We have rapidly released new apps in the past quarters and will continue to do so throughout this year. So, you will see the whole blast there as well.
But also with that, the announcements we made yesterday all focus on exactly that audience that creates these modern new digital services. So, basically, where the budgets are. And what does this audience of cloud and AI natives want in order to automate? First, to take on more responsibility for production.
Those responsibilities include not only availability, but also security. This is why these cloud and AI-native audiences expect observability and security features to converge. And they also expect it to be easy to automate. This is also where we have seen massive inroads with customers who are taking these cloud and AI-native approaches, for instance, to integrate Dynatrace into the entire software delivery lifecycle.
On one hand, to even observe the lifecycle, but on the other hand, also to create the process of platform engineering. Because platform engineering provides, on one hand, a way more automated approach to deliver software, but at the same time, also allows more self-service to developers. And as we probably all know, developers are key for us to be early in the lifecycle of new digital software projects.
And this is also why the announcements here have focused on developers. Let me quickly walk you through the five key areas. The first one is that we have announced Preventive Operations. What does this mean? Dynatrace is already the best at automatic root cause analysis. And on the heels of this, you can remediate faster.
But Preventive Operations takes additional predictive AI features and combines them with recommendations on how to remediate, and therefore accelerates even the ability to detect problems before they occur, so you prevent them from happening at all. And this prevention is key, because as a company you always have to be resilient. And governments around the world are forcing enterprises to level up their compliance.
Compliance shifts from point-in-time compliance to continuous compliance. What does that mean? Continuous compliance means you have to automate. So, this is where we are helping customers automate 80% of all the repetitive tasks. The next point: compliance and resilience already integrate and converge. And if you need more evidence that observability and security are converging:
It is that the security side of cloud natives wants to be deeply integrated into their processes, and this is why we have extended Dynatrace to include, in addition to real-time Vulnerability Analytics in Dynatrace, the Detection and Response functionality and also Cloud Security Posture Management, because we believe these three components are exactly the package that cloud and AI natives need.
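The shift from point-in-time audits to continuous compliance mentioned above is, at its core, an automation pattern: compliance rules are codified and evaluated automatically against the current environment state on every change or on a schedule, instead of once a year. A minimal hypothetical sketch, with invented rules and environment state, not actual Dynatrace functionality:

```python
# Minimal sketch of continuous compliance: codified rules evaluated
# automatically against the current environment state, rather than
# a one-off point-in-time audit. (Rules and state are hypothetical.)

rules = {
    "encryption_at_rest": lambda env: env.get("disk_encrypted", False),
    "no_public_buckets": lambda env: not env.get("public_buckets", []),
}

def evaluate(env):
    """Return the names of all rules the current state fails."""
    return [name for name, check in rules.items() if not check(env)]

# Run on every deployment or configuration change, not just at audit time.
env_state = {"disk_encrypted": True, "public_buckets": ["logs-archive"]}
print(evaluate(env_state))  # -> ['no_public_buckets']
```

The design point is that once rules are code, the same check that an auditor would perform annually can run on every change, which is what makes automating the bulk of repetitive compliance tasks plausible.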
So, now with this additional extension of personas, to the security teams within modern DevSecOps processes, we also see that reaching out to the developers actually needs help. You all might know Shift Left. Shift Left means the developers take on more responsibility. We think that's not the right approach, because enterprises and executives care about the productivity of their developers, right? And how do you get that?
Not by letting everyone do what they want. That also kills any compliance. So, the best way for all of them to keep up productivity is by extending to the left. What this means is that you still have central teams, but these are the more modern SRE-type teams who maintain consistency and provide self-service to the developers.
And the developers then have the tools they need to leverage observability for their troubleshooting, for optimization, for AI, for security, for self-healing. It's easier than ever, but it remains on one Dynatrace Platform. So this drives productivity a lot. We know, for instance, that just the announced Live Debugger reduces MTTR by 40% and eliminates 80% of feature flags. So there's lots of value.
Then finally, we also announced the extension of AI Observability to now over 40 different technologies, plus the ability to report on guardrails, so that we can help customers not only with the technical and security aspects of their AI implementations, but also with the responsible-AI aspects, such as bias and hallucination.
If you bring all this together, think of what we have here: the scale to deal with the modern large-scale applications that are all moving towards AI, and the platform that allows the different teams involved at our customers to collaborate properly and be successful with Dynatrace like never before.
Finally, we are also in the best position to leverage all of our data and bring it up to an executive level with Business Observability, because executives care a lot.
So, with this, that was my summary. I'd like to call up the rest of the team. Thank you.
Doesn't matter.
Any word up?
Yes. Let's just put Laura in the center. Wherever you like.
Okay.
Be in the center.
I'm going to sit next to Bernd?
Laura? Where do you want me? Laura?
Wherever you'd like. Hello.
Okay. So, thank you, Bernd. That was excellent, a good recap of the last day and a half. So, I guess why don't we start there, Bernd. There were a lot of announcements. Are there any that stand out, from an investor perspective, in terms of potential for more growth opportunity?
Yeah. So, to me, definitely the most important is the resonance of the focus on the cloud and AI natives. This is where everyone is asking, "How do I get my AI-powered services up and running faster?" And this is where those announcements fit squarely, because all of them cater to that audience.
So, I won't do the rundown now, but that's exciting. What I also found exciting is that already yesterday, in the exchange, the whole thinking of extend to the left was picked up right away by the presenters. Which also tells me that, for us, reaching out to this additional audience is really very welcome.
Great. Great. And so maybe Steve, kind of adding on to that, there were a lot of use cases and customer examples that were shared. Just in terms of product adoption, where do you think we've got an opportunity to kind of further expand within our customer base?
Oh, wow. I mean, there's lots of opportunity, as Rick was talking about, with consolidation. And I was talking about that a little bit. The workloads are growing. Consolidation is not just about, I want to deal with fewer vendors or simplify procurement. It's that they really need a different type of system and approach, and they want to bring those worlds together.
So, there's the observability convergence. There's the expansion into cloud native. There's been a lot of interest in the business views as well, because I think that's something this industry has struggled with a little bit, and we have unique capabilities in business events to bring that forward. If you're counting how many times a word has been said over the course of the last two days, that one might be near the top.
And of course, there are certain areas that we have a greater opportunity to penetrate faster. I think we've really seen an uptake in logs over the last couple of quarters and really starting to see that catch wind.
Great. Good. And so maybe jumping to GenAI, obviously, it was a big topic of conversation and everybody's talking about it externally as well. But what does it mean for Dynatrace in terms of just the acceleration of GenAI and where does observability sort of fit into that, Bernd, if you could take that?
Yeah. So, as I alluded to before, to us it's a fantastic opportunity, because GenAI will be both a curse and a blessing. The blessing is that it does help and provides fantastic value for many use cases. But it will also cause issues, as with every other technology. I mean, I mentioned before just the number of instances.
But another example: think of code now being generated with GenAI, as everyone tries to do more of this. So, we generate code where no one understands anymore exactly why it was built this way. Then we use another AI to secure it. Then we use another AI to fix it. Then we use another AI to review it. Where will this lead?
This is a recursive problem. So, now then you have an issue in production. Do you use another AI to look at this? How will this work?
You basically can't do anything without observing that whole mess that is piling up here. And now you could argue, okay, then observability even feeds data into AI. Of course, yeah, that's Dynatrace, right? So, this is what we are doing. And this is why it's a fantastic opportunity for us.
Yeah. I'd like to add on that just really quick because if you look at Dynatrace versus some of our competition, one of the things that we've always excelled at is the ability to understand complex environments and discover them.
Whereas sometimes we've had competitors, or do-it-yourself setups, that got a bit of a benefit because they built the service, so they know the service, they know exactly what metrics to instrument, and it's all very manual. But it was okay, because that manual visibility was all they required.
You start doing what Bernd was saying, you add complexity. All of a sudden, when you don't know how the code was written, when you don't know some of the intent, you need capabilities that Dynatrace uniquely brings around auto-discovery, learning the environment.
That's a really key important area that I think the more GenAI is used to build apps, the more developers themselves are using it, the more you need something like Dynatrace to discover and provide that layer of intelligence beyond just the simple instrumentation.
I would just add that generative AI from Dynatrace, we would argue, is not like generative AI from any other observability provider, because underneath it is accessing a deterministic data layer. And if you think about that, where do the problems with hallucinations occur? Where do the problems in GenAI occur?
They occur when the underlying data store can't be trusted. In our case, the data store can be completely trusted because it's constructed with causal and predictive AI. So, generative AI or copilots accessing a deterministic data store in the form of Grail is an incredibly powerful and massively differentiated solution.
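Rick's point about grounding generation in a deterministic layer can be illustrated with a toy sketch. This is hypothetical, not the actual Davis CoPilot or Grail design: the idea is simply that the generative layer may only phrase answers from facts retrieved from a trusted store, and must refuse when nothing matches, instead of guessing.

```python
# Toy illustration of grounding generative answers on a deterministic
# fact store (hypothetical, not the actual product design). The "LLM"
# here is just a template; the point is that every answer must be
# backed by a retrieved fact, never free-form generation.

fact_store = {
    "checkout error rate": "2.3% over the last hour",
    "payments latency p95": "840 ms",
}

def grounded_answer(question, store):
    for topic, value in store.items():
        if topic in question.lower():
            return f"{topic} is {value} (source: fact store)"
    return "No trusted data available for that question."  # refuse, don't guess

print(grounded_answer("What is the checkout error rate?", fact_store))
```

The refusal branch is the essential part: a generative layer that can only answer from a trusted, deterministic store has no room to hallucinate a number that isn't there.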
Great. So, then we'll shift from GenAI, and I want to talk a little bit more about the personas. That was a lot of the topic of conversation, especially from a shift left to more of an extend left. So, maybe can we talk about how Dynatrace has evolved to expand the use of the platform to wider audiences? And maybe we'll lead with you, Bernd, but please, anybody else who wants to jump in on that.
Yeah. So, for pretty much the entire history of Dynatrace, we always worked with IT operations teams. And being cloud-native ourselves, we already extended to the left internally a long, long time ago. And now, finally, customers are ready to do so.
And this is why now is the perfect point in time to help customers go through that process: not to just have central operations say, "okay, you have to fulfill my rules", but to make it a collaborative effort, under a so-called platform engineering approach, that allows developers, site reliability engineers, and DevSecOps to collaborate on this.
This is why it's Extend Left and not just Shift Left: you can use observability and security centrally, but also equip developers with self-service so that they can fulfill their tasks.
Okay. And this will be the last one for Bernd, and then we're going to shift into go-to-market. One of the questions I've been getting since the earnings call, and then yesterday during our cocktail reception with some investors, was this whole notion of the R&D roadmap and the heavy focus on logs. Maybe you can expand on what that means in terms of SIEM. Is that something you would consider in the future?
Yeah, so that's a very good question, actually. I like that one. Yeah, because I've spoken with.
Not the other one. Just that one.
I've spoken with so many customers on the topic of SIEM, and all of them told me, "Yeah, this is outdated", and that they need something more automated. And this fits the point I made earlier, that this whole security area is in its stone age of automation.
And this is where we are really setting out to do something different than just SIEM, to bring this to a massively different level. In a nutshell, take the words Cloud Application Detection and Response.
What this means is that, with the help of all the Contextual Analytics and Davis AI we have, we are bringing together the whole vulnerability area with the security posture area, with data from not just logs, though obviously lots of logs too. We add all of the traces. We add all of the metrics. We also add the user behavior.
We add topological context, and we use all of that together to actually automate security issues away for customers. And this, of course, includes the hyperscalers, even across hyperscalers, because this is how modern applications are built, and this is what the audience wants to automate today. So, the answer is yes, we are going in that direction, but taking a massive leap forward.
Great. All right, Dan. At the beginning of this year, as this group knows, we rolled out some go-to-market changes focused on segmentation, investment in partners, and expanding our sales motions. Maybe you can give a quick overview of where you think we are in the progression of that. Maybe, to be U.S.-centric, what inning would you put us in?
Probably third or fourth inning. I think we were very open about this.
Hey, Dan, that's what I said.
Yeah. Okay, good. Good. I think we were very open that we had a sequence we were going through. First, we had an IT 500 priority. We felt we would add sales capacity density there. But it's also important to know that we were changing our selling motion there, becoming more strategic and less transactional in that arena.
And we play very well there. We had a number of customers in that space that did a lot with us, thought highly of us, and gave us feedback that we had a unique value proposition. So, it was kind of easy to bet on that, and we did.
Now, it wasn't just that we were changing the density; it was a more strategic selling motion. We had some people who were able to do that, and we had to bring in some talent from outside, people with experience selling in high-end enterprise with a more strategic selling motion.
So, we've been on that journey this year. We're very happy with how that's gone. That is one. I certainly knew there would be some challenges, because it's not just about slicing, going from seven or eight accounts down to three or four. That's kind of easy.
It was also about changing the culture and the selling skills that were required. But as we're over three-quarters of the way through the year, we're happy with how that's going.
We're seeing progress there. We're forming new relationships. And remember, I think we were open about this; I'm probably repeating myself from last year, but it's important to remind you: a lot of people who had seven or eight accounts were really doing business with three or four of them, and they kept those. We kept those people, and they kept those accounts. That was risk mitigation: they had relationships.
They had active sales campaigns in the pipeline. And then sometimes there were three or four accounts where they didn't have a lot going on, so when we gave those to a new rep, remember, that rep is starting from scratch a little bit. They have work to do there. Sometimes they were new to Dynatrace, and they had to establish relationships and start building pipeline. But we watched that very closely.
I mean, we monitor this on a daily basis, and we're happy with how that's moving.
Anything additional you want to add, Laura, in terms of the role the marketing team is playing in building out some of the go-to-market changes, or how all that is playing out?
So, we're aligned very closely in the marketing organization with the sales organization. I think a lot of you know I've been here just a year, and I came in and sat with Dan and said, "Where are your priorities? What are the customer segments? How do we increase investment in the globals and the strategic accounts where we need to do more?"
As he rolls out his go-to-market changes, I'm making sure I'm doing the same thing, aligning very closely across everything we're doing. So, it has been a very symbiotic change that I've been going through as well.
I just have to add something. Laura won over the entire field sales organization, over 1,100 people, by helping them with this: we don't want to start off our sales calls with who Dynatrace is. We want to start off our sales calls by saying, "You know who Dynatrace is; let's talk about your problems and how we solve them". That was our motion. And Laura, we've made huge progress there, but that's a huge motion.
That's what the field wants to go in and do. We don't want to explain who Dynatrace is. We want to go in and say, "Hey, what challenges are you facing? What's keeping you up at night? What's causing you pain? And how can we align our technology to that?" So, I think we're on that journey, and I think it's working very well.
Thank you. And it is a journey. So, it started. It doesn't happen overnight. You can't just flip the switch and suddenly everybody knows you. But it is absolutely the right journey that we're on, and we're making it easier for our sellers to be able to talk about Dynatrace and all the advancements.
Great. Awesome. So, I want to be mindful of time, and I want to give everybody a chance to kind of ask questions in the audience too. But I obviously still have a lot of cards here, but I'm not going to go through all of them. But I do want to give Matthias an opportunity to talk about just what he's doing from his team's perspective in terms of just driving customer adoption and driving that retention rate for us.
Yeah. So, we have the pleasure of actually proving out and fulfilling the promise we gave upfront. That's the nature of the game for the Professional Services team, the Success team, and the Support team. They are working with our large customers on making that journey a success with Dynatrace overall. Adoption has multiple facets; one is license consumption.
We sell DPS models that give free access to all the capabilities. So, it's really about going use case by use case, audience by audience, and tying the value increments back into an ROI or business-outcome conversation, so our customers can go back to CTOs and CIOs and say, "This is why we use Dynatrace. This is the improvement." And that's our next step in the journey, so we can accelerate that consumption out of the base.
That's pretty much what we do, and that's what our mantra and mission is.
Okay. So, I'm going to shift over to Jim because I'm going to ask a question that was topic of conversation last night as well, which was on-demand consumption. Before I do that, once we're done with Jim, we're going to be circling. So, you've got Hannah and Greg here who have microphones. So, please flag them down if you have a question. But in the meantime, Jim, so on-demand consumption.
Never heard of it.
Topic of conversation, right?
Do you think there's going to be time for more questions after Jim gets through there?
We'll see. This is a little bit of a lighter side. One of the things that investors have been asking is, so what does that mean in terms of metrics? So, how should they be thinking about the company? Is it ARR? Is it NRR? Is it now ODC or subscription revenue? So, maybe you can talk about that, and then we'll move on.
No, I was going to comment on Matthias's point earlier around driving adoption, driving consumption of the platform. I'd say it's been a cultural change, I think, for the company. And DPS is a wonderful vehicle to be able to allow for that. You're getting full access to the platform. And so, it's done exactly what we wanted, which is people are consuming faster.
They're consuming 2x the number of capabilities of kind of our legacy-oriented SKU-based customers. And so, that vehicle is really working very, very well. We talked about the fact that they're growing at 2x the rate from a consumption perspective of SKU-based customers. And so, that was our thesis. Our thesis was to get them on board. If they liked it, they would consume more, and we're seeing that play out.
I do think we knew there would always be some level of on-demand consumption when we put the model in place, but we actually thought it would be more modest than it's become, and we're realizing here that on-demand consumption is going to be a real revenue stream for the company. So, to your point about what are the right ways to look at leading metrics.
I think subscription revenue is ultimately going to be a North Star for us. It's a journey. And so, I think in the interim, ARR and NRR are still very important, but you have to look at ARR and NRR in the context of what's happening with on-demand consumption, because what happens with on-demand consumption could affect ARR and NRR, and the timing of when something turns into ARR and NRR will be different.
But at the end of the day, people are getting value from the platform. We're growing subscription revenue. So, I think that's ultimately where we're landing. I'd say it's a journey. We'll provide metrics for customers, for investors, I should say, for all of these things to help you on the journey as we go through it.
Okay. And with that, we'll open it up. So, go ahead and circulate with the microphone. There you go. All right, Pinjalim.
Okay. Thank you so much, everybody, for the nice presentation. So, I want to ask you about this DPS thing. And I'm trying to understand this, right? I appreciate that it's growing two times faster than SKU-based. But I'm trying to think, is there a temporary element to it?
On one hand, I kind of understand that between DPS and growing the wider network, there's definitely some change on the activity and the engagement side. But logos are not growing, even if consumption is going up. So, I'm trying to think, is there a temporary element to it?
I mean, do you see that it gives customers an extra appeal to grow? Will DPS-based customers always grow faster than SKU-based customers, or will both of them converge at some point?
So, I would say that initially, when we got onto DPS, there was a bit of, I even said it to investors, a bit of sampling bias for customers that were going to go on to DPS and consume more. But if I'm understanding your question correctly, I think what we're learning on this journey is that when customers are committing, they're committing with like 100% certainty that they're going to spend to their commitment.
We do the same thing internally. When we make very large commitments, whether with the hyperscalers or someone else, we want 100% certainty that we're going to spend that. And so, their commitments are sized that way. But we are finding customers are willing to budget. They're willing to budget for consumption over that specific amount.
And so, I think we're learning in this journey that this value chain of kind of committed contracts and consumption on the back end is really the way customers want to consume. And I think what we're also learning is that one of the things we've done a very good job of, Bernd and Steve's team, is building out the telemetry to be able to forecast for customers when they're consuming.
We're giving them alerts. So, there's no surprises in the model. We are telling them how they're consuming, whether they're consuming faster, whether they're consuming slower. And so, there's a lot of transparency in the model. And as we've talked about, there's no penalty. There's no penalty for exceeding your commitment. We're not charging like some of our competitors do for overage premiums for that. They pay the same unit price that they were previously.
So it's just a really customer-friendly vehicle that some customers will manage it in different ways, but I think it's playing out the way we were hoping.
Yeah. And just to give you a field perspective of that, we get a lot of positive feedback from customers on the commercial model. They don't want a model that is trying to get them. They want a model that's very friendly and flexible. And I think we've played the long game on this model and saying, hey, if you have a model that customers are giving you positive feedback for, and I get that a lot, that'll serve you well for the long term.
Just one small product add to that too is I don't think you can underestimate that when people go into initiatives, sometimes they don't know exactly what's going to be next and where it's going to take the next turn. With the platform subscription, they have access to all capabilities. I'm sure we've established that before.
So, not only do they find new ways to grow and accelerate, but as we add new capabilities, those are auto-added to the vast majority of customers' rate cards as well. So, as we come out, as we launch new capabilities, those are immediately available to those customers to pick up without a new sales cycle. Obviously, they're going to test and POC and things like that, but it's a very easy way for a customer to consume more.
A good example being what was announced over the last two days. That will be available to customers.
AI Observability. We had Peter from Northwestern Mutual talk about how they turned it on in under an hour. This was someone who is part of their data management team. He wasn't part of observability.
He connected those dots, and within an hour, he was using something that is being monetized through existing rate card capabilities, growing that out, getting the broader view across the platform for something they did not know they were going to do, without that sort of forethought. It was a point-in-time ability to take advantage of that power.
All right. I think Julian has the mic over there.
Thank you. Julian Reddick from Decade Partners. Maybe one for Dan. Thanks for the comments on the success that you're having in the IT 500. I was wondering if you could broaden the aperture a little bit and talk about further down market, which has historically been a big part of Dynatrace's success in the kind of Global 15,000s, and what you're learning there and what your priorities are heading into the next financial year.
Yeah, that's a good question. We have kind of a roadmap that we're sharing with the board, so I'll give you a little glimpse. As you know, with this disruption and change and segmentation, we have to really monitor how much change we inject at one time. You have a roadmap, and we have a multi-year roadmap on go-to-market.
And you say, okay, how do we stagger this so that we can continue to show growth to investors and so forth and also do our transformation? So, we've staged it. And the IT 500, we saw that as something we had to do. We had to start early because it would take a while for it to actually deliver fruit. But we also are looking at our transactional business.
And as we go into FY2026, we'll be implementing things so that another cylinder of the engine starts firing in our transactional business. So, we'll be adding that in FY2026. I think there is an opportunity. It's a big part of our business. It's continued to be strong, as it has been for a long time, but we want to accelerate it. So, we'll look at how we accelerate our transactional business in FY2026.
All right, and then I think we have Will over here. Yeah.
Great. Yeah. If that's on, yeah, Will Power with Baird. Yeah, thanks for hosting this. Question probably for Bernd, whoever wants to take it on the security front. You announced the Cloud Security Posture Management product. And just would love to get perspective on the strategic fit. Was this something customers were asking for? And just trying to understand how it fits with the broader application security portfolio.
Yes, absolutely. So yes, this has been asked for a lot, especially, and this is why the whole push is also for cloud and AI-native teams, because sort of their setup is very different. So typically, in classic security organizations, it's very CISO-driven. But in cloud-native setups, it is development team-driven. And the development teams need, therefore, an offering also for their hyperscaler setups that they can automate as part of the rest of their software delivery lifecycle.
So therefore, they look into modern offerings for that matter. And this is exactly our opportunity here. And also, the posture management and vulnerabilities, those are the initial steps. And once you have rolled out, then you look at also threats and exploits. And this is why these three pieces together give exactly the cloud and the AI-natives the package that they need.
Okay, and then I think Matt has the mic.
Yeah. Hey, guys. Matt Hedberg, RBC. Thanks for doing this. You know, when I think about the path to sustained 20% growth, I think there are a lot of drivers. And obviously, a lot of Perform has been about the product side. But when we think about the go-to-market side, I wanted to go back to that because it seems like there is a real opportunity there. And the earlier question was about down market.
And I guess when you think about what those investments are, we know the GSIs can be an important part of the sales motion. So, I guess I'm wondering how much of it is internal investments in bringing in new sellers to target an even wider aperture, and how much of it is leaning into these partners that we all talk to, where it seems like a natural fit for you guys?
Yeah. I think Laura will weigh in on this as well. I typically go with an "and" strategy, so I try not to choose, but I'd point to the latter, the partner piece of it. If you think about where we're activating, there are three key parts. GSIs actually will take you more up market.
If you look at where GSIs play as partners, typically, whether it's Accenture, Deloitte, or some of the others, it's in your IT 500. I think that's where they spend more of their time.
I think as you go down to the transactional business, you have a lot of regional players there, so our regional partners. And then what's critical, and one where we are spending a lot of energy, and which I would say is still in early days, early innings, is our co-sell with the hyperscalers.
We have activated huge initiatives with the hyperscalers. That is a huge motion for us in that transactional business. And so, I think it's that. And then we'll add capacity. We will continue to add capacity down market. It's our bread and butter. We do that as a natural evolution of our down-market business. But I think where you get really acceleration is on your partner side of that.
And then I'll add, you mentioned Perform, our big product event. It is a customer event. I look at this as customer first. We have over 50 customers here. So, a lot of product innovation with the customers talking about what they're doing. And it has been amazing, for those who have seen the main stage and the journey of the breakouts, to have them talking about the value that they get from Dynatrace. And so that's where we start.
We start with the customers just talking about it. And that is not just from Dynatrace, but it's with our partners. And I know I have a lot of conversations with the hyperscaler and GSI, my counterparts, CMOs, or their marketing teams on what can we do. We've been building out the partner marketing organization along with the partner team within Dan's organization.
They are glued at the hip together on how we are going to not just utilize our partners as a sell-through. This is a marketing opportunity. This is a co-sell opportunity. This is a how-do-we-design-programs-together opportunity. There is so much activity happening there that we see big opportunity going into our new fiscal year with all of our partners.
And then do we have the mic over here? Thanks.
Yeah. Karl Keirstead at UBS. Thanks for this. I wanted to go back to a comment you made on a couple of occasions about focusing on the cloud and AI-natives.
Could you define what you mean by that? And if what you mean are pre-IPO smaller companies, how does that sync with the focus on the IT 500? And what's the strategy to displace Datadog, given that they've communicated that they've got a pretty big footprint there? Thank you.
So, first on the definition, it is really about all those modern projects that are set up typically in hyperscalers. They're API-driven, obviously Kubernetes, containerized, sort of these kinds of projects. But also, it's not only about the tech. It's also about the processes, how you develop and deliver software. Basically, in the cloud and AI-native approach, you have also broken up, or actually eliminated, sort of the silos.
Here is ops, and here is test, and here is dev. Basically, you have much more of a continuum. And this is also why in the cloud and AI-native approach, you have the SRE and the DevSecOps teams usually part of the same organization, so that you have one continuum for the rollout. And this is also where the beauty of hyperscalers comes in.
You don't need a dedicated separate global ops team in order to keep hyperscalers running. This is what the hyperscalers are doing.
So, this gives exactly those teams more autonomy to accelerate, to iterate faster, to deploy faster, but it also gives, or sort of imposes, the responsibility to take care not only of delivering features but also of resilience, meaning availability and security at the same time.
Plus, often they care also about business metrics, like let's say how is their service being adopted and so forth and look at that. So, this is what I see here as a cloud-native definition. So, very important.
I think we need to be more direct in that answer and simply say that virtually all organizations at this point are moving cloud-native. We see that 85%-90% of our customers, or more, now have cloud-native workloads. They're expanding in banks. They're expanding in commerce, healthcare, everywhere. So, don't correlate that to an SMB-type approach.
I was going to say the same thing. We're usually talking about projects, not companies with that definition. Yeah.
Okay. And then, oh, is it Fatima?
Fatima Boolani from Citi. Thank you so much for hosting us. My question is for Matthias. Matthias, you spend...
I'm talking to her.
You spend a lot of time with customers. And so I wanted to get your perspective on what the behavior is in terms of customers on DPS adopting more diverse functionalities on the platform. If you can give us some perspectives on, "hey, are a lot of the DPS customers actually becoming power consumers of existing capabilities and going to the end with more full-stack observability?"
Or has this really galvanized an opportunity for DPS customers to experiment with Grail, to experiment with AppSec? Any quantitative wrapping you can put around that in terms of multi-SKU adoption within the DPS space? Thank you.
Yeah. So, there's a strong pattern that customers on DPS actually use quite a strong variety of capabilities, modules, or whatever. Just this easy access to use Dynatrace in whatever way you want, not being limited to buying a certain SKU upfront without knowing how much it will potentially be, gives our teams so much easier access to those personas and to realizing value while those capabilities start to grow.
And then those customers can still find more budget in another pre-commit, or sort that out with ODC, which Jim was mentioning. So, that's a clear pattern. I mean, I was in 40 customer conversations over the last two days. And we are lining up our teams now to bring all the innovation we announced back into the base, because people are super excited. Now, we don't need a sales cycle to get started.
That's the beauty because they have it and still, if those new functionalities, those new use cases, those new value adds drive a higher and higher level of consumption, then of course, we're going to book another, let's say, growth deal or multi-year expansion into that specific customer.
But I think that's the motion we are seeing, and it's really about us as Dynatrace, together with those customers, finding different workloads, new personas, new use cases, new processes, things they were not even able to do before.
Okay. Sanjit?
Yeah. Thank you for taking the questions and thank you for hosting us. I was at the afternoon session at the keynote, and there was a leader from the Accenture practice, and he had this line where he said, "AI is just another workload," which I actually thought was quite a constructive comment. Because one of the defining characteristics of this category is that with compute cycles, the mousetrap changes; the reference architecture, the application architecture changes.
When we went from monolithic to microservices and Kubernetes, you guys were on top of that and benefiting from that growth. And so, the question is, do you sort of agree with that statement by Accenture, "AI is just another workload, therefore we're really well positioned to monitor these applications"? Or is the mousetrap fundamentally different? Is there anything that you're seeing in terms of how customers build applications?
As we think about getting to an inferencing cycle and GPU-based workloads and new architectures, is there anything bubbling up that gives you guys pause in terms of how to monitor this next wave of applications coming online?
Yeah. So, I think in essence, there's a lot of truth in that statement, because the instances and workloads grow, and think of agentic AI as more or less microservices talking to each other. But yes, the tech stack is slightly different in the details. Maybe the biggest thing that changes with AI is that you have additional types of metrics, like the whole guardrails on bias and so forth, that you would not have otherwise.
But sort of on the rest, sort of on, let's say, security, it's similar. You still have to take care that there is no PII data in your prompts that you send to generative AI or the like. So yes, this is why, to me, I'm super excited about our opportunity here.
Yeah. I think you oversimplified a little bit, if I'm being blunt, because while I agree with Bernd that there are things that rhyme and there are patterns, I mean, the complexity is crazy. I mean, if you follow how fast just even the hyperscaler services are going, if you look at the different audiences that get involved, like back to that Northwestern Mutual example, those aren't all traditional buyers.
You're yet once again creating a necessity for a shared view, for collaboration, which, and I'm obviously biased, I believe plays to our favor, plays to the strengths of Grail to bring these different capabilities together, different insights, different data types. So yes, it is software at the end of the day, but I think the pace and kind of the heterogeneity of the environments and buyers is definitely creating new dynamics.
Okay. We're going to do one more question. We're a little bit over, but I know we started a minute late. So go ahead, Keith, and then.
So am I the last question?
You are the last question.
So I'll break this into 17 parts.
Make it happen.
It's Keith Bachman from BMO. I wanted to ask the question to the panel. How do you become more successful in security? And I will break it into a couple. So, the first is you've announced cloud security, something we were talking about last night. What gives you the right to win there? And just to give you an example, BMO is a big Dynatrace customer for observability.
I think the chances of you winning cloud security are very small. We use Wiz, and it's very formidable competition. The second is, do you actually need to keep expanding the portfolio in security in order to win? In other words, try to get more mind share with the security operations people? And the last is just, do you need changes to go-to-market to be successful in security? Thank you.
Only three parts, Keith. I'm disappointed.
Should I start? Okay. So clearly, the whole security area is being disrupted by these new modern workloads. This is where classic CISO-driven security doesn't work. This is also why Wiz, which you mentioned, is actually having its successes there. But I think what Wiz is providing, the agentless approach, has good ease of use and so forth, but the whole point is what they lack: they don't have a platform.
They are now saying they're building an agent. They don't have the end-to-end data for context. The point that I'm trying to make is that from the foundation of what we provide in Dynatrace, extending to these modern cloud-native security workloads, we are in a fantastic position.
You could argue Wiz has a head start and is known, but foundationally, I think we have an even stronger platform than they do, because they now have to start actually integrating all their acquisitions. We are now far enough along with our platform that we can say this is all cohesive, it's all together, and we can now push forward.
Dan, do you want to add anything to that?
Yeah. I didn't touch on this earlier, but I'll give you something we're actually very excited about, and I'm going to try to answer your question with it. We have formed three specific strike teams. These are subject matter experts who help our field teams. The field owns the opportunity, but we've formed three strike teams.
And one we've had in place for a while, which is our security team, because they bring some subject matter expertise. And Keith, just to help you understand, one of the things that we do is we try to draft off an overall security plan. BMO, we're not going to try to change the security plan. We're going to plug into it.
If you think of runtime vulnerability, we have unique data, unique insights that we like to feed into their overall security strategy.
So, a lot of times we're not trying to replace; we're trying to give our unique data to that framework. So, I think that's one. But also, from a go-to-market standpoint, we have three: we have logs, we have digital experience, DEM, and we have security. And we have strike teams of people who are subject matter experts.
So, as you get deep into the more complex selling motion, you bring these people in to augment the field teams and help drive that. So, that's what we're doing. We're excited about the strike teams. Actually, we've hired a new logs person. She presented today on the main stage. I don't know if you guys were able to see it. She's fantastic. She's a great addition to our team.
So, I know we're talking about security, but I think logs is really important to us, and having the right team in place is a big help.
And some of what we do from the marketing side, then, is also to align to the strike teams so that we can go deeper with more focused programs, because we have that expertise. So again, logs, DEM with insights, and security, all core to those.
And the only other thing I'd add to it, Keith, is that, again, I talked earlier about a culture of adoption. The incentive for the strike teams that Dan is talking about is to drive consumption. So, their metric is going to be consumption. Our model before wasn't like that.
They were a front-end model, helping in the security sale. They were not necessarily the people involved in the adoption of the security offering. And we think that, combined with what Bernd was talking about, will let us get better penetration with customers that are already familiar with us on the observability side, to be able to extend into security.
Okay.
Yeah.
We'll turn. Oh.
I wanted to just add a product comment to that, because I had many customer conversations on the security topic, and they all distilled it down to one word: context. This is what they need. And this is what we clearly have, best of all, and this is also a huge differentiation from, for instance, Wiz.
Keith, I would answer the question this way. I would just say that we answer it similarly every time. But we are going to invest in those areas of security where we have differentiable value. And what that typically means is areas in which observability data matters.
We don't want to compete against Palo. We don't want to compete against CrowdStrike or numerous other parties in the security markets. We will lose. And that is not our strategy. That is not our intent. But in areas in which observability data matters, in areas in which agent technology really has differentiable value, those areas are areas in which we believe we can ultimately compete to win in security.
Yep. All right. With that, we're going to have Rick say maybe some parting words, and then we'll let you all.
Yes. Just a couple of things to wrap up. First is I want to thank Noelle and Hannah and Greg. I don't know about you all who interface with them all the time, but Jim and I get to do it every day. And I think they are an incredible team that punch way above their weight, and I hope you agree with that. But thank you to you all. Really amazing, amazing work.
The second is I just wanted to thank my team. You see some of them here, but Sue, Colleen, Nicole, other members of our leadership team, they are rock stars. I would go to battle with them any day of the week or month or year. Just love them and really want to thank you all for making our short number of hours on a weekly basis all play out well.
Lastly, I really want to thank all of you. It's not something that I've experienced often in my career, but I really must say that you all understand our business very well. You take the time to understand it. You take the time to engage with us. We are grateful to you for that. You make us better as a company.
And I love working with and engaging with people that make us better each day. And I really do believe that you all do that for us. So, thank you for being here for Perform. Thank you for your support. And we look forward to our continued engagement in the future. Thank you very much.
Great.
Have a great day, and do you have one final comment?
The team will linger a little bit if you have some parting questions. But other than that, thanks for coming.
Thank you very much.
Thank you.
Safe travels.