Thanks. All right, continuing the sessions before lunch, I'm Sanjit. I'm the infrastructure software analyst on the Morgan Stanley software research team. Thrilled to have the management team from Dynatrace, Chief Executive Officer, Rick McConnell, and Chief Financial Officer, Jim Benson. Rick and Jim, thank you for joining us at the TMT conference once again.
Thanks, Sanjit.
Always good to be with you.

Awesome. Before we get into the discussion, for important disclosures, please see the Morgan Stanley Research Disclosure website at www.morganstanley.com/researchdisclosures. I'm gonna start out with my comments. I'm a big bull on Observability as a category, not just for this year, but going forward, particularly in the agentic world. We'll dive into those conversations. Just to give us some context, Dynatrace is coming off a couple of strong quarters. Net New ARR has sustained at 16% constant currency for the second quarter in a row. The ARR base is now up to $1.9 billion. Got an operating margin in the high 20s. Trailing twelve-month free cash flow margin is 32%. You know, squarely in Rule of 40+ territory.
We are at a time in the market, though, where there's a lot of revisiting of first-principles thinking when it comes to software providers and how they create value for their customers. Rick, with that as the context, what are the core problems that Dynatrace is helping customers solve today, and what problems will Dynatrace help customers solve going forward?
The simple way, I think, to begin, Sanjit, is, number one, through observability we deliver resilient software. At the end of the day, there is no time in history that we can recollect where delivering software that worked perfectly was more critical. That is foundationally number one in terms of priorities. In the observability space overall, I would say we are in an era where observability generally is becoming absolutely mission-critical to essentially every company in the universe that is delivering core software. The second piece, which is evolving, and evolving rapidly, by way of the problem we're trying to solve, is reliable AI.
Mm-hmm.
It isn't just about delivering software that works. It is about delivering AI-first workloads that actually are delivering the outcomes, delivering the content that you're expecting to be delivered.
Yeah. That makes a ton of sense. Maybe we can dive a little deeper into the product innovation cadence at Dynatrace. There's a lot of buzz coming out of your conference, Perform, held at the end of January. We'll get into the specific product announcements. The big theme from my point of view was around Dynatrace Intelligence, which represents the next major evolution of the Dynatrace platform. Can you give us a sense of what makes up the Dynatrace Intelligence platform? What capabilities and value will this unlock for customers?
Dynatrace Intelligence is what we announced at our Perform conference about a month or so ago. It fuses together the innovations of deterministic AI along with agentic AI. Deterministic AI is what we at Dynatrace look at as really our superpower.
Mm-hmm.
This is what we would argue we do better than any other observability company on the planet. The reason is because we've had 20 years of context building systems based on an underlying data lakehouse with Grail, a software topological map using graphing in Smartscape, and elements of artificial intelligence, beginning with causal AI, to predictive AI, to generative AI, all of which are designed to indicate to a software developer or software provider specifically what is happening in that software environment at any given moment, and when something breaks, what broke, with a very high degree of clarity, specificity, and accuracy so that you can take immediate action. We do this for the largest organizations on the planet, which have billions of interconnected data points that we are analyzing and providing results and analytics against in real time.
That deterministic AI, that foundation of what precisely is happening by way of analytics in your environment, then sets up the foundation to be able to take agentic action against that set of analytics. That's where the agents come into play. We believe that what the market will hear, what customers will hear from basically every observability player is, "I've got agentic AI. I'm delivering agents. They can take action." Our supposition when we get into things like proofs of concept and others is that you have to start with reliable, trustworthy input to those agents. Otherwise, those agents are gonna be taking actions that are guesses.
Mm-hmm.
The result of that is we believe that Dynatrace Intelligence is really unique in the market space of delivering both what the answers are, what the analytics tell you, along with the agents linked into that then can take action. By the way, those agents and that agentic framework are really an ecosystem of agents, not just Dynatrace agents, that can take action, for example, through hyperscalers, through ServiceNow, through Atlassian, through others, to essentially enable Auto-Prevention, Auto-Remediation, Auto-Optimization in your overall environment, which gets you back to where we started in your very first question, which is resilient software and reliable AI.
Yeah. My next question was around some of the things you announced around agents. Before I get to that, if we zoom out, at least from my point of view, on why I'm bullish on this category: we think about the attach rate of observability to agentic deployments. Do you feel like that's gonna be as high or even higher than in, sort of, the cloud-native application era? I mean, these agents are gonna be accessing critical business systems. They're gonna be calling external tools. They're gonna be interfacing with your end users. Like, is it sort of obvious that this is all gonna have to be monitored, tracked, logged, from your guys' perspective?
We have a theory, which I would say is early stage at the moment, but it speaks to precisely what you're suggesting. That is, today, if we look at the preponderance of workloads that happen across enterprises, the largest enterprises that we typically would sell to, maybe 30%, plus or minus, of traditional workloads are observed-
Mm-hmm.
by Dynatrace.
Right.
Why is that? Well, you know, you need to observe your primary set of infrastructure, your primary mobile app, your primary website, whatever those workloads may be. I was speaking to a customer recently down in Australia, and their comment was, "We have 2,000 apps. They don't all have the same level of criticality. We don't need to observe all those workloads." In the case of agentic AI especially, and in an LLM environment that is producing probabilistic outcomes, we believe that you really are gonna need to observe darn near 100% of those workloads.
Because it is gonna be sufficiently independent, independently operating, that you are going to have to do extra work to have systems that can give you the confidence that you're delivering resilient capabilities. You're also gonna have to have systems that are mission-critical to delivering reliable AI outcomes. The result of it is we believe that, you know, in this world that is evolving rapidly to an AI-first world, two things happen. Number one, explosion of workloads. You just have more raw workloads. Then the second is, to your very point of the question, you need to be able to rely upon those outcomes in such a way that probably drives more observability as a penetration rate against those workloads.
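To make the two effects Rick describes concrete (more raw workloads, plus a higher observed share of them), here is a small arithmetic sketch. Every number in it is hypothetical, chosen only to illustrate how the effects compound; none are Dynatrace figures.

```python
# Illustrative arithmetic only: every number here is hypothetical,
# not a Dynatrace figure. It shows how the two effects described above
# (more workloads, higher observability penetration) compound.
traditional_workloads = 2_000       # hypothetical enterprise app count
traditional_penetration_pct = 30    # ~30% of traditional workloads observed

agentic_workloads = traditional_workloads * 3   # assumed workload explosion
agentic_penetration_pct = 95                    # "darn near 100%" observed

observed_before = traditional_workloads * traditional_penetration_pct // 100
observed_after = agentic_workloads * agentic_penetration_pct // 100

print(observed_before)                    # 600
print(observed_after)                     # 5700
print(observed_after / observed_before)   # 9.5x more observed workloads
```

Under these assumed inputs, a 3x workload explosion combined with near-total penetration multiplies the observed surface area by roughly an order of magnitude, which is the shape of the tailwind being argued for.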
Yeah. Makes tons of sense. Let's talk about some of the agentic capabilities you introduced at Perform, your user conference. You have an agent for site reliability engineers. You have an agent for development. You have agents for security teams. Which of these are you most excited about from a monetization perspective, and how are these domain-specific agents priced?
Sort of like asking what's your favorite kid. You know, I mean, it's hard to say. I think that they are all critical. The way to think about the agents is first you have the foundational agents. These are agents like an SRE agent that would tell you root cause analysis, for example. You need to know specifically what's happening. Sitting on top of that, you have sort of management and supervisory agents that can direct traffic. Do those agents tell a Dynatrace agent to take action to resolve a particular incident, you know, maybe to turn off a feature, for example, which we can do internally?
Or do you assign that to a third-party agent, like an AWS agent that may provision more storage, or a ServiceNow agent that might take some workflow action? That's sort of the next layer. The third layer would be those agents that actually could take action to resolve issues. You have a series of ecosystem agents that integrate. It is sort of a very thoughtful stacked map that can define which agents are taking action. While I see that as critical to the overall architecture and the architectural topology, foundationally the more critical piece, I think, for investors and others to take away, even customers with whom we speak, is this: start with a deterministic foundation. You then have the confidence to take agentic action.
We, Dynatrace, have produced an architecture through Dynatrace Intelligence that enables you to take that agentic action thoughtfully, either through Dynatrace agents or third-party agents, to be able to deliver against those elements of essentially an auto-correcting software ecosystem, which is ultimately what we all want to be able to deliver.
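The layered structure Rick outlines, deterministic analysis feeding supervisory agents that route work to first-party or ecosystem action agents, could be sketched roughly as below. All class, function, and agent names here are hypothetical illustrations for this transcript, not Dynatrace interfaces or APIs.

```python
# Hypothetical sketch of the stacked agent model described above.
# None of these names correspond to actual Dynatrace interfaces.
from dataclasses import dataclass

@dataclass
class Finding:
    """Deterministic output: a specific, high-confidence root cause."""
    root_cause: str
    component: str

def deterministic_analysis(telemetry: dict) -> Finding:
    # Stand-in for the deterministic AI layer: emits an answer,
    # not a guess, based on observed telemetry.
    return Finding(root_cause=telemetry["anomaly"], component=telemetry["service"])

def supervisor(finding: Finding) -> str:
    # Supervisory layer: directs traffic to the right action agent.
    # Internal agents handle app-level fixes (e.g. disable a feature);
    # ecosystem agents (hyperscaler or workflow tools) handle the rest.
    if finding.root_cause == "bad_feature_flag":
        return internal_agent(finding)
    return ecosystem_agent(finding)

def internal_agent(finding: Finding) -> str:
    return f"disabled feature on {finding.component}"

def ecosystem_agent(finding: Finding) -> str:
    return f"opened remediation workflow for {finding.component}"

result = supervisor(deterministic_analysis(
    {"anomaly": "bad_feature_flag", "service": "checkout"}))
print(result)  # disabled feature on checkout
```

The point of the sketch is the ordering: the probabilistic action layer only ever acts on a deterministic `Finding`, which is the "answers, not guesses" argument in code form.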
Awesome. Sticking on the theme of product innovation, you launched your next-generation Real User Monitoring service powered by Grail, Smartscape, and Advanced AI. In the context of the roughly $100 million consumption run rate you've reached with Logs, the question is: given the traction you're seeing today with Digital Experience Monitoring, and now with this next-gen platform for RUM, how confident are you that this can be your next $500 million or $1 billion+ business?
Well, I will tell you that, you know, we already have $400 million+ businesses. One of them, obviously most recently, is Logs. Our DEM business is well over $100 million, and our Infrastructure Monitoring business is actually the second-fastest-growing business, interestingly enough, next to our Logs business. There's obviously Full-Stack for APM. You know, the expectation across all these categories, and some of the things that Rick talked about, is more workloads across a broader stack. The biggest sales play that we've been able to drive has been end-to-end Observability. These are customers that are looking to consolidate fragmented tools onto one platform. You see it in our land sizes for new logos. You see it in the expansions that we're doing.
These are gonna be primary sources of growth now and in the future.
I wanna have a discussion with both of you on Dynatrace's defensibility in the era of AI. Let's get an update first on just some of the trendlines of the business. I wanted to walk through this with you, Jim. In terms of the ARR performance, we've seen constant currency net new ARR stabilize at 16% for multiple quarters after years of deceleration. Can you talk us through the specific factors that contribute to stabilization? Do you feel like this is kind of the new baseline for growth as we look forward into fiscal year 2026?
Yeah. I appreciate the question. Again, to your point, we've had three quarters of stabilized ARR growth at 16%.
Mm-hmm.
We've had three consecutive quarters of double-digit Net New ARR growth, which obviously fuels ARR. Our guide for the fourth quarter at the high end would suggest this continues. To your point, we haven't done this in several years. You say, "Well, what has caused that?" Well, what has caused that were the changes we made.
Yep.
Basically, you know, almost two years ago now, we made go-to-market changes. We were very clear when we made those go-to-market changes: these were changes to go on the offensive. We oriented more resources around large enterprise accounts, what we call the Global 500, the 500 largest companies or governments on the planet. We continued to fortify our partner ecosystem, in particular with the GSIs. With everything we put in place two years ago, we knew year one was gonna be a maturation year. We needed to get the resources staffed. We needed to get the alignment going. We changed some compensation plan designs. We knew year one was gonna be a period of building.
What you're seeing this year is execution consistency, which is exactly what we expected when we built the plan two years ago. The maturation of the go-to-market model has continued to advance. I expect that will continue going into fiscal 2027. Even though we're not going to guide here, if you look at fiscal 2027, there's a lot of momentum in the business, and we expect that if we can continue to execute like this, you'll see it continue into next year.
To follow on that point, this is an area of question that I get asked a lot by investors. The underlying consumption in the platform, which you guys have referred to, is growing, you know, north of 20% and outpacing the subscription revenue. In terms of the lag between consumption growth and that materializing in ARR, what's the best way to think about those two dynamics?
Yeah. I mean, I get that question a lot, because one of the things we wanted to make sure we shared with investors is what's happening with the underlying growth in the business, which is how customers are consuming the platform. You know, obviously ours is a subscription model, and so revenue is ratably recognized. That isn't always how consumption occurs.
Mm-hmm.
If ARR is growing 16% and consumption is growing 20%, they will converge. You know, the challenge is the timing of it. There are a lot of dynamics that go into when that will occur. We do look at something internally: the consumption-to-ARR ratio. The way to think about that is the headroom. What is the headroom a customer has before they need to do an expansion? For us, it's about continuing to drive more consumption. One of the things that we've done is, even though the model is a subscription model-
Mm-hmm.
At its heart, the Dynatrace Platform Subscription model is a consumption model.
Mm-hmm.
Customers commit to a dollar amount. They can commit to a term, and then they consume. It's a frictionless model. They can consume 'cause they have the rate card for every capability on the platform. They're getting value. We can work with them on driving more adoption of different product capabilities. They'll consume more, and the faster you consume, the sooner you burn through your commitment early and do an expansion. I expect that we will continue to provide that as a metric: how are we doing around driving consumption. We now have teams of people measured on this, compensated on this. We have specific product strike teams for Logs measured on consumption, product strike teams for DEM measured on consumption, product strike teams for Application Security measured on consumption.
Our CSM teams are also measured on consumption for the accounts that they support; they're not product specific. We've advanced a bunch of activities that are very consumption oriented. That wasn't the case two-plus years ago, when it was a very SKU-based model. Now it's get 'em on the Dynatrace Platform Subscription and drive consumption.
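As a rough illustration of the commit-and-consume mechanics Jim describes, consider the sketch below. The dollar commitment, the rate card, and the consumption figures are all invented for the example; they are not Dynatrace pricing or customer data.

```python
# Hypothetical commit-and-consume math for a DPS-style model.
# All dollar figures, rates, and unit counts are invented for illustration.
annual_commitment_cents = 1_000_000 * 100   # customer commits $1M for the term

# Invented per-unit rates, in cents, for three platform capabilities.
rate_card_cents = {"full_stack": 8, "logs": 2, "dem": 4}

# Invented monthly consumption (units) across those capabilities.
monthly_units = {"full_stack": 900_000, "logs": 2_500_000, "dem": 400_000}

monthly_spend_cents = sum(
    monthly_units[k] * rate_card_cents[k] for k in rate_card_cents)

# Headroom: months until the commitment is fully consumed at this run rate.
months_to_burn = annual_commitment_cents / monthly_spend_cents

print(monthly_spend_cents / 100)   # 138000.0 dollars per month
print(round(months_to_burn, 1))    # 7.2 months: the commitment burns down
                                   # early, prompting an expansion before renewal
```

The mechanic the math captures: because consumption draws down a dollar commitment against a rate card, faster adoption shortens the runway and pulls the expansion conversation forward, which is why consumption can grow faster than ARR before the two converge.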
That's great context there. One of the things I've been saying all week, Rick, is that my conversations with investors at the beginning were sort of like, "Sanjit, congratulations. You cover infrastructure software. You cover data platforms. Your companies don't have seat-based models. You're so lucky." In the last couple of weeks, everything sort of gets questioned, right, in terms of defensibility. I wanted to spend some time with you, Rick, talking about how Dynatrace is positioned in this era of AI, giving you some of the scenarios that I get asked about and getting your perspective.
One of the questions I've been getting is: How does the value proposition of Dynatrace, the Dynatrace platform, change when agents are doing the investigating and triaging versus human site reliability engineers or DevOps personnel, you know, working through dashboards?
On that topic, Sanjit, specifically, I would say that we expect exactly that evolution over the coming years. By the way, what the investment community is talking about at the moment, in most cases, is not what customers are talking about with us at the moment. There's a broad disconnect, I would say. But if we look out over the course of the coming years, we do expect that.
Humans and users, if you will, will become a relatively lower consumer of the Dynatrace platform, and agents will become a relatively higher consumer of the Dynatrace platform. That does not in any way translate, in my view, to any disintermediation of Observability by AI. Rather, we see Observability as being mission-critical to these AI workloads, where agents are taking actions that have to result in reliable outcomes. You're simply, in our view, not gonna have a probabilistic system providing input to a probabilistic system and delivering an outcome that is trustworthy for the kinds of organizations with whom we do business. It is on that sort of notion that we believe Observability, and in particular, with some bias, Dynatrace, can and should become the control plane for reliable AI.
That really is based on an architectural moat of, as I mentioned earlier, Grail, Smartscape, and Dynatrace Intelligence, the various technologies that we have built into the platform to deliver both deterministic AI and certainty of answers that can be trusted and acted upon in a reliable way to deliver the AI outcomes of the future. Whether those outcomes are delivered to an end user or by an agent is, in some senses, not that impactful to our overall business model. In fact, to the extent agents are a bit chatty and will consume more analytics than an end user would, then if anything, we believe that's a tailwind to Dynatrace, just as AI broadly is a tailwind to observability and Dynatrace by virtue of generating more workloads.
One thing I'd add on that, Sanjit, is that we're not a seat-based model. Because we're not a seat-based model but a consumption-based model, we monetize through consumption. Get them on the platform through the Dynatrace Platform Subscription. We don't even have to change our monetization model or the way we go to market with product packages. It's already in place.
Yeah. Take advantage of those elements, yes. Correct me if I'm wrong, but one of the marketing messages that Bradley has had for years, even before the AI moment, was literally "answers, not dashboards," right?
Answers, not guesses.
That actually came from Rick.
Yeah. "Answers, not guesses" is one of them. The other thing we would say is, you're right, "answers, not dashboards." It is really critical for our customer base to get precisely to an answer, not to try to ascertain what's happening in the environment through a dashboard and through alerts in that environment.
Awesome. Let's go to the second kind of flavor of investors' concerns, on the category more broadly but also specific to Dynatrace. This other angle is the ability to combine open-source tooling to collect metrics, traces, and Logs with an agent, perhaps from one of the model labs, to reason over the data and execute an incident response. I guess what investors are getting at is the potential for customers to manage Observability themselves at theoretically lower cost, or, even more nuanced, to negotiate better pricing when it comes to their Dynatrace renewals and bills. Why is this line of thinking off base?
My response to that is, look, the primary deployment of observability throughout the history of observability has been DIY. It is, relatively speaking, quite recent that companies like Dynatrace and other observability companies have come into the fray. DIY continues to be feasible: you could use open-source tools, you can use OTel, OpenTelemetry, you can bring in these sorts of elements, and you can manage it on your own. The fact of the matter is that it is getting more and more difficult to do each and every day. Now, might an LLM provider decide to do that for its own infrastructure? Maybe. Why? Because that is core to their business. Delivering a resilient LLM that has reliable AI output, as you can imagine, is quite core for the LLM providers.
In the case of an enterprise, the largest banks, the largest healthcare organizations, the largest airlines, delivering a dynamic end-to-end observability solution that can process billions of interconnected data points contextually in real time is super complicated, and it takes the sort of broad-based platform architectural moat that we've described. That is not, in our view, a likely outcome, certainly for the vast majority of large enterprises.
If we call this category monitoring, it goes back, you know, to the mid-to-late 1990s. One of the things about this category, monitoring, observability, is that it's been highly tied to changes in the compute cycle. The history has been that the leader in one cycle doesn't typically stay the leader in the next cycle. I actually think in this category, Dynatrace is one of the true success stories. You guys were a leader multiple cycles ago when we were building on-prem Java applications. You guys, you know, innovated, rewrote the platform from a clean sheet of paper, and you're looking like aces today. In terms of this broader AI debate, what are the ingredients of the business that allow Dynatrace to stay on top of the innovation frontier with potentially a new platform shift ahead of us?
Well, we've talked about some of it. I think that Dynatrace Intelligence is a core part of that. This notion that it's not just about deterministic AI or agentic AI on their own, but about using deterministic AI to drive agentic AI outcomes, becomes particularly critical, and doing so in real time.
That dynamic, in our view, speaks to the shift we've talked about toward agents as consumers, if you will, of observability data. In that environment, I would say that structure, that architectural context, is even more critical to deliver. That becomes sort of the next generation. Even as AI-first sorts of models evolve, those models are going to evolve in a way in which observability foundationally becomes more critical. You really do have to have the deterministic piece, and I'd say that is where Dynatrace differs from others in the market: in being able to provide that underlying foundation for success.
I think the other thing, you know, just covering this space, is the context of investors debating, with the cost of code going to zero, that customers can now build anything, right? I think what you guys have been doing, what some of your peers have been doing, these aren't tools. These are distributed compute platforms that process billions, trillions of data points...
Exactly.
In real time. It's not like we can go out and build this easily. This is, you know, pretty hardcore stuff.
Yeah.
Some of what you're suggesting I think is exactly right, which is that, at least in our view, it is much easier for an LLM, as you're vibe coding something, to rebuild something that has a standard workflow.
Our workflow at any particular customer at any given moment is highly dynamic, highly variable, depending on what's happening at that moment in time, based on a data plane and contextual data as input to that system that is inherently different than it was seconds ago, let alone minutes or hours ago. That dynamic element means that the platform always needs to be learning. That is a shift away from producing a piece of code for a moment in time, and it takes that sort of domain expertise of the individual environment, in the context of the overall platform and its generation, to deliver meaningful value.
Yeah. That's great context. We've talked about the secular debates. Let's get down to the field level and talk about some of the things that are going on on the ground in the business, starting with the go-to-market progress. We're about, I think, Jim, you mentioned, almost two years into the go-to-market changes. You stated that visibility and confidence are greater now than a year ago, with pipeline also accelerating. Where are you seeing the most success, and which elements are still maturing?
I'd say it's playing out about as we expected. Again, when we outlined this almost two years ago, where we are on our journey is about what we expected. You know, I'd say the number 1 sales play, I think I may have mentioned it earlier, is end-to-end observability. We have three sales plays. We have end-to-end observability. We have an APM land play where you land, you do a POC, and you expand from there.
Mm-hmm.
We kinda have a cloud-native play as well. I would say universally, the most successful sales play has been end-to-end Observability: we have a sales organization that knows how to sell it, the value proposition is very clear, we can actually allow customers to save money, they consolidate tools, and they can get a better outcome. Even though I talked about this two years ago, Sanjit, as an emerging trend-
Mm-hmm.
This is a prevalent trend now. More and more enterprises are looking to consolidate fragmented tools onto one platform, and that's not unique to our particular industry; you're seeing it in security and other places as well. It has been a source of growth, and I think it'll be a continued source of growth, because many companies are still not doing this. I'd say an area where we're making great traction, and we're very proud of this, is the fact that we hit our $100 million milestone for Logs. That's just the tip of the iceberg. To your point, right, we have a lot more growth to be had within Logs.
40% of our customers leverage our Logs solution, and the cohort classes continue to grow; they start on the platform smaller and have grown significantly. Between end-to-end observability, Logs with its growing use cases, and, as you will also see, AI-native workloads, these are all areas that will continue to mature. The good news is the go-to-market motion is in year two, which is where we thought the productivity improvements would begin. We expect those productivity improvements to accelerate even further in fiscal 2027.
I mean, to your point on the consolidation buying behavior, you know, investors wanna look at this market as being crowded. There are a lot of players; you could probably name a dozen different ones, but there are probably not that many that can pull off the consolidation deal, right? When I think about that, and then look at the success you've had in winning large deals, with the pipeline being constituted with deals over $500K and $1 million, it kind of speaks to your ability to really win those consolidation opportunities.
When we think about managing the timing variability inherent in these large enterprise deals while maintaining, you know, guidance accuracy, what rate of success are you seeing converting the pipeline, not only to hit your goals for fiscal year '27, but just to get more confidence into your guide?
I would say two-plus years ago, we were growing pipeline or closing pipeline, with an "or." The consistency around what we drove for Net New ARR quarter-to-quarter varied a bit. What we have now is we are growing pipeline and closing pipeline at the same time.
Mm-hmm.
Pipeline growth is very strong. I'd say the quality of the pipeline is also very strong.
Mm-hmm.
We measure that by just inspecting the pipeline and looking at kind of where we're at from a sales stage perspective. I think that comes from the go-to-market changes that we made.
Mm-hmm.
Again, we made go-to-market changes to get closer to customers. You know that one of the big changes we made around the enterprise accounts was to go from maybe 10 accounts per rep to, say, four to five, which means they're a lot closer to the specifics of their customer base, which means when you're looking at pipeline and deal flow, there's a lot more intimacy around what exactly is happening.
Mm-hmm.
With that comes confidence now around closing. Your timing might vary a little bit, but even though we continue to see end-to-end deals growing in number and growing in size, I'd say we have confidence in our ability to not have to land all of them in a given quarter; some will maybe close one quarter, some will close the next. I'd say we have building confidence that the consistency in the go-to-market execution just continues to advance, and I expect it will continue to.
That's definitely encouraging. I think from my seat, when I think about the last 12 to 18 months, it felt like you guys would execute on quarters and knock down some of those bigger deals. We'd see good ARR results, but because the pipeline was weighted toward those deals, you had to be more conservative on the forward quarter. Is part of the solution maybe getting a higher-velocity, more transactional blocking-and-tackling business that generates some of that ARR and that revenue as you guys go and penetrate your larger enterprise customers? Maybe, Rick, you can speak to the potential to build a higher-velocity sales motion. Maybe it's mid-market or upper mid-market. Just your thoughts on that.
Yeah. I actually think, Sanjit, what you're gonna see in a cloud-native, AI-first world is not a pivot, not a transition, but an evolution toward more departmental selling, in particular to the development audiences associated with cloud-native deployments. I wouldn't walk away from the session thinking, "Wow, Dynatrace is gonna go to SMB."
Mm-hmm.
Our value proposition really wins most often in the larger enterprise. It doesn't always have to win in centralized IT.
Mm-hmm.
I think it is just as pervasive, just as impactful in smaller groups within that larger enterprise, which then get aggregated. I think that is some of the transactional volume that we can imagine evolving into the future.
Awesome. Let's talk a little bit about capital allocation. You've announced a new billion-dollar share repurchase program. Maybe give us, you know, the why behind re-upping for $1 billion. What's the signal that you're looking to send to the market?
I'll take that. You know, I think it was May of 2024 that we put the $500 million authorization in place in the first go. We tripled what we spent on our buyback in the third quarter because we actually thought there was a significant value dislocation for the company, based on what we believe the prospects of the company are versus what we were valued at. We exhausted it. We doubled the authorization. Going back to our philosophy: number one, invest in the business. That is the first use of capital. Because we have $1 billion of cash-
Mm-hmm.
and we generate $500 million of free cash flow, we're in an advantageous position where we can both return capital through a buyback and opportunistically look for opportunities from an M&A perspective. You can expect that we will be buyers at current prices, probably at an even higher level than what we did in Q3. That's what we're going to do. From an M&A perspective, we are active shoppers but disciplined buyers. There are things that could fortify the platform, and fortify what we think is an opportunity to grow and broaden the observability use cases.
Maybe a last question to wrap up. In terms of stock-based compensation, how are you managing that, and what level of share dilution should investors expect? And in terms of getting more GAAP profitable, how would you sort of rank those priorities?
Well, we are a unique company in that we are GAAP profitable.
Mm-hmm.
We're not becoming GAAP profitable. We are GAAP profitable, which is unique in the software space. Relative to stock-based compensation, we've always been a very appropriate user of it. I think this year it will be around 15% or 16% of revenue. My expectation is that we'll drive more scale from that going forward. While others are trying to catch up on profitability, look at the profile of the company. The company has almost 30% operating margins, and that's with us being a cash taxpayer, where most tech companies are not.
Yeah.
We generate, on an equivalent basis, call it 32% free cash flow on a pre-tax basis. For us, the things that we're doing have all been about how we accelerate growth in the business. When you accelerate growth in the business, you'll actually drive more leverage; you get both at the same time. We're quite optimistic about the opportunities ahead for the company, where we're at and where we're positioned, both on the go-to-market side and on the product and platform side.
I'd say this is an exciting time for Dynatrace, because a lot of what we've put in place means we're gonna see this momentum continue into fiscal 2027.
With that, we're out of time. Rick, Jim, thank you for giving us the update on Dynatrace. Best of luck going into Q4, and thank you for coming to the TMT conference.
Thanks, Sanjit.
Thank you.
All right. Thank you all.