All right. We are continuing the afternoon sessions at the Morgan Stanley TMT Conference, day one. Super thrilled to have the management team from Elastic join us. We have CEO Ash Kulkarni and Chief Financial Officer Navam Welihinda. Ash, Navam, thank you again for joining us at the Morgan Stanley TMT Conference.
Thanks for having us.
Thanks for having us.
For the quick disclosures: for important research disclosures, go to www.morganstanley.com/researchdisclosures. With that, let's kick off the conversation. Definitely an interesting time in the market. Investors are debating all sorts of aspects as it relates to AI. I think people are coming back to, like, a first-principles level of thinking when it comes to software companies. With that as context, as investors assess which software companies will prove durable in the AI era, can you talk today about the problems you solve and the core value proposition you deliver for customers today, Ash?
Yeah. The best way to think about Elastic and what we do is to think of us as a data platform, and in the context of AI, what we are relevant for is providing the right context to large language models so they can do their job, so they can, you know, do the task that they're working on at the moment. Fundamentally, the best way to think about it is that most organizations, you know, take your organization as an example, sit on petabytes of data, and every day you're creating fresh data, often petabytes of it. You can't move that data to a large language model.
You have to bring the model to the data, because it's physically impossible. It's gonna be too expensive to do it otherwise. When you bring the model to the data, it really comes down to how do you quickly, in real time, tell the model exactly what information from all of these petabytes of information that you might be sitting on is relevant to that particular question, that particular task. That's where we come in, and our core differentiation is in how we provide that specific data relevance, that specific data context, depending on the question that's being asked. That's how we get used in all the AI use cases today.
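To make the retrieval pattern Ash describes concrete, here is a minimal sketch, assuming a search client shaped like the Elasticsearch Python client and a hypothetical llm_client with a generate method; the index name and document fields are illustrative, not an actual Elastic API.

```python
# Minimal sketch of "bring the model to the data": query the data
# platform for the few relevant documents, then pass only those to
# the LLM. Client objects, index name, and fields are hypothetical.

def answer_with_context(question, search_client, llm_client):
    # Retrieve a handful of relevant documents, not the whole corpus.
    resp = search_client.search(index="enterprise-docs", q=question, size=5)
    context = "\n\n".join(hit["_source"]["body"] for hit in resp["hits"]["hits"])

    # The model sees only the retrieved slice of the petabytes.
    prompt = f"Using only this context:\n{context}\n\nAnswer: {question}"
    return llm_client.generate(prompt)  # hypothetical LLM interface
```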
That's a great start to the conversation. You guys reported earnings last week, let's go through some of the highlights from earnings and talk through any of the debates coming out of the quarter. You printed a strong set of results last week. Sales-led subscription revenue growth accelerated to 19%, I think from 17% the prior quarter. Ash, can you walk us through the highlights? Navam, feel free to chip in.
Yeah.
What is the market missing when it comes to last quarter's results?
Like you said, strong sales-led subscription revenue. You know, operating income was again strong. What we also talked about on the call was that we had a record number of million-dollar deals. All the go-to-market changes that we made about seven quarters ago are really paying off. We kind of, you know, segmented our sales teams into hunting territories and farming territories. That segregation is really helping in terms of signing more strategic deals and growing our business. In terms of AI adoption, that has been, again, very strong for us. In the cohort of 100K customers, which is responsible for, you know, the bulk of our revenue as a company, AI adoption is growing.
Now almost a quarter of our, you know, 100K customers are using us for AI. I'd say there are two questions that keep coming up. One is around cloud versus sales-led subscription revenue. I think that's a really important one for me to constantly clarify. You know, we are seeing more and more interest in using our platform in what we call self-managed environments, where a customer takes our software and then runs it either in their own data centers or in their, you know, modern cloud environments, but in their own private VPCs. We are seeing that grow for several reasons.
One, in the U.S., in regulated industries, a lot of the AI use cases tend to be on data that's, you know, sensitive, that's proprietary. They wanna keep that within their own domain, within their own control for all kinds of, you know, reasons. We are seeing more usage in government. Again, there you have secret and top secret environments where there is no notion of a marketplace. The only way, you know, they can deploy software is in self-managed. In Europe, we are seeing a growing demand for running technologies in sovereign environments, as they like to describe them.
That last piece has been more of a trend in the last couple of quarters. I would expect that structurally these things will continue. You know, it's a huge asymmetric advantage for us, because when you look at our competition in security or observability or even in AI, there aren't many companies out there that can say you can get this entire platform with all of these capabilities, not just in a cloud form factor, but also running in your own environment. I think that's an important one; we are constantly reminding our investors to look at the entirety of the business and sales-led subscription revenue, not just cloud.
The second thing is AI adoption. Now we have, you know, a quarter of our customers in the 100K cohort using us for AI. As more customers adopt AI and as their AI usage continues to grow, that naturally becomes a tailwind to our business. Those are the two areas where we get the most questions, and where I spend my time educating people.
Yeah. To your point, I think if you looked at the other subscription line, which captures the self-managed piece, that accelerated by three points during the quarter. The other question that I've been getting from investors since earnings, and, Navam, I'm gonna direct this to you.
Yeah.
The context, again, as Ash pointed out: 30% growth in million-dollar commitments, RPO growth up to 22%. When we looked at the Q4 guide, which is the next quarter, it implies deceleration on a sales-led subscription basis. You printed 19% constant currency; you're guiding 15%. Total revenue, you're looking for about 13%. You flagged three fewer days in the quarter in Q4 as one way to think about this, and the typical risk adjustment in guide versus actuals. Are there any other factors that investors should be considering in terms of contextualizing that Q4 guide?
I mean, I think the big message that Ash talked about in the first question, about how well our execution is going on the sales-led subscription revenue line, remains the case in the third quarter. We've now got a full-year number for sales-led subscription revenue: 20% as reported and 17% in constant currency. That means this is the fourth year running that we're compounding sales-led subscription revenue at or above 20%. On a constant currency basis, we've been at 20% for the past two years and 18% now as guided for the full year.
We are remarkably pleased with the underlying strength that we're seeing in the business, the commitments that are driving it, and how consumption is going, right? Overall, very positive on how the business is going. On the fourth quarter, we always give you a guidance number that is risk-adjusted, that has prudence built into it. You mentioned the fourth quarter; when you're thinking about it sequentially, there are three fewer days in the quarter. You've got to think about that as you think about the sequential view of how fourth-quarter absolute revenue compares to the third quarter.
Outside of those things, we feel good about the business. We've risk-adjusted the number and given you the fourth quarter guide. As always, you shouldn't over-rotate on a single quarter. It's always about the trend and how the subscription revenue line continues to build, and that's something we're feeling good about for the full year.
Awesome. That's great context. Let's get into the meat of what's probably the theme across every software presentation: a software vendor's defensibility to perceived AI risk. When this year started, Ash, like, a number of investors were reaching out to me saying, 'Glad you cover infrastructure. You don't cover the seat-based models.' Our security analysts, in general, were feeling good. In the last couple of weeks, kind of everything has been questioned. I wanted to just dive into some of the debates and get your perspective on some of the AI risk debates.
Now, when it comes to Elastic, one angle that I hear is: as the cost of software and software development goes to zero, does it become easier for customers to manage open source deployments, using AI and AI agents to just use open source for their data platform needs, for their search use cases or their observability use cases, without having to pay Elastic? What is your argument against that line of thinking? Where would the skeptics be wrong when it comes to, hey, AI is gonna make open source deployments that much easier, and we don't have to pay for the commercial, proprietary offerings?
Yeah, a couple of things there. The first is, you know, the way we've built our software stack, it's not that the paid version is the same software that we just operate for you with the same features. We have a free version that has a certain amount of capabilities and functionality in it, and then we have paid versions that have incremental functionality that tends to be much more valuable, not just in terms of what you can do with it, but also in terms of driving greater efficiency in your hardware utilization. There's a lot of value in those features, and that's what people, you know, pay us for.
All of those are licensed in a certain way. If you, you know, use those capabilities, you have to pay us. Otherwise, you're violating a license, and, you know, that's something that's enforceable. The second thing is, you know, you talked about why wouldn't somebody just try and build this themselves and run it. The reality is, you know, you can write software using these AI tools. We use a lot of AI tools internally, heavily, and the usage is growing. I'm a big fan of what you can do with some of these technologies like Claude Code and so on.
It's a completely different matter once you've written the software to actually operationalize it, run it at scale, manage it, especially a data system. You talked about infrastructure software. You know, you talk about data systems, and it's not just ours, but ours and Snowflake and others out there. Like, these are systems. We have customers who are running literally hundreds, at times thousands of nodes, like just massive deployments that they're managing. To be able to run software at that scale is a very different thing than writing code. Like, there's a difference between writing and then operationalizing and managing.
If you imagine the cost involved, the risk involved, the effort involved in all of those pieces, why would an LLM or even an LLM maker choose to go down that route when it's much easier for them to just use the system that's already in place? The data is already sitting in that system. I mean, the cost equation would not make any sense for an Anthropic or an OpenAI or anybody to try and take that workload over. It would cost them more. It would cost the customer more. It would take more time. It would potentially introduce greater security risks and vulnerabilities.
It just makes no sense whatsoever. Now, yes, can you build UIs easily? Absolutely. Can you build simple workflows more easily? Absolutely. That's why I think that every software vendor is gonna have to really think about what their defensible moat is. Our defensible moat is our data store, right? All the work that we've done in terms of relevance and context accuracy, that's really the defensible moat, which we feel very good about. I don't think Anthropic's gonna try and recreate Postgres. They're gonna use Postgres. They're not gonna try and recreate Elastic; they're gonna use Elastic. I think that's what you're gonna see more of.
Yeah. I think in my conversations, one of the things I point out is that these are tools, right? These data platforms are tools. They're massive-scale, distributed, oftentimes cloud systems.
That's right.
Sanjit, the analogy that I'll offer, maybe, and it might make sense to you or it might not: you know, the operating system for the last 10+ years was really the cloud platform, right? That's where you went. It had all the compute infrastructure. You went there to write your applications. Even in those environments, you had data systems that you integrated with, because data would sit in those systems, they were specialized for it, and you would write all your application logic on the cloud platform. The new operating system going forward, in my opinion, is gonna be these language models.
These AI systems. Just as you had the cloud platforms, you're gonna have these LLMs. These LLMs are really optimized for reasoning; they're optimized for inference. They can do more than just deterministic, you know, development. But they're still gonna need data systems to store data and to retrieve context, for all of those reasons. I think you're gonna see that same parallel model here. Data systems are gonna continue to coexist.
Can I ask you one follow-up on the point you just made? You made the analogy of the data platform's role in the context where cloud was sort of the heart of the matter, and now LLMs become, like, the heart of the operating system. Does the role of the data platform change in one paradigm versus the other?
I think the way you do some of these queries changes, and you've already seen that, right? We're now talking about vector databases. We're talking about, you know... who was talking about context engineering five years ago? Nobody had any idea what that even meant. Nobody was talking about relevance, because all of these systems are probabilistic as opposed to deterministic. It's not SQL anymore; it's about, you know, relevance, and it's about, you know, vector queries and so on. Yes, the role does change; it does evolve. That's what we've been working on for the last five-plus years, just on the vector database side.
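As an illustration of that shift, here is a hedged example of a vector (kNN) query using the Elasticsearch Python client; it assumes an index with a dense_vector field called "embedding", and embed() is a placeholder for whatever embedding model produces the query vector.

```python
# Instead of a SQL predicate, the query asks: "find the documents
# most similar to this embedding." Assumes an 8.x cluster and an
# index mapped with a dense_vector field named "embedding".
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
query_vector = embed("how do I rotate my API keys?")  # hypothetical embedder

resp = es.search(
    index="docs",
    knn={
        "field": "embedding",
        "query_vector": query_vector,
        "k": 10,                # return the 10 nearest neighbors
        "num_candidates": 100,  # candidates considered per shard
    },
)
for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])
```

Relevance here is a similarity score rather than an exact match, which is what makes the results probabilistic in the sense Ash describes.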
I think the other thing that changes is that, more and more, you're gonna see that people are not gonna build with only humans in mind. For the last 40, 50 years, most of our applications have started with UIs and visualization layers and dashboards and reports and so on. If I have an LLM that's accessing a particular, you know, thing, it doesn't care about that being in a visual form. You're gonna have fewer consoles, you're gonna have more APIs. You're gonna have fewer dashboards, you're gonna have more direct access to the raw data, because the LLM knows how to process it.
There is a real shift that's gonna happen. You know, that's something that we care about, that's something that, you know, we've been working on. I think all software platforms are gonna need to evolve in that way, but they are gonna coexist.
Yeah. No, it's a huge theme. I think Keith and I wrote an agent report about a year ago on just the shift from the human-to-computer interface to the agent-to-computer interface. That's gonna be what we're talking about, I think, for a really long time. We've talked about the risk associated with AI. Let's talk about why Elastic potentially is an AI winner. Looking at the other side of the coin, where does Elastic play in the enterprise AI ecosystem? How is AI impacting growth today? Why will AI serve as a tailwind for the business in the years ahead?
I'll talk about the first part, and then I'll let Navam talk about the numbers. Just in terms of where we play in the ecosystem, we get used in a few ways in the core AI stack. We get used as a data retrieval platform for context engineering, so everything from vector search to the Jina models that we introduced recently: embedding models, re-ranker models. You know, if you look at the MTEB leaderboards, the Hugging Face benchmarks, the Jina models are some of the best out there. Like, they outperform all the other commercially available models for embedding and re-ranking.
We are seeing a lot of demand for Jina. Agent Builder, too; like, we are seeing really good traction. People are building SOC workflows that they're optimizing on their own. They're building SRE workflows on top of Agent Builder. We are seeing a lot of interest, not just in the core AI stack; we are seeing the AI stack now being used to give us a competitive advantage in security and in observability. That, we feel, is gonna be how our AI story plays out. It's not just gonna be in search; it's gonna be on multiple vectors. You wanna talk about the numbers?
Yeah. At the core, Elastic is a consumption-model business, so we monetize consumption by our customers, and AI workloads are inherently more computationally intensive, so they drive more consumption on our platform. We had our Financial Analyst Day in October of last year, and there we actually gave some very good data on the difference in consumption growth between people using AI and people who are not using AI. We quantified that difference as approximately 6% between those two cohorts. That 6% is an average number. There's a wide dispersion among those customers.
Some have many multiples of that 6% as the uplift, and some are earlier on in that journey, so they're less. The core is that we are seeing benefits on the revenue side from our customers using generative AI, which is driving a tailwind, and we're starting to see that in our 100K customers, which is where the majority of our sales-led revenue comes from. We're seeing more and more penetration of the 100K customers as the quarters go on. The second S-curve behind that is that every one of our 100K customers is on their own AI journey. Some are early, some are progressing; as that inflects, you're going to see the second leg of growth as well.
Yeah. You mentioned the investor day at the end of last year. I wanna revisit some of the midterm growth targets that you laid out.
Yeah.
If you can, lay out the sales-led growth targets and how you anticipate AI monetization will impact the midterm growth target. And over what time frame should the AI contribution become materially accretive to growth?
Midterm to us is approximately fiscal 2029. Our current sales-led subscription revenue targets are 20% as reported and 18% on a constant currency basis. I mentioned that we're seeing strong compounding of that number right now. We're also seeing AI adoption in approximately the mid-20s percent of the 100K customers. We're still generally early in the AI contribution among our customer base, but it is showing up in the total numbers as a tailwind to us. We feel very good, given how we've been executing in the third quarter, about continuing to compound our sales-led subscription revenue number, and the midterm targets are basically to get that 18% constant currency number to 20%+ in the fiscal 2029 time frame.
Structurally, that is going to be a lift rather than an inflection that you see in any given quarter. Over time, you're sort of gonna see this rising tide of revenue growth, so to speak, getting to that 20% as the first milestone, and then 20%+ beyond that first milestone as the midterm target.
Got it. That's very clear. Actually, I wanted to ask you a question around context engineering, but first I wanna pick up on a point that you made earlier about bringing a solution to market, not just individual pieces, whether it's vector search or something else. What does that solution look like? You mentioned Jina, embedding models, re-ranking models. We have ELSER and vector search capabilities. What other capabilities constitute a solution in the eyes of customers?
Yeah. The way we think about it is: what does it take for you to build an agent from soup to nuts within our environment? Now, keep in mind that you're gonna build a multitude of agents within an organization, and agents will talk to each other through protocols that are now becoming more and more standard, like MCP and A2A and so on. For us, like, the way we thought about it is, if I wanna build an agent from scratch, I'm gonna start with the data, and then I'm gonna start chatting with the data and assembling all the skills that that agent needs to do its job.
That requires everything from being able to pull the data in to begin with, to chunking it, to turning it into vectors, to being able to, you know, do re-ranking if I'm using multiple search techniques to retrieve the most accurate context. I might want to make sure that I can connect to an LLM directly within my environment without having to go outside. I also wanna make sure that I'm observing, providing some amount of observability on token usage and other kinds of FinOps activities, and doing some basic guardrailing.
All of those capabilities, to us, are what it takes to build a complete agent and then hook it up to whatever task you need it to do, including things like workflow, because these agents are not just about chatting anymore; they're about actually taking actions, which also increases the importance of the accuracy of the context. If you look at Elastic's platform today and compare it to where we started, you know, even two years ago, all we had was a vector database and hybrid search. Since then, we have introduced the Jina models, both embedding and re-rankers. We have introduced Agent Builder.
We have introduced workflow. We have introduced LLM observability. Everything together. We've introduced the Elastic Inference Service, which currently hosts our ELSER model along with the Jina models, but it also proxies out to other LLMs. You can do everything from within our environment without having to, you know, go bring your own license key or whatever. In the future, our goal is to also support open source models like Llama models, Mistral, et cetera, through that same Elastic Inference Service.
Soup to nuts, the ability to build everything that you need for building your own SOC agent or building your own SRE agent or building your own, you know, workplace agent for customer support or for improving Salesforce productivity or for legal or whatever you might need to do. That's the goal, and that's how we look at the fullness of the platform.
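A condensed sketch of that soup-to-nuts flow, with every callable injected as a hypothetical stand-in; it shows the shape of the pipeline Ash outlines (ingest, chunk, embed, retrieve, re-rank, reason), not a specific Elastic interface.

```python
# Soup-to-nuts agent pipeline, in outline. All callables are
# hypothetical stand-ins supplied by the caller, not Elastic APIs.

def build_and_run_agent(raw_docs, task, *, chunk, embed, index, search, rerank, llm):
    # Ingest: split documents and store both text and vectors.
    chunks = [piece for doc in raw_docs for piece in chunk(doc)]
    index(chunks, [embed(piece) for piece in chunks])

    # Retrieve broadly (lexical + vector recall), then re-rank for precision.
    candidates = search(task, k=50)
    context = rerank(task, candidates)[:5]  # keep only the best few

    # Reason and act: the model gets the task plus the curated context.
    return llm(task, context)
```

The re-ranking step is what reconciles multiple retrieval techniques into a single, most-relevant context, which is the "accuracy of the context" point Ash keeps returning to.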
In terms of the answer you just gave about what a platform looks like, can we marry that with what context engineering is? It's a new buzzword in the industry. We're hearing you talk about it, and other players in the ecosystem sort of talk about the importance of context engineering. Maybe define context engineering and how Elastic is playing a central role, becoming the contextual engine for agentic deployments?
Think of context engineering as the set of processes, the platform, the capabilities needed to provide a thinking engine, an LLM, with accurate context at every step of its journey. That context is everything from memory, as in what interactions did you have with it in the past, to retrieval, what specific documents from your corpus of exabytes of data does it need to look at to be able to answer the question, to specific known relationships. Not everything needs to be inferred. You know, within your organization, you have an organizational hierarchy, and there are rules on what access rights you have and so on. Those are deterministic rules.
You can just provide that information to the LLM as context for certain activities that it might need to do. It's the combination of all of these things, and that's what the data retrieval platform needs to be able to do. It needs to provide all of these capabilities so the LLM can do its job appropriately and actually deliver the outcome that you're trying to get out of it.
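A small sketch of that definition in code, assuming hypothetical helpers for memory, retrieval, and directory lookups; the point is that the context handed to the model is assembled from all three sources, two of them deterministic, before the LLM is ever called.

```python
# Context engineering, per the definition above: the model's context
# is assembled from memory, retrieved documents, and known facts.
# Every helper here is a hypothetical stand-in for illustration.

def assemble_context(user_id, question, *, memory, retrieve, directory):
    return {
        # Memory: what this user and the agent discussed before.
        "memory": memory(user_id, limit=10),
        # Retrieval: the few documents relevant to this question.
        "documents": retrieve(question, k=5),
        # Deterministic facts: org hierarchy, access rights, and other
        # rules that need no inference at all.
        "facts": directory(user_id),
    }
```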
Awesome. That's a fantastic explanation. When I look at the total revenue growth trends over the last multiple quarters, what I see is pretty durable growth. It's been in a very tight range, not yet accelerating. The question is, in our customer conversations, and I think you guys have even spoken to this, the search business is where growth has been improving, and that's what seems to ring most loudly when we do our own field work. Does that imply that the security and the observability businesses have been slowing down, or that there have been some headwinds to growth?
I know you guys are advancing the observability product pretty aggressively. Is that the right way to think about why we haven't seen a breakout in growth, even though it's been very durable?
Let me first tell you how the various solution areas are doing, and then I'll give you a different lens into how to think about the growth trajectory. In terms of our three solution areas, every quarter, just depending on the deal flow, there are differences in which solution does the best that quarter. Last quarter, in Q3, as an example, security was the best, followed very closely by search, followed by observability. Now, the way I think about it is, in search, obviously there's been a big tailwind from AI.
That has been something that's really helping us. In terms of security, because we were so early in delivering capabilities like Attack Discovery and a lot of the AI functionality that we delivered, we have been significantly ahead of the curve, and that is helping us win more and more deals. The CISA deal that we talked about is an example.
Yeah.
I mean, that's a pretty transformative deal. This is CISA, the organization that's responsible for security for all of the civilian agencies in the U.S. government, basically, you know, taking Elastic SIEM as a service to other agencies and trying to bring them onto that service. It's a very strong endorsement. Observability is growing at the pace of the log industry overall. The fastest growing part of the observability business, though, has been metrics. That has not been a place of great strength for us in the past. This has been something that, you know, has been at the back of our minds.
The challenge for us has historically been that our backend Elasticsearch is highly optimized for storing dense information, like logs.
Yeah.
What makes it optimized for dense storage is also what makes it inefficient at storing sparse data like metrics. We figured out how to build specialized backend stores within Elasticsearch when we started work on our vector database. We now have an incredibly optimized vector store within Elasticsearch, arguably one of the best performing in the industry. We are taking that same model and building a metrics data store that we expect to launch sometime in the middle of this calendar year. We've talked about it publicly at our ElasticON events, and that, we feel, is gonna give us the competitive differentiation that we need to compete heavily in the observability market and capture more of that market opportunity.
That's how we look at it. On the overall inflection or the growth rate, the one thing that I'll ask you to keep in mind is that every release or two, we have been consistently delivering capabilities that make our platform more efficient. If you look at the vector database product two years ago, we had HNSW and everything was represented in float16. You look at our vector database today, with binary quantization and all the features that we've released, and there are almost two orders of magnitude of improvement that we've made in efficiency. Think about that. Two orders of magnitude.
Yeah.
Which means that if somebody was paying X a year or two ago for a workload, they're paying a fraction of that today. That acts as a natural headwind. Why are we doing that? We're doing it because, A, it's not gonna continue forever. Like, I don't know how to quantize below a bit. We can store a dimension in a single bit; you can't reduce it any more than that, so the optimization is gonna asymptote after a while. And you want to be the best, you want to be the most efficient, because we are so early in this opportunity.
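Rough arithmetic behind that claim, under stated assumptions: a 1,024-dimension vector stored at full or half precision versus binary-quantized to one bit per dimension.

```python
# Storage footprint of one 1,024-dimension vector under different
# representations. Dimension count is an illustrative assumption.

dims = 1024
float32_bytes = dims * 4    # 4096 bytes per vector at full precision
float16_bytes = dims * 2    # 2048 bytes per vector at half precision
binary_bytes = dims // 8    # 128 bytes per vector at 1 bit per dimension

print(float32_bytes / binary_bytes)  # 32x smaller than float32
print(float16_bytes / binary_bytes)  # 16x smaller than float16
# Combined with other index and storage optimizations, the cumulative
# gain can approach the ~100x ("two orders of magnitude") cited above.
```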
We think of this as a land grab. The more workloads we get onto our platform, the more customers we get onto our platform using our vector database, the more that is gonna pay off handsomely in the future. The way we look at it is, even though these optimizations might act as a bit of a headwind on revenue now and don't result in an inflection, the underlying workload growth has been tremendous, and as we continue to progress and grab more share, I think this is what sets us up very nicely. I would not expect an inflection. I would expect steady growth that will continue to be up and to the right.
Yeah, 'cause you're playing for the longer-term share of wallet, which makes total sense. I wanna spend the last couple of minutes on sort of the capital allocation side of things, Navam. Given the steep declines in share prices across software, including Elastic's, do you anticipate having to issue more stock-based comp to retain key employees?
Yeah, we've been remarkably disciplined in our stock-based compensation this past year. Keep in mind that this fiscal year is an investment year for us. We're adding sales and marketing capacity, we're adding R&D compared to last year. Even with that investment, we're maintaining discipline on SBC as a percentage of revenue. SBC continues to be on a downward trajectory, modulo these investment years. We continue to be very disciplined. We're investing appropriately in headcount, but also being mindful of where the stock-based compensation is going.
With respect to, like, share repurchases, the level of share dilution...
Yeah.
...that investors should expect on an annual basis, and maybe the priority in terms of GAAP profitability. What's the latest thinking on those dimensions?
Priority one for us is obviously to make sure that we're investing organically to capitalize on the market opportunity that we have, particularly to meet and exceed our midterm targets of 20%+. In order to do that, you need to have sales capacity in the field at an appropriate level. The current investments that we've made in capacity are not just an increase in capacity; they're combined with productivity increases per rep on a single-digit basis, right? What that's telling us is that we're not pushing on a string. The conversations that our sales reps are having are resulting in better pipeline and better ACV for us on a quarterly basis.
We intend to continue to push on that as appropriate. AI is not disrupting human conversations; that's something you need to continue to invest in. So first and foremost, it's our midterm targets on the 20%+ line. Second is our focus on Rule of 40, making sure that we are adding enough on the free cash flow line as well, within reason, while maintaining enough growth on our top line. Third, we talked about the $500 million capital allocation strategy that we laid out during Financial Analyst Day. We're well underway. More than 50% of that total has been deployed to return capital to our shareholders through share repurchases.
We're very happy with how that's going. That's the order of priorities: one, two, three, which is top line first, then free cash flow, and then share buybacks. The net result of all of that is GAAP operating margin profitability over time.
Awesome. Well, thank you for laying that out. Thank you for giving us an update on the Elastic business. Thank you, Ash. Thank you, Navam. Appreciate it.
Thank you.
Thank you, Sanjit.