Elastic N.V. (ESTC)

Morgan Stanley’s Technology, Media & Telecom Conference 2024

Mar 6, 2024

Moderator

All right, good morning, everyone. Welcome to day three of the Morgan Stanley TMT Conference. We're super thrilled to have the management team from Elastic join us, CEO Ash Kulkarni, and CFO and COO Janesh Moorjani. Thank you both for attending the conference once again this year.

Ash Kulkarni
CEO, Elastic

Thanks for having us.

Moderator

Awesome. So let me just get through the disclosures and we'll kick it off. For important disclosures, please see the Morgan Stanley Research Disclosure website at www.morganstanley.com/researchdisclosures. If you have any questions, please reach out to your Morgan Stanley sales representative. So with that, I wanted to kick off the conversation, Ash, with your take on the budget and investment environment. You just completed your fiscal year, a really solid year. In the quarter, you reported 19% constant currency growth, cloud grew 29%, and you delivered 13% operating margins. I think what a lot of us in this room are trying to figure out is the state of the budget environment.

If you can give us a storyline of how fiscal year 2024 started and ended, and then as we go into fiscal year 2025, do we expect more of the same than what we saw last year, or do you see any sort of sense of your customer base having a greater capacity to invest going into next year?

Ash Kulkarni
CEO, Elastic

Yeah, so maybe just a slight correction first. We finished our fiscal Q3, and we are in our fiscal Q4, so we haven't yet finished the fiscal year. We finish it at the end of April. When we started the fiscal year, nine months and change back, we were coming off an environment where we had seen a lot of cloud optimization. We were seeing, at that point in time, customers across all segments getting quite aggressive about finding ways to reduce overall cost. We were seeing that in the cloud numbers and so on. And we leaned into it, and that's part of the reason why the guidance was the way it was. What's happened over the subsequent three quarters, I'd say, is a few things. One, we've seen cloud optimization stabilize.

People seem to be at a point where they feel like they're good. There's nothing more that they want to press on. But in every customer conversation that I've had, not just six months ago but even in the last three months, the sense that I'm getting from CIOs, folks who are responsible for significant budgets, is that although things have become a little more normal, the pressure hasn't completely gone away. So it's not like you're reverting back to 2021. Things seem to be stable. Things seem to be normal. But there is a recognition that budgets are still tight. On top of that, probably the area where we are seeing a lot of interest in doing more is GenAI. In some ways, it's being driven from the top down.

It's a CEO-level topic of discussion. It's a board-level topic of discussion. So there clearly is a bias towards it that we are seeing customers have. But that also means that in an environment where budgets aren't expanding significantly, it's putting pressure on other things. So from our perspective, the way we've gone about it is to really lean into the platform consolidation play. We started on that, really pressing the accelerator, about five or so quarters ago when we saw the stress in the macro environment. And that's been a great thing for us. We've seen a lot of customers consolidate onto our platform, and we've talked about that in our earnings releases. We are winning more share in Log Analytics and SIEM. As we go ahead, I'm not seeing anything different. So we are looking at this being the normal for some time.

But that's where we are.

Moderator

Yeah, that's a great overview and very consistent with what we've heard from some of the companies that have presented this week. To pick up on Ash's framing, Janesh, the sort of mix between optimization activity from the customers and then ultimately bringing on new workloads, whether it's GenAI or observability, security, conceptually, what would it take for Elastic to reassert 20% growth? What's the sort of framework to getting there?

Janesh Moorjani
CFO and COO, Elastic

Yeah, as Ash touched on, maybe just starting with some of the broader trends we've seen: the macro is generally stable. In terms of consumption, one of the things that we had talked about a few quarters ago, and the industry was talking about, was this trend of consumption optimization. That also seems to have leveled off, and things are relatively stable. And what we are seeing now, fundamentally, is that as people have been consolidating their workloads onto Elastic and making these commitments to us, they've been ramping their consumption and continuing to grow their usage of Elastic. That's what we saw quite strongly in Q2, and we built on that further here in the third quarter.

We feel very good about our overall positioning, from the standpoint of both the competitive landscape and the engagement that we've got with customers, not just on GenAI but on security and observability as well. We've seen good, balanced growth across those. The sales motions across logging and SIEM in particular are working really well. So for us, it's just a question of continuing to stay the course and continuing to drive execution. We'll see how we finish our fourth quarter here. That's obviously the seasonally biggest quarter for us in terms of bookings and sales activity, as it is for so many software companies. That will really set the pace for us in terms of next year.

Overall, as I look at the external landscape, I look at our competitive positioning, I look at how well the sales team has been executing, we feel really good about the future.

Moderator

Let's talk about the platform consolidation opportunity more, Ash. You mentioned it in response to the last question. In terms of the three solution areas of the business, where are you seeing the platform consolidation motion resonate most strongly? And then fundamentally, why are customers in these different product areas, why are they choosing to consolidate with you guys versus some of your competitors? And there's several competitors out in the market trying to execute the same playbook. So why is that demand funneling to Elastic versus some of the other players?

Ash Kulkarni
CEO, Elastic

So where we are laser-focused is in the areas of Log Analytics and SIEM. So SIEM in Security, and Log Analytics in Observability, that's where we want to land. That's where we want to capture the larger share. And when you look at those two areas, you see certain patterns. First and foremost, the data tends to be very complex. The data tends to be unstructured. Application logs especially tend to be very difficult to get your arms around, because every developer has their own notion of what they want to put into a log, so there's no proper standard schema for it. The volume also tends to be very significant.

So it's very common for us to see many hundreds of terabytes of fresh data coming into the system on a daily basis, with the scale getting into the petabytes in terms of the overall store. So handling that efficiently becomes really, really important. And then doing correlations across all of that data at scale becomes very challenging. At that scale, at that speed, with that kind of complexity in the data structures, it's really a small group of companies that play at that scale. And we tend to find ourselves in a very good place competitively. So our goal is to land and establish ourselves in those two areas, in Log Analytics and SIEM. And that's where we've seen the greatest success.

Once we're in, once we are your primary log analytics vendor or your primary SIEM vendor, then we might expand from there and say, hey, by the way, that agent also has endpoint security. We're not going to compete in an RFP against an endpoint security specialist. But if you already have it, you might start to use us on the edge, and we'll expand from there. But the Log Analytics and SIEM areas are so large. And if you think about the market and who our primary competitors there would be, we find ourselves in a place where we've got a better product. It's more scalable. We've been innovating more in areas like AI. So our AI assistants for observability and security are winning a lot of kudos from customers.

We're seeing a lot of success in our field. The field leads with those and demonstrates how you can make it easier for a SOC analyst or an SRE to actually do their job using these AI assistants. That's a big differentiator, so it's helping us win quite nicely. In Search, we are almost always the de facto choice, or at least one of the primary choices that people will look at. In Log Analytics and SIEM, three years ago, we weren't in that position. Today, we are in a really, really strong position to keep taking share.

Moderator

Yeah, and that makes tons of sense. And the other element about logs and SIEM is that it's been a big cost pressure on a lot of customers too, and they're looking for efficiency, particularly in this environment. So that makes a ton of sense in terms of where you're seeing the traction. Let's talk a little bit about generative AI and Elastic. In terms of how we think about generative AI on the software team, it seems to be about two things: a better way to search, an evolution or revolution in search, and the ability to reason over data to generate new content. And if you look at Elastic and what you guys fundamentally are, a search and analytics company, that should imply some exciting things down the road.

I want to talk a little bit about ESRE, which is the Elasticsearch Relevance Engine. What does that enable for customers? And what impact do you think ESRE will have on your core search business going forward?

Ash Kulkarni
CEO, Elastic

So when you think about ESRE, which we launched in the middle of last calendar year, it includes a few things. One, it includes our vector database implementation. Second, it includes a whole bunch of capabilities for you to bring in external models, embedding models, and run them on Elastic. It includes the integrations that we've done with all the major large language models, whether it's OpenAI or Bedrock or Gemini, even the open source ones like Mistral and Llama. And then the developer capabilities to quickly take data that you might have already indexed into Elastic and vectorize it using our own sparse encoder model called ELSER. So it's a whole bunch of capabilities that are part of ESRE. What it lets you do, effectively, is build RAG, or Retrieval-Augmented Generation, style GenAI applications.

When you are trying to build a GenAI application in the context of your business, you need two things: like you said, a large language model, and a way to ground that model to make it relevant to your context. Every question that gets asked needs to be provided some context. The language model needs to be told, only answer this question using information that's in these documents. And for that notion of providing the most relevant information, there are various techniques. One is vector search. Another is traditional lexical search. The more prevalent one now is hybrid search, where you use a combination of more traditional lexical search and vector search, re-rank the results, get the best possible outcome in terms of relevance, and then present that.
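The hybrid-search re-ranking described here can be sketched in a few lines. This is an illustrative example, not Elastic's implementation: it uses reciprocal rank fusion, one common way to merge a lexical ranking and a vector ranking (Elasticsearch offers RRF as one option), and the document IDs below are made up.

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal Rank Fusion: merge several ranked lists of doc IDs.

    Each document's score is the sum of 1 / (k + rank) over every
    list it appears in, so items ranked highly by multiple retrievers
    rise to the top. k=60 is the constant from the original RRF paper.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Illustrative rankings from two retrievers over the same index.
lexical = ["doc3", "doc1", "doc7"]   # BM25 / keyword ranking
vector = ["doc1", "doc9", "doc3"]    # k-NN / embedding ranking

fused = rrf_fuse([lexical, vector])
# doc1 and doc3 appear high in both lists, so they lead the fused ranking.
```

In practice each input ranking would come from a real query against the index; the fusion step itself stays this simple.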

So we are in a really, really good place in this whole ecosystem. And we are winning because of all the factors that I talked about: a great vector search implementation; the enterprise-class features that typically go into building a real-world enterprise application, like document-level permissions, security, the right kinds of privileged access; capabilities like faceted search, geolocation, personalization, et cetera; and the incumbency that we have. So we feel very good that in the long run, this is going to materially increase the TAM for search. When it comes to observability and security, our AI assistants are helping us compete better, like I mentioned earlier. But on search, the core area that we are best known for, we see this as materially increasing the TAM in the long run. And we see that in the conversations that we are having with customers.

This is probably the number one area that our sales teams are having conversations in.

Moderator

Just to follow up on that point and your incumbency advantage: if I imagine myself as an Elastic salesperson, the obvious motion would seem to be to go to the customer base. They're running all of these existing applications using Elasticsearch and traditional keyword search. Is the value proposition, hey, we can support both modalities, why would you have one set of infrastructure for traditional search with Elastic and then buy a separate vector search, semantic search capability? Is that the sales playbook? Is it as simple as that, or is there more convincing that needs to happen?

Ash Kulkarni
CEO, Elastic

Well, there isn't too much convincing needed, because people are actively coming to us. We've been going through our ElasticON user conferences. We don't do one user conference; we do them in different cities. We've done eight out of 12 this year, and there are four more happening this quarter. If you look at the people who attend these conferences, they are existing customers, some prospects, but a lot of them are actively coming to understand: I'm already familiar with Elasticsearch, how do I use Elasticsearch to build a GenAI application? So there's a lot of external interest, and naturally, our teams are leaning into it. That's the fastest path to securing that part of the business. In terms of the way we are positioned, think about it this way. Imagine an internal workplace search kind of application.

Let's say you had a customer self-support portal, where I go to that portal and look for information on, "How do I configure the router?" The traditional model was you would use Elastic to search across all your support information, and it would give you 10 links, ranked by relevance. The first link would probably be the most relevant one to go after. But you would need to then click on that link and read that document. It would point you to where the textual matches were, and you would then need to read it and go, "OK, now I understand what it's trying to say. Let me go and see what the second link points to. Is there something different in there?" There was a lot of human effort still involved in really understanding, what is the prescriptive recommendation here?

I want to configure this router, I'm running into a problem, what exactly should I be doing? What GenAI now lets you do is not only provide that set of, here are the right 10 links, but then, using that large language model, you can have a conversational-style approach where the answer to the question is: here are the eight steps you need to follow to configure the router the way you want. That experience is so much richer. It is such a better user experience. You're actually able to save the user, and even the support engineer, a significant amount of time and energy, so there is much greater willingness to sign off on those kinds of projects. Because if I'm the CIO, I go, I can implement that support desk with fewer people. And that's a big positive.

I can get a much better user experience. My customers are going to be happier; their satisfaction scores go up. We are seeing that natural desire to take on these kinds of projects. That's what's exciting. Our field is just leaning into this.
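The support-portal flow described above boils down to a grounding step: retrieve the most relevant documents, then instruct the model to answer only from them. A minimal sketch, with a hypothetical helper function and made-up document text:

```python
def build_grounded_prompt(question, retrieved_docs, max_docs=3):
    """Assemble a RAG prompt: the LLM is told to answer only from the
    retrieved context, which is how the answer stays grounded in the
    enterprise's own documents rather than the model's training data."""
    context = "\n\n".join(
        f"[{i + 1}] {doc}" for i, doc in enumerate(retrieved_docs[:max_docs])
    )
    return (
        "Answer the question using ONLY the documents below. "
        "If the answer is not in them, say you don't know.\n\n"
        f"Documents:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

# Illustrative support-portal snippets, as a search query might return them.
docs = [
    "Router setup: hold the reset button for 10 seconds, then open 192.168.1.1.",
    "Firmware updates are applied from the Administration tab.",
]
prompt = build_grounded_prompt("How do I configure the router?", docs)
```

In a real deployment, `retrieved_docs` would come from a hybrid search query, and the assembled prompt would be sent to whichever LLM the application uses.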

Janesh Moorjani
CFO and COO, Elastic

Yeah, that's a really powerful example of how it's not vector search versus lexical search or traditional search. It's really about how you can leverage the platform for the full set of capabilities, because enterprises realize that they don't need only vector search. They need the additional capabilities to work well with vector search in order to get the business outcomes they're looking for.

Moderator

Yeah, it makes complete sense. You mentioned Retrieval-Augmented Generation, RAG, as a really popular access pattern. And it's what's driving a lot of vector search use cases. I wanted to get your point of view on the durability of RAG in terms of the options available to the enterprise to ground their LLMs in their proprietary data. We have RAG. There's fine-tuning. Context windows for LLMs are expanding pretty enormously with each iteration. So just sort of comment on that or give me your take on, is RAG something of the moment? Or is that something you see as durable over the next several years?

Ash Kulkarni
CEO, Elastic

I think RAG has now emerged as not only durable, but something you need irrespective of what else you might be using. And I'll explain a little of what I mean by that. There are three common patterns, and you touched upon them. The first is just model training: I'm going to either build my own model or entirely train a model on my data. It's going to be very custom, very bespoke. That obviously requires a lot of compute, and it requires a lot of knowledge and depth. Not many people are going to do that; maybe some government agencies, et cetera. But that's the highest bar in terms of effort. The second is model refinement: I might take an existing model and fine-tune it on my data. And that's a lower-cost way.

It's still expensive, but a lower-cost way. The third is RAG: I'm going to take an existing model, and I'm going to use Retrieval-Augmented Generation to give it context in real time. Context window sizes are growing, but relying on huge context windows is completely antithetical to what a customer really wants, because these large language model services charge based on the number of tokens you send them. So the more data you send in a context window, the more you're paying to whatever service you might be using. And it's not like you're getting a better result. If you tell the model, hey, respond based on these 10 documents versus respond based on these 10,000 documents, chances are that in the 10,000 documents you've got a greater chance of hallucinations being created and injections going haywire. So you ideally want to keep it as tight as possible.
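The token-pricing point is easy to see with back-of-envelope arithmetic. The per-token price and document sizes below are assumed for illustration, not any vendor's actual rates:

```python
def prompt_cost(num_docs, tokens_per_doc, price_per_1k_tokens):
    """Input cost of one request when the whole context is sent to a
    token-priced LLM API (all numbers here are hypothetical)."""
    return num_docs * tokens_per_doc * price_per_1k_tokens / 1000

# Assumed: 500 tokens per document, $0.01 per 1K input tokens.
tight = prompt_cost(10, 500, 0.01)       # RAG: send only the 10 most relevant docs
blanket = prompt_cost(10_000, 500, 0.01)  # dump everything into a huge context window
# tight is $0.05 per query, blanket is $50.00 per query: a 1000x difference.
```

The exact figures change with the model, but the linear relationship between context size and cost is the point.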

And the other thing that happens is even if you do fine-tuning of a model, your data as an enterprise is constantly changing. And the point of this whole exercise is I want to respond to a query based on what I know now, what is known as the truth at this point in time. And if you do fine-tuning, effectively, you're doing it all based on snapshots. And that is what customers don't find acceptable. So even if you do fine-tuning, the reality is that you still end up using RAG. And that's why RAG is becoming so popular. Everybody realizes that not only is it the cheapest way to operate the overall system and build the application, but it's also the most accurate. And it ends up giving you the best results.

Moderator

Yeah, that was a great explanation. And I promised, Ash, I'm going to give you a break, but I just want to tie a bow on this topic, and then we'll go to Janesh for some of his insights. I wanted to get your perspective on two things. First, the decision to go with Lucene, which has a lot of fans, though I think some of the pure-play vector databases critique it. So walk through the architectural decision on why Elastic chose to build the vector search capability on top of Lucene. And then, in terms of use cases, I think a lot of people think vector search means GenAI use cases. But isn't it much broader than that? I think of e-commerce product catalog pages that could be revolutionized with semantic search and vector search.

Is it much more about search overall than just maybe GenAI? Get your perspective.

Ash Kulkarni
CEO, Elastic

Yeah, maybe I'll touch upon the second one first, and then go into the details on why we picked Lucene. So yes, absolutely. One of the most interesting things that we are seeing is existing search customers wanting to re-rank the results of lexical search or traditional BM25-style lexical search with semantic search. So this notion of hybrid search that I talked about, where you use multiple different techniques to say, based on these different techniques, what's emerging as the best possible result? And whereas lexical search is just about finding textual matches, semantic search is looking for meaning. I'm looking for the meaning of a particular question and then trying to find stuff that's most relevant. When you combine those two, you tend to get really good results. You get the best possible results.

So what a lot of customers are interested in doing now is saying, can I take all the stuff that I've already built with you for search, and can I do a re-ranking of the results using semantic search techniques? You can do that today, but you have to do a little bit of work, and we are helping customers implement those kinds of things. We want to make it dead simple, so we are working on a re-ranking API that will be just a single API call. That will allow people to use our ELSER semantic search model and apply it to all their existing search use cases. That'll give them better results.
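The re-ranking idea can be sketched as follows. This is a toy illustration, not the API described here: it re-orders an existing hit list by cosine similarity of dense embeddings, whereas ELSER itself is a sparse model, and the vectors and IDs below are made up.

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def semantic_rerank(query_vec, hits):
    """Re-order an existing (e.g. BM25) hit list by semantic similarity
    between the query embedding and each document embedding."""
    return sorted(hits, key=lambda h: cosine(query_vec, h["vec"]), reverse=True)

# Toy 3-dimensional embeddings; a real model would produce these.
hits = [                                 # order as returned by lexical search
    {"id": "faq-2", "vec": [0.1, 0.9, 0.0]},
    {"id": "faq-7", "vec": [0.9, 0.1, 0.1]},
]
reranked = semantic_rerank([1.0, 0.0, 0.1], hits)
# faq-7 points the same way as the query vector, so it moves to the top.
```

The appeal of a single-call re-rank API is that this scoring step, plus the embedding of query and documents, happens server-side instead of in application code.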

Every time somebody does that, you're using more computation, because you're running it through the ELSER model and you have to run ML jobs. That's good for us, and it's good for customers. On the question of why Lucene: Lucene is unbelievably scalable. One of the greatest things about Lucene is that it scales horizontally and does not require you to build massive indexes in memory. That was something we felt was going to be incredibly important as this went from being a lab exercise to something people were going to put in production. Because we've seen this: we've seen people start with gigabytes, then go to terabytes, and pretty soon you're searching over petabytes of data. At that scale, you've got to have a mechanism to scale horizontally.

The benefits of Lucene, and everything we've built over the years on top of it, we find to be incredibly strong and differentiating. With some of the capabilities that we've delivered, like scalar quantization, we can now support 4x the number of vectors in memory; the memory requirements have gone down by 4x. With things like the query parallelization that we've built, not only into Lucene but into Elasticsearch, which sits on top of it, we've been able to improve latency by 2x across just about everything we do, both vector similarity searches and traditional searches. And on dimension support, we now support over 4K dimensions in Lucene. Most applications don't need more than 1K-2K dimensions; above that, you are not necessarily getting better results.

Your costs are going through the roof. So as people put things into production, they're realizing that the design choices we made really make our vector search implementation the best out there. And we publish benchmarks regularly; we participate in open benchmarks, and you can see the performance that we are able to deliver. I think the piece that most people miss when they begin with all of this is that you've got to think about scale, and scale at a reasonable price means you need to be able to scale horizontally. Lucene gives us that. So we are very, very happy about the choices we made several years ago, and it's paying off.
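The 4x memory figure from scalar quantization follows from shrinking each vector component from a 4-byte float32 to a 1-byte int8. The min/max scheme below is a simplified stand-in for Lucene's actual implementation:

```python
def quantize_int8(vec):
    """Scalar-quantize a float vector to int8: map [min, max] onto
    [-128, 127]. Each component shrinks from 4 bytes (float32) to
    1 byte, which is where the roughly 4x memory reduction comes from."""
    lo, hi = min(vec), max(vec)
    scale = (hi - lo) / 255 if hi > lo else 1.0
    q = [round((x - lo) / scale) - 128 for x in vec]
    return q, lo, scale

def dequantize(q, lo, scale):
    """Approximate reconstruction of the original floats."""
    return [(v + 128) * scale + lo for v in q]

vec = [0.12, -0.50, 0.33, 0.99]
q, lo, scale = quantize_int8(vec)
approx = dequantize(q, lo, scale)
# Each reconstructed value is within one quantization step of the original.
```

The trade-off is a small, bounded reconstruction error per component in exchange for fitting roughly four times as many vectors in the same memory.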

Moderator

That's a really great overview on the thinking about building on top of Lucene. Janesh, I wanted to ask you a little bit because I think it's an interesting time in the model. The cloud business, I think, is now 44% of total revenues. Maybe when you join us this time next year, we're going to be hopefully right around 50% plus or minus. What are the implications, both in terms of the financial model, but also in terms of the business, when Elastic's getting the majority of its revenue from cloud, both on the financial side and maybe on the telemetry side? How much better of a business could Elastic be when it's majority cloud?

Janesh Moorjani
CFO and COO, Elastic

Yeah, it's a great question. For us, if I think about the benefits of cloud, we've been on this journey for some time. When we went public back in late 2018, cloud, if I remember correctly, was around the mid-teens as a percentage of revenue. It's increased gradually. Eventually, and I don't know whether that's next year or a couple of years out, I think cloud will be the majority of the business for the company. Fundamentally, we think cloud is much better for our customers. It's a better experience, it's stickier, and they can get better outcomes. It's better for us. We lead with cloud, and we encourage our salespeople to talk to customers about cloud.

If you think about where new applications tend to start, they tend to start in the cloud. That said, there's a lot of existing workloads that still sit on-premise, whether for historical reasons, whether for customer inertia or regulatory reasons, or what have you. So we've always maintained that we're not going to try and push our customers to do anything unnatural. If the shift happens, it happens gradually. It happens at the pace at which customers want to make that shift. We've seen that shift a little bit slower this year than we've seen in past years. So we do think it eventually happens. I don't know what time frame. But fundamentally, as I think about what that means for us as a business, that means we are getting closer to our customers.

Cloud, naturally, for the same workloads, has a higher volume of dollars associated with it, because we are recovering not just the value of the software; it's also a fully managed service, and there are the underlying infrastructure costs and so forth. So there's much greater value addition to the business and to our customers from cloud. We think it's the right outcome for us and for our customers, and that's how we continue to drive the business forward.

Ash Kulkarni
CEO, Elastic

Yeah, and to your point about telemetry, we also learn a lot. I've mentioned that in each of the last three quarters, we've incrementally added several hundred more customers that are using ESRE for GenAI applications. That's only cloud data; it's cloud telemetry, because when it's self-managed, we don't get that same fidelity of telemetry. It just allows us to make better product decisions. There's a lot of goodness to it, and that's a big part of why we bias toward cloud.

Moderator

That also creates a pretty good feedback loop back into the product and engineering organizations.

Janesh Moorjani
CFO and COO, Elastic

Exactly. You can see what features customers are using. And that's really helpful.

Moderator

Yeah, that's an important point. This fiscal year that's about to be completed, you guys are talking about 11% operating margins. What you guys have proved through this downturn is that the business can get profitable and can get more efficient. You have signaled going into next fiscal year that the pace of margin expansion may be more muted. Can you talk us through, Janesh, where you're planning to invest, the rationale for investment, and how are you measuring ROI on potential future investments going into next fiscal year?

Janesh Moorjani
CFO and COO, Elastic

Yeah, I mean, for the longest time, we've thought about the balance between investing for growth and profitability as one that should be a conscious and deliberate decision in the business. We took steps when we needed to in order to expand operating margins, and fundamentally, there's natural operating leverage inherent in the business model. We've proven that out. As we look ahead to fiscal 2025, it's still early; we are still working through budgets and so forth. But from everything we've seen so far over the past three quarters of this year around GenAI, it's very clear that there's a massive market opportunity that we should be focused on capturing. It's also very clear that it's a long-term opportunity for us. If you think about it, we're in the very early stages.

People tend to overestimate the impact of technology transitions in the short term and underestimate it over the long term. So we're playing the long game here. From our perspective, it becomes really important to invest in R&D, which is a long-term investment, and on the go-to-market side, in terms of brand awareness and building out greater capabilities in the sales organization, which, as we invest over the course of fiscal 2025, will really bear fruit in future years. So we consciously made that choice this year, to say, as we look ahead to fiscal 2025, we do want to invest in the business. But we're not going to take a step back in operating margin.

We will still expand, just not at the pace at which people had been expecting or that we've been delivering until now. And then we'll calibrate as we go. And for us, it's really important to make these investments now to capture the long term.

Moderator

Let's hit on the impact of AI on the observability business. You commented on that. But also, if you have a question for the management team, please raise your hand, and we'll get a mic to you. Before we get to Q&A from the audience: the impact of GenAI on observability, how is that going to make the Elastic value proposition in observability better?

Ash Kulkarni
CEO, Elastic

Yeah, so one of the hardest things in observability is quickly figuring out what's signal and what's noise, and understanding the correlations between different types of signals: things you're seeing in application logs versus what you're seeing in the network logs, correlated with maybe the traces that you're capturing from OpenTelemetry. And then, through all of that, understanding that maybe it was some ACL change in a router that resulted in a slowdown in your application. Those correlations are the tricky part. That's what makes it hard to quickly get to root cause when you're trying to keep a system healthy and operating at the right level.

Those are some of the hardest things that site reliability engineers, SREs, end up dealing with to keep a system going. That's the kind of activity our AI assistant for observability really helps with. Based on alerts, you can query the system and ask it, can you help me understand what this might be related to? And it'll do the correlation and give you that understanding. It'll say, OK, this is related to this particular application; this application talks to that particular database; here's how you can correlate these signals. That is a very, very powerful capability. The same or similar functionality is also in our security assistant.

In the security assistant, it's all about identifying threat patterns, mapping them to the MITRE ATT&CK matrix, understanding what that typical kill chain or attack chain looks like, and navigating through it: what do I do now to do deeper threat hunting? It can give you a recommendation that, based on this query, we can see that this particular system, which had some malware on it, reached out to all of these other systems, and you might want to go run this query on that system. That kind of activity is what the AI assistant does today. This is in a shipping product. And it's really helping us be more competitive. Our sales teams are now leading with the AI assistant.

So over time, I expect that this is going to continue to be a very strong contributor to our win rates.

Moderator

Awesome. Let's see if there's any questions from the audience. I think there's one here in the front.

Speaker 4

On the product or service that you've talked about: has it already started being rolled out to customers, or when will it be fully rolled out? And incrementally, how much do you plan to charge for it?

Ash Kulkarni
CEO, Elastic

Yeah. So the RAG functionality is already out. We launched the ESRE product in the middle of last year, so it's been out there in production for three-ish quarters now. In terms of how we charge for it, it's only available in our higher tiers. So if you're in one of our lower tiers, you have to buy up to one of the higher tiers, and that's when you get that functionality. That effectively raises your entire rate card, because we have a consumption-based pricing model: you pay us based on actual consumption. There are two mechanisms by which we capture the value of all of this new functionality in ESRE. One is that you have to be on a higher tier. And second, the jobs themselves tend to be more compute intensive, because you're running an ML job. You're running the models.

You're running the data through the models to create vector embeddings. All of that tends to be very compute intensive. That also drives up your consumption. Those are the two mechanisms through which we monetize.

Moderator

Got about a minute and a half. Any other questions for the management team up front?

Speaker 5

Maybe just double-clicking on incentivizing customers to go to the higher tiers. Besides RAG, are you thinking about maybe changing or increasing pricing in the lower tiers to incentivize customers to actually capture that value in the higher tiers? Would love your color on that. Thank you.

Ash Kulkarni
CEO, Elastic

Yeah, look, we have raised prices in the past for both self-managed and cloud. And depending on the value being added in each tier, we'll sometimes change prices; we'll raise prices as appropriate. What you're suggesting, bringing the prices closer together to incentivize customers, isn't something we're doing at this time. Our AI assistants are only available in the highest tier, the Enterprise tier. ESRE is available only in the top two tiers, Platinum and Enterprise. But it's not just the GenAI-related features; there are other things, like Searchable Snapshots, that are only in the highest tier. All of these are reasons why, over the last several years, we've been able to gradually keep moving customers to higher and higher tiers. And we'll keep doing that. That is part of our strategy.

That is something that we do quite actively. You should expect that we'll keep driving that motion.

Moderator

With that, we're out of time. Thank you so much, Janesh, for giving us an update on the Elastic story. Sounds great.

Janesh Moorjani
CFO and COO, Elastic

Thank you. Thanks for taking the time.

Moderator

Thank you so much, Ash.
