Good afternoon, everyone. It's Blair Abernethy here, software analyst with Rosenblatt, back with another infrastructure software company, Elastic. With us is Steve Kearns. Steve is the General Manager of Search at Elastic joining us today. Thanks, Steve, for joining us. I think you might be on mute there.
Yeah.
All right. Happy to be here. Thanks for asking.

I just wanted to, you know, for the audience, if we could just set the context a little bit for those who might not be that familiar with Elastic, just give us a brief overview of the business as it stands today and, you know, some of the problems or challenges that you help customers to address. Then just give us a little bit on your background as well, which would be helpful.
Yeah, yeah, that's great. First, with me, I'm Steve Kearns. I am, as you said, the GM of the search business here at Elastic. And I've been with Elastic for almost 11 years. I’ve watched the company evolve from the very early days, helped to sort of turn us into the company we are today. Most of my background has been in building out the core platforms of Elastic, the core underlying technologies, and then bringing those to market through the different sort of ways that we go to market. I'll kind of touch on that a little bit. For me, almost my entire career has been in the search world or the information retrieval or information extraction parts of the world. This is really a natural home for me in a lot of ways.
At Elastic, you know, as a company, we call ourselves the Search AI company. That describes the ethos that we bring to building our products and also how we compete, from our own perspective, in the different markets that we participate in. When I think about it, the idea behind search is really about how you provide a high-performance way to store, search, and analyze vast amounts of data, whether that's structured data or unstructured data. That data keeps growing, and it's growing faster. We believe that if we can help companies make better use of that data, those companies will be more successful, and we can be more successful as well.
If you think about like making use of this kind of data, right, whether that's log messages, whether that's, you know, e-commerce information, whether that's transaction histories and trade information with these fast-moving real-time data flows, you know, the concepts of search really matter. How do you find the most important transaction? How do you identify fraud within that? How do you bring the right product to somebody who's looking to purchase something on your e-commerce site? Like these different areas, they all share that need to find the right information. Sometimes that's a document or it's a product, or sometimes that's an anomalous transaction in a flow of transactions. That idea of working across massive amounts of data is really important. The second part of being that AI, Search AI company is the AI piece.
If you think about what it means to have an AI-ready or an AI-powered platform that you're going to be building on or using, one of the most important parts is: can you get back the right piece of information quickly and efficiently, with the right kind of filtering and power and security and all of the other things that it takes to build a real production application? How do you do that? You know, when you think about it, this is really what Elastic does. The heart of everything that we do is a search engine, which we call Elasticsearch. That's the core of all of the offerings that we have. It's our single data store, where we put the vast majority of our investment as a differentiated core technology that we build everything on.
We sort of go to market then in three ways. One is we take Elasticsearch and a number of surrounding technologies to market for developers. We say, hey, we're going to give you the best set of tools to build compelling generative AI or traditional applications. There shouldn't be a better platform to build a modern application on than Elasticsearch, especially if you have ambitions to make that a more engaging, more interactive kind of an application, or you want to build sort of Gen AI engagements or conversational AI, agentic AI on top. This is the toolkit for developers to build. We use it ourselves. We use that toolkit to build an observability product. Logs, metrics, traces.
It's a combination of unstructured log messages and structured metric and trace data, and how do you look across those to identify potential issues happening within your operational environment? If your internal applications are down, your business doesn't run. The speed, the time that it takes to understand that, the efficiency that you can bring with the right platform and the right technology, really makes a difference in the observability space. That's especially true when you think about how the environments that people are running their applications in are more complicated with the advent of microservices running on Kubernetes. The number of parts of a typical application is exploding, and the ability to look across those and to see the relationships, to see what's actually going on underneath, is important. That complexity matters in security, too.
The third way that we go to market is as a security product, a modern-day SIEM-plus, above and beyond what you might have traditionally thought of as a SIEM. It is the same challenge there. The attack surface for a typical company has never been more complex than it is now, and it is only increasing. Again, how do you look across those different signals, the security events, the audit logs, the network traffic data that is happening? How do you look across all of these signals to find the real attacks? Each individual piece might not look scary, but when you combine them, wow, you have now traced out a pattern from the MITRE ATT&CK framework, identifying a really complicated, sophisticated intruder, you know, walking around your internal systems and putting your business at risk.
When we look at it in that way, this idea of taking this powerful core set of search technology, if you will, or search-powered technology, both as a core technology and as a set of principles, we bring that to market for developers, for observability and DevOps engineers, and for security professionals, for CISOs and security teams.
Okay, okay. That's very helpful. If we look at the traditional search market, let's just talk about just like before the advent or the explosion of AI that's happened in the last number of years, particularly in the last three years. What would customers, and I guess also it's important for people to understand that Elasticsearch is an open-source product, right? You know, what would be sort of the typical use cases for Elasticsearch before AI came along?
Yeah, yeah, it's a great question. I sort of bucket this in a few different areas, but you could call it search-powered applications or search-based or just applications. If you were to look, the use cases for building on Elasticsearch, they're super wide. From things you might traditionally think of as search, like e-commerce or even like document management systems, legal systems, all of these different kinds of places where I'm very clearly running a search and I'm expecting 10 blue links back where I can go and investigate. Any of those kinds of systems, classic enterprise search, you know, even many of the applications that you'll use sort of on your computer or on your phone today, they are these kinds of search-powered applications. There have been a lot of applications that are search-powered, though, that you might not be thinking of as search necessarily.
The use cases can span all the way to things like some of the early and really fun examples are like transaction tracking. If you are a bank and you want to understand, you know, where is all of the time going when somebody, you know, creates a transaction, how does that spread across my systems? How do I understand the flow of information? How do I visualize who and where and what part of my network is dealing with the most of that? That is a custom application that you can build on top of a data store or search engine like Elastic. Even things, you know, if I were to think about like logistics, Elastic is really good at geospatial data.
If you are a logistics company trying to figure out where are all my trucks, what packages are on the trucks, when will they arrive, how do I plan for that, you need to have the ability to build an application that can take that into account. The range of applications that you can build on top of a search engine is pretty wide. The reasons that people will pick us, first, you know, just the raw capability, like can I just do search relevance, can I get good results back, is an obvious one. There is also an element of search that, like, every field, every piece of information you put into Elastic needs to be searchable. That is the default in our system.
That is very different from a traditional relational database, where you put a bunch of data in and then separately define, one by one, which fields you want to be able to search on or query against or filter on. You're making that decision late; then you change the database schema, work with the DBA, and wait a while for that to apply. With a system like Elasticsearch, all the data you put in is searchable.
If you as the developer say, I now want to filter on this other field I did not think of before, or my user has asked me to go and extend the application to look at it in this new way, for us, that is trivial if you have built the application on top of a system like Elasticsearch. The range of use cases is super wide. The traditional search applications for us are like, we are a NoSQL database, we are a search engine, we are also a vector database, we are also a geospatial engine, and we are also a columnar store for doing your sort of rich analytics very efficiently. The use cases traditionally have been super wide that people have used Elastic for.
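As a rough illustration of that flexibility, here is a minimal sketch (the index name and field names are hypothetical, not from Elastic's docs) of how a developer might extend a query with a filter on a field nobody planned to query, just by adding a clause to the query body rather than migrating a schema:

```python
# Sketch: extending an Elasticsearch-style bool query with a new filter.
# The "description" and "warehouse_region" fields are hypothetical examples;
# the point is that no schema change is needed, because every field you
# index in Elasticsearch is searchable by default.

def build_query(text, extra_filters=None):
    """Build a bool query: full-text match plus optional structured filters."""
    return {
        "query": {
            "bool": {
                "must": [{"match": {"description": text}}],
                "filter": list(extra_filters or []),
            }
        }
    }

# Original query: free-text only.
q1 = build_query("wireless headphones")

# Later requirement: also filter on a field nobody thought of up front.
# In Elasticsearch this is just one more clause in the request body.
q2 = build_query("wireless headphones",
                 extra_filters=[{"term": {"warehouse_region": "us-east"}}])
```

The same request shape works whether the filter field was planned from day one or added long after the data was ingested.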
That is one of the challenges sometimes as we talk about like what are the use cases, what are the scenarios. Anyway, that gets it started. I'm happy to go into a few more specifics if you're interested. Yeah.
That's really helpful. I mean, it does show that prior to the advent of this AI wave that started a couple of years back, you know, there's many, many use cases that are ongoing for you guys. Let's talk about it. Let's talk about AI in terms of sort of what's Elastic's overall strategy when it comes to AI and how are you sort of approaching working at it, I guess, and you can define it however you want, but to me, it's such a horizontal technology. It applies to many things your customers are doing, but it also actually applies to your own business in a lot of different ways.
Oh, absolutely. Yeah. This is, it's one of the fun parts, I think, of being at Elastic is that we're both building these core building blocks for developers, and then we're using those building blocks to build our observability and our security solution. We actually get to see it from kind of multiple perspectives in terms of what does it take to actually build the right infrastructure, the right components, and then the right applications using sort of AI. For us, when I think about the, so maybe I'll start on the developer platform side, like what are the tools, how do we think about the building blocks that get built and which of those do we build and which of those do we partner for? Then I'll talk a little bit about how we use them in our observability and security solutions.
For the developer platform side of things, first, you know, we think about vector search and semantic search. In a lot of cases, this idea is really about getting better results back. If you think about like when I used to run a search and I would get 10 blue links back, as a user, I know what my job is. My job is to look at the links, click the ones I think might have the answer, then go read it and decide if I got the right answer, and if not, keep going. When you start to imagine a system that's more conversational, now I get just one answer back. The answer had better be right or I lose trust in that application very quickly.
The importance of getting the right information back and getting that into the context of the model and letting the model use that to answer properly is significantly higher. When we think about our role in this sort of AI ecosystem, job number one is making sure that you can get the right answers to give you the tools to get great relevance out of any data set that you might be working with. To do that, we think about this in a couple of layers of things that we can sort of build.
On the one hand, first, we need to be the best vector database because one of the ways, one of the key techniques for getting better search relevance, better results back is using semantic search, using not just the words themselves that are in the documents in the query, but the meaning behind them. When you hear us and others talk about sort of vector search or semantic search, we're really trying to say, how do I match not just the terms, but the ideas, the concepts and the meaning? To do that, it takes a bunch of layers of technology, and we try to provide as much of that out of the box as we can and give you the flexibility to bring the rest or customize that further on your own.
To do vector search, you need to take the text of the query and of the documents and sort of generate the embeddings or to create the vectors from the meaning of those words, and you need a language model for that. We provide a first-party set of models ourselves. We provide an embedding model that we call ELSER. It's a competitive, lightweight, efficient model that, again, is on by default in our cloud. It's like you can walk up to the system and in seconds, you can be doing semantic search on top of data that you've brought to Elasticsearch. It should be that easy. When you're using us, especially in cloud, it is that easy. That's part of the reason that we provide these first-party retrieval models. We have the same thing on what's called the re-ranking model.
It is just another part of that, how do I get the best results? I start by bringing back the best candidates, then I have an option to re-rank them to say, I got 15 candidates, but which one is the best? Which two are the best? How should I order those coming back? We provide these first-party embedding models and re-ranking models directly from Elastic, available by default, sort of in our cloud, and it is a really nice experience for users.
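The two stages described here, retrieval then re-ranking, can be sketched in a few lines of plain Python. The vectors and re-rank scores below are made up for illustration; in a real system an embedding model (such as ELSER or a dense model) produces the vectors and a re-ranking model produces the second-stage scores:

```python
import math

# Stage 1: vector retrieval by cosine similarity brings back the best
# candidates. Stage 2: a more expensive re-ranker re-orders that small set.
# Documents, vectors, and rerank scores here are illustrative only.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, docs, k=3):
    """Stage 1: return the top-k candidates by vector similarity."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return ranked[:k]

def rerank(candidates, score):
    """Stage 2: re-order the small candidate set with a finer-grained scorer."""
    return sorted(candidates, key=score, reverse=True)

docs = [
    {"id": "a", "vec": [1.0, 0.0]},
    {"id": "b", "vec": [0.9, 0.1]},
    {"id": "c", "vec": [0.0, 1.0]},
]
candidates = retrieve([1.0, 0.05], docs, k=2)   # "a" and "b" survive stage 1
best = rerank(candidates, lambda d: {"a": 0.2, "b": 0.9, "c": 0.0}[d["id"]])
```

The split matters because the re-ranker is typically too expensive to run over the whole corpus; it only ever sees the handful of candidates the first stage returns.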
Are those open source or are those on your cloud subscription?
Yeah, they are available to run anywhere our products are available, but they are not free to use. These are part of our paid capabilities. Just as you can consume the ELSER model and the re-rank model very easily in our cloud, available and on by default, if you are a self-managed customer running and operating the software yourself, no problem. With a single API call you can get those installed locally and be up and running very quickly.
Okay, excellent.
The second part of this that's important, I think, for us and where we play is we want to provide that simplified experience. One way we do that is with these first-party, very competitive models that are readily available and very easy to use. We also want to make it very easy to test and to work with other systems. We have this ambition of being the most open platform to build on. If you choose, you say, hey, your model's nice, but I'm already familiar with this other embedding model or re-ranking model, great. We have integrations with all of the major providers, both on the LLM side of things, so Anthropic, OpenAI, all of the rest, as well as all the places that you would want to run models.
If you are on AWS, great, we've got a nice integration with Bedrock, whether you want to run embedding models or large language models and connect them to the system, very easy to do that. We provide other layers that make building this stuff easier. Something that we call the AI Playground. This is a simplified experience to say, hey, I've got data in Elastic, what would it be like if I chatted with it? What if I tuned and tweaked the relevance models or my query differently? Do I get better results? Do I get worse results? We provide this as a right out of the box capability.
In a minute, you can now be chatting with the data inside of Elasticsearch, and you can say, hey, I'd like to see how, you know, Claude Sonnet 4 does versus, say, OpenAI's current o3 or o3-pro. You can very easily plug in these different backend models and say, hey, how does that affect my chat experience? Now let me change my query, let me change my embedding model. Giving you that quick feedback cycle as a developer is so important to decide, is this going to work? If it's going to work, I'll invest the next amount of time to go to the next step. These tools, this simplified getting-started experience for developers, is huge.
Really simplifies that onboarding, that testing process, that iteration process that it takes to get good answers and good answer quality. I think that's sort of the answer for us as a developer. We want to be providing as much out of the box ease of use as we can while still letting you open the box and really configure it right to the nth degree because getting the right answer is what determines a successful GenAI powered application versus one that's not successful. There's a lot that goes into that, but if you're not getting the right information to the model, you're never going to get the right answers out and your users will lose trust very quickly.
Are your customers, Steve, are they, would you say they're still largely in the experimental stage in building their GenAI applications or, you know, are they moving them to production? Are you seeing more movement to production? I want to, let's leave Agentic AI out of it for the discussion for a minute because we'll come back to that.
It's a great question. I think every company is sort of on their own maturity journey is maybe the right way to say that. We have certain customers, and I know Ash has talked about this in some of our earnings calls, like a leading automotive company has a number of these generative AI powered applications internally in production already. They're building out more of those as they go forward. It's great there because they've identified a pattern, like a system for what's the technology layers that we're going to use, how do we bring it to production, how do we evaluate its success? Other companies are much earlier in that cycle. We sort of see people along that spectrum. In fact, we've got a number of folks, and again, another one that Ash had mentioned was a sporting goods retailer in North America.
They are already using us to power the e-commerce portion of their search with traditional lexical search and a lot of advanced configuration around that. They are saying, we want to bring semantic search or vector search to improve those results a little bit further because we can see the benefits of that in our early testing. How do we scale it? How do we bring it out? They are starting that journey with us, which is very exciting. This is a common pattern that we will see is that people will find one use case that is working for them and they will invest in that because it is the biggest, or they will find one specific use case that is manageable and high value where they can prove out a technology architecture, bring that use case to production, and then that becomes a standard that they are going to build on and expand from.
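One common way to blend an existing lexical ranking with a new semantic ranking, used by Elastic and others, is reciprocal rank fusion (RRF): each document's fused score is the sum of 1/(k + rank) over the result lists it appears in. The sketch below uses hypothetical document IDs and the conventional constant k=60:

```python
# Reciprocal rank fusion: combine a keyword ranking and a vector ranking
# without needing their raw scores to be comparable. Doc IDs are hypothetical;
# k=60 is the constant commonly used in the RRF literature.

def rrf(result_lists, k=60):
    scores = {}
    for results in result_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

lexical = ["sku-1", "sku-2", "sku-3"]   # ranked by keyword relevance
semantic = ["sku-2", "sku-4", "sku-1"]  # ranked by vector similarity
fused = rrf([lexical, semantic])        # documents ranked well by both rise
```

Because RRF only looks at ranks, a team can layer semantic results onto a heavily tuned lexical search without re-calibrating either scoring function, which fits the incremental adoption path described above.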
These are still, there's still a lot of learning that happens at each one of the sort of customers that we see, but we're seeing people progress through that journey and we're seeing them find that success. We are seeing these applications reach production, but it's still a long journey for a lot of folks who are new to building this for just the first time. There's a lot of layers. You have to think about evaluating accuracy differently. It's not about did the right answer show up in the top 10 that you can skim, but it's like, is the quality of the answer that the LLM gave correct enough of the time? It sounds like a subtle difference, but actually the mental model for how you test that, how you evaluate it is a little bit different.
Yeah, yeah, interesting, interesting. You know, if we shift gears for a second here, just talk a little bit about more your solution side of things. Security is an important area. I just saw some of your guys that were, you had a big booth at RSA last month. You know, I think it would be helpful for you to sort of explain Elastic's value proposition in security. Like what are you guys, you're not, you know, doing identity management and you're not a firewall company or anything like that, but you have a lot of customers using your security solutions now. That would be really helpful to sort of frame that up for us.
Yeah, and maybe I'll split it into a couple of pieces. I'll tie back some of the AI portions maybe in a moment. When you think about, you know, the security space as a whole, it is very difficult to protect the entire footprint of an organization. There's all kinds of different data that are involved in that. There's security data that looks a lot like logs, right? Events from all the different sort of security-related systems in your environment. There's the actual logs from all of the authentication that happens across every application in the environment.
You know, a big part of how we got started in security was people saying, hey, I have a ton of data and I'm not able to process that in my legacy or traditional SIEM, and I need a way to make sure that I know what's in this data, that I can use this to help protect my company, protect my data. That idea of sort of us as a threat hunting platform is where we got started. Very quickly we realized there's so much more that we can do. Today we provide an entire SIEM, end to end with out-of-the-box security rules that are detecting not like a signature of what an individual attack looks like, but what are the things that an attack would do? They would move from system to system.
They would, you know, you would see failed logins, you would see these other actions happening across the system. We provide a SIEM detection engine that's pre-built with both ML, traditional machine learning, like what are unusual patterns that we're seeing from a given user, from a given host, from a given service in what it's accessing. Things like, how would I look across those? Saying, I've got one anomaly here, one anomaly there. Are they related? If they are, that's now a pretty serious alert that we want to go and surface. Having this pre-built security content makes a big difference. You can see, you know, over the years we've expanded that capability significantly. In fact, in security, we go all the way to endpoint security.
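The correlation idea here, individually weak anomalies becoming a serious alert when they cluster on one host, can be sketched as a toy rule. The event types, window, and two-signal threshold below are hypothetical, not Elastic's actual detection logic:

```python
from collections import defaultdict

# Toy correlation rule: one anomaly alone is low severity, but two or more
# distinct anomaly types on the same host inside a short window escalate.
# Event kinds, the 600-second window, and the threshold are illustrative.

def correlate(events, window_secs=600, min_distinct=2):
    by_host = defaultdict(list)
    for e in sorted(events, key=lambda e: e["ts"]):
        by_host[e["host"]].append(e)
    alerts = []
    for host, evts in by_host.items():
        for i, first in enumerate(evts):
            window = [e for e in evts[i:] if e["ts"] - first["ts"] <= window_secs]
            kinds = {e["kind"] for e in window}
            if len(kinds) >= min_distinct:
                alerts.append({"host": host, "kinds": sorted(kinds)})
                break  # one escalated alert per host is enough here
    return alerts

events = [
    {"host": "web-1", "ts": 100, "kind": "failed_login_burst"},
    {"host": "web-1", "ts": 400, "kind": "unusual_process"},
    {"host": "db-1",  "ts": 150, "kind": "failed_login_burst"},
]
alerts = correlate(events)  # only web-1 has two distinct signals in the window
```

Neither signal on web-1 would be alarming on its own; it is the combination, close together in time on one host, that gets surfaced.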
The actual endpoint collection of security-related events off a laptop or a desktop or a server and also protecting those endpoints. We have really a fully featured security suite, if you will, in terms of the capabilities that we can provide. That place where we start to see our advantages sort of multiply is as that ecosystem, like as the amount of data grows, the number of security rules you have to set up grows. As the security rules grow, the number of alerts that get generated grows. Now suddenly the security teams are overwhelmed with the number of alerts that they have.
It's absolutely impossible to really know what's going on, right?
You are limited then by the amount of wall-clock time a human can spend scanning these things. This is where AI comes in. We have a number of AI-powered features, but the one I'll mention, because it's so obvious in some senses how powerful it is: imagine if you could ask the AI to take a look at your alert history, all of these alerts that a human doesn't have time to go through. Have the AI look at the runbooks that you have for your organization, look at your network map that describes what systems do what, where the important ones and the sensitive ones are, and then use that to actually scan through those thousands of alerts that happen every single day and surface the alert, or the set of alerts, or the string of alerts that together look like an attack. We call this feature Attack Discovery.
It allows us, using AI and building on top of the power of Elasticsearch as an AI-enabled data store, to surface these attack vectors that you would not really be able to see looking at any one alert. It automates the work of a lot of people. Again, this is not about taking the work away. It is about focusing the time of the analysts that you have on the most significant things. You still have to look through the rest. You still have to be really thoughtful about how you manage those alerts. But if we can surface an active, ongoing incident before you would otherwise have found it, we have saved that organization significant monetary or reputational risk, because they are able to get ahead of it more quickly.
If a customer is coming to you new and they're looking for a modern SIEM, are the connectors all built for the major vendors, the CrowdStrikes and Palo Altos of the world? Does it take them a year and a half to get this up and running? How long does it take to get in there and configure it? I guess in many cases, the customer might already have, let's call it a legacy SIEM, you know, something they've been using in the past. What's the barrier to get them to adopt you?
It's a great question. For like a net new environment, we have quite a lot of integrations already on the shelf with all the kind of major systems that you would expect. If you've got an existing endpoint provider, it's very likely we have integrations for their alerts and their telemetry to be able to come right in. If you've got, you know, common firewalls, common other ecosystems, common applications, very straightforward to do that. It's getting easier and easier to bring in net new data sources. If you have a custom application with your own flavor of audit logs, for example, we have an AI-powered ingestion approach now called the sort of AI assistant for ingesting data. This makes it much easier to bring in a custom data source. Every company has some custom data sources that are specific just to them.
We are trying to simplify that process of bringing the data in. If you do, though, if you do have an existing environment, like an existing install base or an existing SIEM product, it is likely that you have a whole set of security rules, a whole set of processes and workflows around that. We have been working over the last few years to reduce the number of capability gaps. We have this new query language called ES|QL, incredibly powerful, supports joining across multiple data sources, really a powerful set of querying capabilities that close a lot of the functional areas that were hard in Elastic historically.
That's your flavor, Elastic's flavor of SQL.
Exactly. Yeah. The Elasticsearch Query Language, ES|QL. It's a very powerful pipeline query language; it makes really complex queries something you can build piece by piece in a very natural way that matches the way a lot of analysts think. We have migration tooling as well. We actually have a way to take alerts and rules written in, say, Splunk's SPL and translate them into ES|QL and into our alert and rule system. These are individual capabilities you can use one-off. You can walk right up to our AI assistant inside the application and say, hey, here's how I used to run this query, what does this look like in ES|QL? We're continuing to build on that kind of capability going forward.
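That piece-by-piece, pipeline style can be illustrated with a small example. The command syntax below follows Elastic's documented ES|QL (FROM, WHERE, STATS ... BY, SORT, LIMIT), but the index name and field names are hypothetical; the query is wrapped in a Python string so the stages can be pulled apart:

```python
# Each "|" stage in ES|QL refines the previous one, which is what makes it
# natural to build a query incrementally. Index ("auth-logs") and fields
# ("event.outcome", "user.name", "host.name") are illustrative names.

ESQL_QUERY = """
FROM auth-logs
| WHERE event.outcome == "failure"
| STATS failures = COUNT(*) BY user.name, host.name
| SORT failures DESC
| LIMIT 10
"""

# Splitting on "|" recovers the individual pipeline stages, in order.
stages = [s.strip() for s in ESQL_QUERY.strip().split("|")]
```

An analyst can run just the first stage, eyeball the rows, append the WHERE, check again, then add the aggregation, mirroring how investigations actually proceed.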
That's excellent. Actually, there was just back in May, I think you guys acquired a company called Keep Alerting. Are you familiar with that one?
I am. Yeah.
We're very excited.
Can you just tell us what that is? It's open source AIOps. What is that? You know, why did you feel this was a good target for you guys?
Yeah. First, we love these kinds of technology tuck-ins that are really additive to the capabilities we have, that pull forward some of the things we're in the process of building anyway with a more mature set of technologies. While Keep is focused on the AIOps side of the world, what they've really built is a really nice, almost, think of it as a workflow engine: I'm investigating either a security incident or an operational issue, and what are the steps that I want to take to investigate this further or to remediate the issue? Again, it's the same concept for security and observability; the data and the terminology we might use are different. In security, you call it SOAR.
In observability, you would call it a sort of AIOps. That platform that they built is a powerful, extensible platform that really does plug right into the way we think about taking action on top of alerts or other triggers within the system. We sort of think about this as giving us this workflow automation engine that will power a lot of our SOAR capabilities in the future, the security operations and response actions, and do the same thing for us in the observability space. We have been building pieces of this and we have some of these capabilities in the platform, but that ability to bring in a more mature, more complete end-to-end kind of a capability and extend the platform in that way is wonderful. I will tie it all the way back.
One of the benefits of building on the single core platform means that, you know, in the security side, the SOAR use cases are critical for us. This is an area that we get lots of interest from our customers, lots of excitement around Keep and having those capabilities. I, thinking about the search business, I'm really interested because when you look at an agentic AI workflow, what is that really but a workflow powered with some of those steps powered by an AI making decisions? It's not just a true/false or like a structured conditional, like ask the model, run this tool, track the results. If it gives you the right answer, keep going or bail out. This idea of having a workflow engine as a core primitive across our platform gives us this opportunity to now use that in a couple of different ways.
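The loop Steve describes, ask the model, run the tool it picks, track the result, keep going or bail out, can be sketched as a few lines. The "model" below is a stub returning canned decisions; in a real agent it would be an LLM call, and the tool names are hypothetical:

```python
# Toy agentic loop: a workflow where the branching is decided by a model
# rather than a fixed conditional. The model here is a stub; a real
# implementation would call an LLM and pass it the accumulated history.

def run_agent(model, tools, max_steps=5):
    history = []
    for _ in range(max_steps):
        decision = model(history)            # ask the model what to do next
        if decision["action"] == "finish":
            return {"status": "done", "history": history}
        tool = tools.get(decision["action"])
        if tool is None:                     # bail out on an unknown tool
            return {"status": "failed", "history": history}
        result = tool(decision.get("input"))
        history.append((decision["action"], result))  # feed results forward
    return {"status": "max_steps", "history": history}

# Stubbed model: look one thing up, then decide it has enough to finish.
def fake_model(history):
    if not history:
        return {"action": "lookup", "input": "service latency"}
    return {"action": "finish"}

tools = {"lookup": lambda q: f"results for {q}"}
outcome = run_agent(fake_model, tools)
```

Seen this way, an agent really is a workflow engine with model-driven branches, which is why a general-purpose workflow primitive slots so naturally underneath both SOAR actions and agentic applications.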
As with any acquisition, it takes time to integrate the technology, to integrate the team. We are really excited, very excited about the team to be joining us. They have been a wonderful addition already.
That's interesting because that's, so you're building it, you will build it right into the core of the platform. I guess if we start to think about agentic AI is all about taking action, right? It's evaluating my environment and making a decision and then taking action on that decision. If you have a workflow engine that can be tied into that, that just helps to amplify that effect, right?
That's right. It simplifies it. I mean, for us, when we think about some of the agentic ecosystems as well, here also we want to be the most open platform. We are not going to force you to use only a tool or only a workflow engine that we provide. We have great integrations and a great partnership with the folks at LangChain and LlamaIndex. We are working with a lot of the other agentic AI frameworks as well to say we want to make sure that Elastic, if you start there, that we are the vector database, the sort of context provider of choice in that ecosystem. If you start with Elastic, we want to give you the easiest path to success.
Having a workflow engine in the company, in the platform, just gives us another set of tools, another layer, another widening of the platform for developers to say we can give you a better set of tools, a richer set of tools. You can use ours or bring your own. We are very open in that way. If we can make it simpler, we want to do that.
Let's take the conversation up a level in terms of agentic AI. Where does Elastic see itself fitting in this rapidly emerging ecosystem where enterprises want to build these, you know, the term is agentic, but effectively it's a situationally aware piece of software that has agency to make decisions based on things that happen, without human intervention? How do you look at where Elastic will fit in that world? Do you go at it with partners? I'm just trying to understand how Elastic carves out its space in an agentic world.
Yeah, it's a great question. You know, long term we're big believers in the agentic future. It's going to take a bunch of steps and iterations to get there, both in terms of the models and the maturity of the other parts of the ecosystem. We want to make sure that as this happens, we play a big role in it. It actually goes back to what I mentioned before, this evolution from when I ask a question and get blue links, and my job as the human is to look at them.
When I get one answer back, just a question and answer, a very simple GenAI kind of application, if I don't give the model the right information, it's not going to be able to give the correct answer back to me. The quality of answers, even in the simple cases today, matters in a big way. It matters that much more when you take the human out of the loop, because at least if I'm the human, I can say, this doesn't seem right. If I ask about the holiday policy and it gives me an answer about Greece, well, I'm based in the U.S., so it doesn't matter what the Greece answer is; I want the U.S. answer. I'll know that's wrong.
If you have that in a fully automated workflow, now you start to wonder: is it going to be able to make the right choices? Is it going to have access to the right information? I think what we'll see is a cycle. There's a lot of excitement, but as you get to production and the long tail, the ability to get the answer right all of the time really does matter in these kinds of applications. It's easy to do a proof of concept. It's easy to find a use case that works perfectly as a demo.
When you put it in front of users, or in front of your teams to automate hundreds or thousands or millions of these workflows that happen within companies all the time, the accuracy really matters a lot. Then when you ask what you need to get good accuracy, you need smart models. There are a lot of great people working on that. We're not trying to build foundation models; we're going to rely on others, the OpenAIs and Anthropics of the world, to build these great reasoning models that can participate in the process in a richer way. They still need to have the right context.
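Mechanically, providing that context is retrieve-then-prompt: score your documents against the question, take the best ones, and put them in front of the model. A minimal sketch of that loop, with a toy keyword-overlap scorer standing in for real retrieval (a system like Elastic would use BM25 or vector similarity); the document text and function names are illustrative.

```python
def score(query: str, doc: str) -> float:
    """Toy relevance score: fraction of query terms found in the doc.
    A real system would use BM25 or vector similarity instead."""
    q_terms = set(query.lower().split())
    d_terms = set(doc.lower().split())
    return len(q_terms & d_terms) / len(q_terms)


def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the top-k docs by the toy score."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]


def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble the context the model actually sees."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Use only this context:\n{context}\n\nQuestion: {query}"


docs = [
    "US holiday policy: employees get 12 paid holidays.",
    "Greece office holiday schedule follows local law.",
    "Expense reports are due monthly.",
]
print(build_prompt("What is the US holiday policy?", docs))
```

Whatever the technique ends up being called, this is the shape that survives: the model only sees what retrieval hands it, so retrieval quality bounds answer quality.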
When we look at what it means to provide the right context, maybe we'll start somewhere else. If all you're asking is a world-knowledge question, the models should be able to just answer it, because they have great general world knowledge. But if you're doing anything interesting, it involves your own business's information, the stuff that only you have access to, that those external models aren't trained on. And the most interesting business data is live; it's changing continually. It's not train it once and run it for a year, or even for a month. It's, no, we got a new customer today, I need to know what to tell them, I need to know their information.
This ability to bring context to those models, we believe, will continue to be critical. Whether we still call it RAG a year from now, two years from now, five years from now, I don't know, but the idea of bringing the correct information that the model needs to the model, that is not going to go away. This is going to be important for a very, very long time because...
Corporations aren't going to want all that data out in the wild anyway, right? So there's this proprietariness, this IP nature of my enterprise data.
That's right. There's also a security aspect. One of the things that sometimes gets overlooked in the excitement around these use cases is that if you and I are inside a company and we both ask the agent the same question, we probably shouldn't get the same answer if we're in different departments or have different customers. If I'm at a financial services institution, say an analyst supporting high-net-worth individuals, when I ask a question, I shouldn't be able to see information about somebody who is not my customer. That's scary, right? The ability for these systems to have per-user, per-document, and per-field-within-a-document security is absolutely critical.
That is actually something we already see when people choose a vector database to back even today's GenAI applications. They say, I need the security to really work, and they come back to us and say, this is something Elastic, as a mature data platform, has had for years; this is the kind of system I need to build the rest of this application on. I think we'll see that get more prominence as these use cases get beyond the proof of concept, especially on the agentic side, where you're unleashing the model. You'd better be giving it the right answers, because otherwise who knows what it will do next.
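The per-document and per-field controls described here map onto Elasticsearch's role definitions, which support document-level security (a query template scoping what each user can see) and field-level security (granting specific fields). A sketch of such a role, expressed as the JSON body you would PUT to the `_security/role` endpoint; the role, index, and field names are made up for illustration.

```python
import json

# Illustrative Elasticsearch role combining document-level security
# (a templated query scoped to the requesting user) with field-level
# security (only granted fields are visible). Index and field names
# here are invented; the overall shape follows Elasticsearch's
# role-definition API (PUT _security/role/<name>).
analyst_role = {
    "indices": [
        {
            "names": ["clients-*"],
            "privileges": ["read"],
            # Document-level security: an analyst only sees documents
            # whose advisor field matches their own username.
            "query": {
                "template": {
                    "source": '{"term": {"advisor_id": "{{_user.username}}"}}'
                }
            },
            # Field-level security: expose only non-sensitive fields.
            "field_security": {
                "grant": ["client_name", "portfolio_summary"]
            },
        }
    ]
}
print(json.dumps(analyst_role, indent=2))
```

Because the filter is enforced at the data layer, a RAG or agentic application built on top inherits it automatically: retrieval simply never returns documents the requesting user isn't entitled to see.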
If I have been using Elastic for, let's say, observability in my organization, I've pointed it at my corporate data stores and indexed all that data, right? Then you have the metadata around that data. Now you can control what you surface to a model, or what you surface in, let's call it an intelligent application, that comes after the data, right?
That's right. Having that ability to control what data you're bringing forward makes a huge difference. One of the fun things, and you highlighted it there naturally, one of the fun things about building on Elastic, even for something like observability, is that the data often has a business aspect to it. If I go back to one of my favorite historic use cases, there was a telecoms company in South America that was using us just to observe their cell phone towers.
One of the fun things they did is they said, let's use the disconnect data in our telemetry to see where people are typically disconnecting, geospatially, and then build a cell tower there, because now we know where people are with our devices. We know where they're losing signal.
Where they're losing signal. Yeah.
Incredibly valuable. They started by looking at this as an observability problem just to say how are the towers performing. They realized that there's another way to apply that kind of data. I think that there's a lot of that kind of innovation that we're going to continue to see building on top of, you know, the other kinds of information that people are collecting across their business.
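The pattern the telco applied, bucketing disconnect events by location to find hotspots, maps naturally onto a geo aggregation. A sketch of what such a query could look like; the index and field names are invented, though `geohash_grid` is a real Elasticsearch aggregation type that buckets geo-points into grid cells.

```python
import json

# Hypothetical query: bucket dropped-call events by geohash cell to
# surface where devices most often lose signal. Field names are made
# up; geohash_grid is a standard Elasticsearch geo aggregation.
disconnect_hotspots = {
    "size": 0,  # we only want the aggregation buckets, not the hits
    "query": {"term": {"event_type": "disconnect"}},
    "aggs": {
        "hotspots": {
            "geohash_grid": {
                "field": "device_location",  # a geo_point field
                "precision": 5,              # ~5 km grid cells
            }
        }
    },
}
print(json.dumps(disconnect_hotspots, indent=2))
```

The same telemetry that answers "how are the towers performing?" answers "where should the next tower go?", which is the business-value crossover being described.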
That's something I've talked to some of the other observability players about, Dynatrace for example, we were talking earlier today about this. Instead of just observing an IT system to see how it's functioning, there's business operational observability. Are you seeing customers looking at using your platform for more business operations?
Yeah, it's a good use case. It's a tough one because every business is shaped differently. It doesn't necessarily look like a standard repeatable, well, here is exactly how you connect your purchasing department to, you know, your accounts receivable or something. There are certain patterns I think that can be replicated there. What we're seeing is people being creative with the data because Elastic as a platform gives you that power and that flexibility. I think that that's almost like a value add. You know, we do the core thing, start by, you know, observing my infrastructure. Then what else can I use this for? How do I learn more about my customers and their behavior and feed that back into my sales and my promotional machine?
I think that that's a natural extension in some senses of how you would build on Elastic. It's not necessarily a separate vertical or something like that that we track in that same way, but it's something that the flexibility of us as that data platform, yeah, makes it much easier, I think, than a system that was much more closed that was saying, here's your, you know, your APM UI, here's your metrics UI, here's your, like, no, this flexibility means you can look across those sources when and where it matters.
We're coming up on our time; we've got four or five minutes here. Steve, I want to ask you a couple more things. You announced an integration with NVIDIA Enterprise AI Factory last month. Can you tell us a little about what you're doing there?
Yeah, yeah, it's a great partnership. We're so happy with the partnership with NVIDIA. Obviously, in the world of inference, the world of AI, NVIDIA is the place where you want to be running a lot of those workloads. One of the things that NVIDIA has started to do is they've said, how do we help to box up, package up all of the things that you need from the hardware side through vendors like Dell and others, through the NVIDIA chips themselves, all the way down to the software layer to make use of that to build compelling applications. You know, we're really happy to be the vector database inside that reference architecture that they're providing.
That kind of partnership, working with a company like NVIDIA to be the default vector database in that reference architecture, is huge. It's validation of the way we partner and of the way we support these kinds of applications in a rich and powerful way, such that the leading company in the AI universe wants to pull us into that reference architecture. I think it's another example of where having our technology present and visible with our partners is really valuable to us, and I think it will help simplify the process of building for a lot of our customers.
Excellent. Excellent. I'm going to ask you just in the last two minutes here to put on your, you know, long-term vision here. Where does agentic AI go for Elastic? Like what can this thing look like in a few years?
Yeah, it's a good question. I think there are a couple of ways to look at this. One of the promises and hopes of agentic workflows is that we can start to automate a lot of the high-toil manual processes across businesses. There are a lot of those inside every single business. As more of those workflows get automated and the technology catches up to really deliver on that, I think it's a very exciting future, because it means people can focus on higher-value work instead of a lot of the repeat motions. It means these things can become a base that we build from.
Ultimately, those workflows will only succeed, I think, if they have the right context, the right access to the right information in the right way across a business. Our role in that is pretty clear: being the context provider for those workflows. The heritage of our technology, going all the way back to the early days of search and relevance, through to modern search and relevance on top of LLMs, embedding models, and vector search, gives us the advantage of being a better platform to build these things on. That's where I think you'll see this evolve, and where I think we can play an exciting part.
Yeah, it's fascinating and rapidly evolving. There's no question about it. Listen, Steve, this has been fantastic. Thank you very much for taking the time with us this afternoon and we'll be watching Elastic for additional really cool innovations over the next couple of years.