Good afternoon, everyone. I'm Blair Abernethy, joining you again here from Rosenblatt, covering software. Today we have Elastic with us. We've got Matt Riley, who's the General Manager of the Enterprise Search Solutions business. Welcome, Matt. We also have Nick Welbort and Janice Oh from investor relations. We're going to focus our discussion with Matt. Maybe, Matt, just as a quick introduction, for some of the people who may not have looked at Elastic in a little while, give us a high-level view of what Elastic's software does and your role in the business. That would be very helpful.
Thanks again for having me. You know, Elastic is the company that builds Elasticsearch, which is one of the most popular and widely adopted open-source search technologies in data platforms and on the web broadly. My role: I am the General Manager of the search group here, so I lead product and engineering, from a high level, for anything related to building search applications. Primarily, I focus a lot on the developer community that has been built up around Elasticsearch, which is really one of the strongest adoption paths for Elasticsearch itself. That has led to a lot of the investments that we're making in AI and ML, some of the things that I think are going to be the top of the conversation here today.
I've been at Elastic for about six years now, and I'm very happy to be here.
Great. Yeah, let's start with AI. Maybe from an Elastic perspective, just a bit of background on what work you guys have been doing in the last few years, what you've fielded. Maybe just talk historically, sort of where you're at. Then we'll shift to what's happening today with, obviously, the large language models and so on and so forth.
Sure. Yeah, you know, our investments in AI and machine learning go back quite a few years, actually. It probably started with the acquisition of a company called Prelert back in 2016, which was an anomaly detection, machine learning-focused company that we integrated fully into the Elastic Stack. Historically, that has powered a lot of the things that have emerged, primarily in the observability and security use cases: things like time series data analysis, anomaly detection, and forecasting use cases, which have been very popular among our customer base. More recently, about two years ago, we started investing in capabilities around vector search and large language models.
The ability to do vector search right alongside BM25 text retrieval, which was the original retrieval algorithm of Elasticsearch, and then also the ability to bring transformer models directly into your Elasticsearch cluster, allowing them to perform inference and do things like text embedding or sentiment analysis and classification. All those capabilities were investments that we started about two years ago, and they recently became generally available for our customer base, all within the 8.x release cycle over the last several months.
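For readers less familiar with the term: BM25 is a standard lexical scoring function, not something Elastic-specific. A minimal pure-Python sketch of the idea (a simplification for illustration, not Lucene's actual implementation inside Elasticsearch) might look like:

```python
import math
from collections import Counter

def bm25_score(query_terms, doc, corpus, k1=1.2, b=0.75):
    """Score one tokenized document against a query with the classic BM25 formula."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N          # average document length
    tf = Counter(doc)
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)     # document frequency
        idf = math.log(1 + (N - df + 0.5) / (df + 0.5))
        f = tf[term]                                 # term frequency in this doc
        score += idf * (f * (k1 + 1)) / (f + k1 * (1 - b + b * len(doc) / avgdl))
    return score

corpus = [
    ["elasticsearch", "is", "a", "search", "engine"],
    ["vector", "search", "uses", "embeddings"],
    ["the", "cat", "sat", "on", "the", "mat"],
]
scores = [bm25_score(["vector", "search"], d, corpus) for d in corpus]
best = max(range(len(corpus)), key=lambda i: scores[i])
print(best)  # → 1: the document matching both query terms scores highest
```

Elasticsearch's `match` queries apply a tuned variant of this formula via Lucene.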
You mentioned the vector search. Can you just walk us through a little bit of that? Just, not too deep into the weeds, but just give us a sense of what's different about that and what's the, you know, I guess, the value prop for your customers.
Yeah. Vector search is fundamentally kind of a different type of search or similarity algorithm, where, you know, historically, if you're retrieving something through a search engine, you're typically typing keywords and looking for matches of those text chunks in the documents in the corpus. With vector search, everything, both the corpus of documents and the search query, get turned into these vectors of floating point numbers. Essentially, you determine similarity or relevance by performing the distance calculations there. Because it's such a different type of retrieval algorithm, it requires different types of implementation. That's something that we saw emerging, and we saw the promise of that.
What we see now is that text embeddings from transformer models are one of the foundational elements of what we're seeing in generative AI applications. The output, that embedding, is really a dense vector of numbers, and so having the ability to take those text embeddings and do retrieval on them natively inside of Elasticsearch was really critical. When we saw the emerging opportunity with this technology, that's when we started investing and making sure that we had a really fantastic implementation at the core of Elasticsearch that would support the use case right alongside the retrieval algorithms that we originally implemented and that we're so well known for.
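To make the "distance calculations" concrete, here is a toy sketch of dense retrieval: a stand-in embedding function (a real system would use a transformer model, and Elasticsearch would do the nearest-neighbor search at scale) plus cosine-similarity ranking. Everything here, including the tiny 8-dimensional vectors, is illustrative only:

```python
import math
import zlib

def embed(text):
    # Stand-in for a transformer embedding model: deterministically hash
    # words into a small fixed-size vector, then L2-normalize it.
    # Real embeddings come from a trained model and have hundreds of dimensions.
    vec = [0.0] * 8
    for word in text.lower().split():
        vec[zlib.crc32(word.encode()) % 8] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine_similarity(a, b):
    # For unit-length vectors, the dot product is the cosine similarity.
    return sum(x * y for x, y in zip(a, b))

docs = ["how to reset my password", "quarterly revenue report", "login problems"]
index = [(doc, embed(doc)) for doc in docs]        # corpus turned into vectors

query_vec = embed("how to reset my password")      # query turned into a vector
ranked = sorted(index, key=lambda item: cosine_similarity(query_vec, item[1]),
                reverse=True)
print(ranked[0][0])  # → "how to reset my password" (similarity ≈ 1.0)
```

Relevance is purely a matter of vector geometry here, which is why it needs a different implementation than keyword matching.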
This is built right into the Elastic Stack then?
Yeah, that's correct.
All of your customers, as they upgrade, will get this, or if they're Elastic Cloud customers, which is a fast-growing portion of your customers, they'll have access to this technology?
Yes. Yeah, absolutely. It sits right alongside the existing capabilities. It's built right into the core of Elasticsearch. It's also integrated into the same APIs that people are already familiar with from Elasticsearch. If they've been building search applications with our product for the last 10 years, for example, and now they want to move into building things that incorporate vector search capabilities, they're looking at the same set of APIs. Everything's kind of pulled into one, you know, cohesive offering.
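As an illustration of what "the same set of APIs" looks like in practice, here is a hedged sketch of an Elasticsearch 8.x `_search` request body that runs a classic BM25 `match` clause and an approximate kNN vector clause in one request. The index and field names (`products`, `description`, `description_vector`) and the tiny 4-dimensional query vector are invented for illustration; a real vector would come from an embedding model and match the mapped dimension count:

```python
# Hypothetical request body for POST /products/_search on Elasticsearch 8.x.
search_body = {
    "query": {  # lexical (BM25) clause, the API developers already know
        "match": {"description": "waterproof hiking boots"}
    },
    "knn": {    # dense-vector clause added in 8.x, in the same request
        "field": "description_vector",
        "query_vector": [0.12, -0.53, 0.77, 0.01],  # from an embedding model
        "k": 10,
        "num_candidates": 100,
    },
    "size": 10,
}

# With the official Python client this would be sent roughly as:
#   es.search(index="products", **search_body)
print(sorted(search_body))  # → ['knn', 'query', 'size']
```

The point is that the vector clause sits beside, not instead of, the query DSL developers have used for years.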
Okay. Excellent, excellent. So now, recently, a couple of weeks ago, you announced the Elasticsearch Relevance Engine. Let's talk about that and just help people understand what the implications of this are, and then, where are you going with it?
Yeah. The Elasticsearch Relevance Engine, or ESRE, as we're calling it, is really the culmination of all of these kind of foundational capabilities, as I call them. We originated with text retrieval, or BM25 search, then added the capability to do vector search, and then also the ability to bring in transformer models, which create the vectors in many of those cases, and do inference directly inside of Elasticsearch. What we're finding is that oftentimes people want a combination of these capabilities. They want to be able to do BM25 search right alongside vector search. We also have capabilities around what's called hybrid search: the ability to merge these things together into a single request and query result set.
We've also been bringing in some of our own proprietary models. As I mentioned, you can bring in a transformer model that you find, that maybe you've built yourself if you have a data science team, or that you're taking from the open source community, for example. We've also built our own proprietary model that we call ELSER. That is a transformer model that's meant to essentially just improve search relevance. ESRE is really the collection of all of those capabilities into one cohesive set of APIs, and the ability for our customers to pick and choose the fundamental capabilities that they need out of that set for whatever application they're building. 'Cause fundamentally, in the end, Elasticsearch is meant to be a developer tool.
We power a lot of the next generation of applications, so we want to make sure that we're building the functionality, these core capabilities, that the developers adopting Elasticsearch, or who have already adopted it, are going to be looking to use as they build the next generation of AI applications.
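One published technique for the "merge these things together" step in hybrid search is reciprocal rank fusion (RRF), which Elastic has discussed alongside ESRE. A minimal sketch of the idea, with made-up document IDs standing in for real hits:

```python
def reciprocal_rank_fusion(result_lists, k=60):
    """Merge several ranked result lists (e.g. a BM25 ranking and a vector
    ranking) into one. Each document's fused score is the sum of
    1 / (k + rank) over every list it appears in; k damps the influence
    of any single high rank."""
    scores = {}
    for results in result_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

bm25_hits = ["doc_a", "doc_b", "doc_c"]    # ranking from text retrieval
vector_hits = ["doc_b", "doc_d", "doc_a"]  # ranking from kNN retrieval

merged = reciprocal_rank_fusion([bm25_hits, vector_hits])
print(merged[0])  # → "doc_b": it ranks near the top of both lists
```

A document that both retrievers like ends up ahead of one that only a single retriever ranked first, which is the practical appeal of fusing the two result sets.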
Is ESRE, this relevance engine, something that will come with the stack, or is there a certain tier that you have to be at in order to access it?
It is part of the stack. It's part of the core capability of Elasticsearch, so it's not an add-on; it is built into the software. The licensing for some of the capabilities inside of ESRE varies. Some aspects, for example the core vector search capability, are actually available in all license tiers. Things like bringing in your own transformer models and doing inference in Elasticsearch, all the work that we did around hybrid search and the combination of search results, and our own proprietary model, ELSER, are all licensed in our Platinum tier, so those are commercial features only.
Okay. It's very early days; this has only been in the market for a couple of weeks, but clearly you must have been gathering feedback. What were some of the things you heard from your beta customers when they were looking at ESRE?
Well, I mean, we're seeing a lot of interest from the customer base in general, right? Every week I'm having many, many conversations with our existing and potential customers about this capability. I think every enterprise out there is trying to figure out exactly what they're going to do with generative AI. It's certainly something people are very excited about. As you said, I think it is still early days. We're trying to help make sure that people understand what the capabilities are and what that enables for their business. As they're kind of understanding that and building the applications that they have, you know, we expect to see those things move into, you know, production workloads over time.
It's interesting. I mean, your traditional, larger enterprise customers, who have been using you in an on-premises way, have now moved to cloud. Do you see ESRE as really more of a cloud kind of implementation for customers, or will there be some on-prem, behind-the-firewall kind of opportunities as well?
Yeah. There's nothing in ESRE that requires someone to be in the cloud. I think we're seeing very quick adoption in the cloud because that's where a lot of the companies who are quickest to adopt these kinds of capabilities are; they've made a lot of that transition. But I do believe that we're gonna see opportunities in both of those areas.
Interesting. That certainly adds more value to your on-premise solution then, doesn't it?
Yeah, I think so. You know, I think a core fundamental tenet of what Elastic has done very well over time is just trying to meet our customers where they are, whether that's in, you know, one of the major public hyperscaler clouds or on their own self-managed on-premises hardware. We work very hard to make sure that the capabilities we build can be, you know, consistent wherever people decide to deploy these things. You know, we've also made really significant investments in making Elastic Cloud, obviously the easiest place to be able to adopt these capabilities. It's really just point and click, and you can get started in a couple of minutes.
We do all of that while maintaining a really significant compliance posture and allowing our customers to meet the compliance and data sovereignty rules and regulations that they may need to comply with. All of that has been an enormous amount of the work that we've done on Elastic Cloud over the last several years, and I'm very proud of where we've been able to take it over time and where we are today.
It's interesting. You know, it seems that more and more in recent weeks, the enterprises, the end customers, the large customers, are not willing to put their data into ChatGPT necessarily. They're concerned about it. There have been a couple of high-profile issues. Do you really see your enterprises wanting to take your platform and build their LLMs with your technology? Is that sort of where you see this going?
I think that one of the reasons we feel it's important to be able to bring transformers into Elasticsearch is that, in that case, we can really encompass the entire workflow: from retrieving the data that's inside and private to your enterprise, to performing inference on it and perhaps generating answers or whatever results you're looking for from the machine learning models. I do think that's going to be an advantage for the platform, our ability to encompass everything all together.
With that said, we're also seeing the major hyperscalers recognize these security concerns, and they're going to be building capabilities that allow us to continue to have those kinds of guarantees with the customers we share with them. Where they may be running Elasticsearch on Azure, for example, and they want to interact with the Azure OpenAI Service, how can we make sure that the data privacy between those two things is complete? I expect that we'll see that evolve in a very productive way over the next several months.
Is the ESRE technology open source? Is there an open source component to it as well, or is it purely proprietary to Elastic?
All of the code is open. If you go to our GitHub repository, you'll be able to look at the software right there. As for the open source license, as I mentioned earlier, portions of it, like the vector search implementation that we did, that implementation of that algorithm, are part of every license tier of Elasticsearch. You get it with the free and open basic tier, as we call it. But certain other aspects, while being open code, are licensed at the Platinum level, in one of our commercial tiers, where we do require a license.
In terms of adoption, how do we get this moving in the enterprises? Is it push or is it pull? What are your partners saying about this right now?
Yeah, I mean, I would say that by and large, everyone is pretty excited about it. From, you know, consumers who have been using ChatGPT for fun, you know, use cases of their own at home, to companies and the enterprises we work with every day. They're all, you know, excited by the opportunity. I think in the enterprise, everyone is trying to figure out exactly what it means for their business and how they can do the kinds of things that are emerging, the kinds of use cases that are becoming possible. How can they execute on those while, again, maintaining some of the privacy and security, and compliance obligations that they have?
I think there are certainly some aspects of that still to be figured out, and people are working through a lot of it, and I think that Elastic can play a very positive role in helping people get to market quickly with those opportunities. To answer your direct question, I think it's very much gonna be driven by a lot of push from the community, who are very interested in pursuing these things.
Yeah. Yeah. In terms of your other solutions, your observability and the security solutions, what are the implications there?
Yeah. I think one of the beautiful things about what we built with ESRE is that ESRE is this bringing together of a bunch of foundational capabilities, a collection of things that work together to build AI-type applications, that we want to offer to our customers who are doing that. You know, our security and observability product offerings are also those customers. They're able to leverage the same capabilities to enhance the observability and security solutions. Things like what are being called copilots: essentially these very smart bots that can help you inside of your day-to-day workflows while you're using products like ours.
For example, if you get an alert in our observability tool that something's wrong with a Kubernetes cluster, there's the ability to take that alert and then actually generate a remediation plan for you and say: okay, this is what we think the root cause might be, and here's a remediation plan. Doing all of that inside of the observability tool can really streamline workflows and means a lot of that manual work of investigation and debugging can, in many cases, be done automatically. As we integrate those kinds of capabilities into observability and security at Elastic, it's all gonna be powered by ESRE, the underlying Elasticsearch Relevance Engine.
Yeah, interesting. I know it's very early days, but do you see multiple copilots within your search, observability, and security solution sets? Do you see one? How do you think this is gonna play out over the next couple of years?
Yeah. You know, I think that the fact is that we'll probably want to provide a consistent experience for customers, 'cause many of our customers use us not just for security, but also for observability and for search. We have many people who are adopting us across those use cases. We'll want to provide, I'd say, a consistent user experience with the different assistants, or with an assistant in each of those offerings. That said, the underlying models that power those assistants, they may actually ultimately become more and more specialized as we train them on very specific kinds of remediation tactics.
You know, generating a remediation plan for something that's going wrong with a Kubernetes cluster that's misbehaving may be quite different from the kinds of things that we need to know when you're doing threat hunting inside of security, for example.
Yeah.
There's probably an opportunity for some specialization there, but with hopefully a very consistent user experience across all of it.
I mean, the consistent user experience means you can leverage that installed base you have of literally hundreds of thousands of programmers that are experienced on your platform, right?
Yeah. That would be the ideal, yes.
Yeah. Well, in terms of LLMs, how long do you think this is gonna take to really begin to materialize from your customers' standpoint? How are they building things now, or are they just kicking the tires on the technology?
Well, we do have customers today who are building, and have even deployed, some of these capabilities. I think we mentioned a few recently. We talked about a company called Relativity. Their eDiscovery tool uses Elasticsearch at its core, and they're using a lot of our capabilities in vector search and the things encapsulated by ESRE right now. These capabilities are GA today. Other companies we talk about, like a large home improvement vendor that we work with in an e-commerce capacity, are also using a lot of these same capabilities today.
You know, the reality is that we're seeing this from pretty much everyone; not everyone is as far along, and certainly people are at different stages of the adoption curve. Part of our job at the company is to not only build these capabilities into the product, and hopefully predict the right things to make good investments in, but also to educate customers as to what's possible. We have the advantage of an enormous number of existing customers that store and trust Elastic with an enormous amount of their enterprise data already. We have the opportunity to help them understand what's possible now that we're bringing some of these capabilities to market.
How are you doing that, Matt? Is that through your direct teams, service teams, or training partners? I'm just curious what the path is to get these things to really start rolling out in organizations.
Yeah. It's a good question, and really, it goes both bottom-up and top-down, right? Think about the bottom-up adoption motion of developers, the people I spend most of my time thinking about: how can we make sure we are the tool they reach for when they wanna build one of these new kinds of applications? That's certainly something you've probably seen a lot of from us over time: interacting with the developer community, talking about the technologies, and making sure that we're, again, meeting customers where they are, whether that's on the right kind of cloud provider, or whether we're building the right kinds of clients and integrations that they expect with the other tools emerging in this space.
There's certainly a strong thrust in the bottom-up movement here. There's also a top-down methodology as well. We speak to a lot of enterprise-level customers who are coming to us with the same kinds of questions. Those are typically sales-led conversations, where our field teams, whether it's the sales team itself or the services partners, and potentially external partners, are all playing a role in helping us bring these technologies to market.
Is the interest level on the enterprise customer side of things top-down? Is there executive or board-level interest out there, in your mind?
In my mind, yes. There's definitely high-level interest coming from all parts of these organizations. Even inside of those enterprises, of course, there are also developers who are adopting these things. In fact, in many cases that's one of the quickest ways that we're adopted inside of enterprises: by the software developers who are adopting our tools and introducing them inside of these large companies. So yes, to answer your direct question, the interest is really coming from both sides, certainly in the conversations that I'm having.
Interesting. Maybe just from a competitive landscape perspective, give us a sense of who you see as your major competitors out there, from a search, and now enhanced-search-with-LLMs, perspective. How do you see that landscape today, and what gives you guys the confidence that you're gonna win this battle?
Well, we are seeing emerging competition in a variety of the different areas that ESRE encompasses. For example, there are purpose-built point solutions for vector search, where vendors are focused solely on vector search capabilities and building the algorithms underlying that. There are other companies out there building things for doing inference on models, for example: hosting these large language models, performing inference, perhaps taking your text in and creating embeddings for you. Then you're supposed to take those embeddings over to one of the purpose-built vector search data stores.
I would say, though, that our unique value here is that Elasticsearch is really the combination of all of those things, which I would say gives us a significant advantage in the market. What we're finding is that it's very rarely just one of those capabilities that companies need when they're establishing this sort of AI stack. They need vector search capabilities, certainly, but they're also finding, and if you go look at the academic literature, that even the most relevant and best algorithms for doing text retrieval today are actually a combination of both dense retrieval with vector search and text retrieval with capabilities like BM25.
The fact that Elasticsearch brings both of those things together, along with the ability to do ingestion and create embeddings for the vector search use case, all in one cohesive package, is really a significant differentiator for us. We do see competition in many of these different areas, but I think that we're the furthest along in terms of bringing all of the capabilities together in one cohesive offering that people are going to need as they build the sort of AI stack that they're seeking to build, so they can start creating this next generation of applications.
It's interesting. When you look at your very flexible platform, and the observability, security, and SIEM solutions you've built on top of it, it's a very powerful, very adaptable platform. It seems that the LLM angle is a whole new, material area for you. This is not a small add-on feature; it seems to me like it could be quite large for Elastic. Is that the right way to be looking at this?
I think so. It's certainly a significant new capability, and I think we invested well in seeing the possibilities of transformer models quite early on. Transformer models are a relatively new thing; they've been around for maybe four years or so. Seeing how performant they were and the things they were capable of very early on, so that we could start the implementation work to bring those directly into Elasticsearch, I think was really critical. We've seen this proliferate, we've seen models continue to get better and better, and we see broader and broader adoption of those things, both from very large companies and from the open source community, which is creating lots and lots of models for different specialized use cases.
The ability to have Elasticsearch play a role in hosting those models and helping you perform inference on them at scale, all inside of the kind of enterprise-grade software solution that we've been working on over the past decade, means you have the same kinds of expectations of being able to take Elasticsearch into production. Yeah, I think that bringing transformers in and fitting them into our existing ecosystem is obviously a very large opportunity for us.
What's the product direction, I guess, or strategy over the next couple of years, now that you have this other component in place with ESRE? What's sort of the approach going forward here?
Yeah, I tend to break it down into two types of investments. The first is these kind of foundational features that, for the most part, are component parts, many of which are highly complex and not small, but they're all components of a broader ecosystem of interrelated technologies. The primary focus of those things, ultimately, is to build really relevant retrieval models. You have a lot of data inside of Elasticsearch; we wanna make sure that we're investing in the right algorithms and capabilities to help you get the most relevant data out, and you can pick and choose the ones that are best for your application. Those are our type one investments. You can assume that we're gonna continue to improve all of them.
We'll continue to add capabilities when new kinds of capabilities emerge, things that we discover or that we're seeing in the community that are becoming particularly important. Then there are the type two investments, which are improvements to our solution areas: observability and security. I think it's very important that we stay at the forefront of integrating capabilities powered by those fundamental technologies, like an assistant or a copilot, into the observability and security workflows that our customers are going to naturally come to expect as part of their user experience. I think you'll see some of those things from us relatively soon, and those are things that we'll also continue to invest in and refine over time.
For example, there's the opportunity you mentioned earlier of potentially creating more specialized agents over time, agents that are specialized in security versus observability, or even in different aspects of those product lines.
In your mind, Matt, we're all grappling with where this is going with these large language capabilities and large language models. Do you see your customers, the end enterprises, building their own custom copilots for different use cases within their organizations? Or is it really more that you will build them in order to enable your customers to use your software more effectively and more easily?
Yeah, I think that certainly we will build some, right? We will build some of the copilots that sit inside of our observability and security solutions. For many of our customers, they're building products, you know, that are well outside of just those particular use cases. They're building things that are bespoke, custom applications that power their entire business in many cases. We expect that those people will be building their own kinds of copilot-like features, among other things, that are enabled by generative AI. It's not just chat interfaces, but a variety of other things. Again, that's why those type one investments are so important to us, because we wanna make sure that we're building those building block features and offering those in a way that they can be used together in combination with each other, all through familiar APIs.
I think that's a very important aspect of how we're looking at this, and we expect that customers will be doing that. We also expect that they'll be bringing some of their own models. We have customers today who are building or training their own transformer models, so they're not only taking open source models and bringing them into Elasticsearch. Some of our more sophisticated customers have data science teams that are training their own models on their own very proprietary datasets, and they're now able to take those models and put them into Elasticsearch. Really, in those type one investments, our goal is obviously flexibility, and to meet our customers wherever they are, because we want them to be able to build the applications that they envision.
We will definitely be building some of these assistants ourselves, the things that are gonna be sitting inside of our particular product experiences.
Excellent. That's very interesting. I guess, how do you keep Elastic at the forefront of this, as opposed to being down in the plumbing that people forget about? You wanna stay on the innovation forefront, right?
We definitely wanna stay on the innovation forefront. Sometimes that's going to mean being part of the sort of like integrated infrastructure. I actually think that's one of the beautiful things about Elasticsearch, and one of the things that makes us so sticky with many of our customers, is that if we're the foundational tool, or you're using some of these foundational capabilities as part of your stack to build a next generation application, by necessity, many times we're gonna be deeply integrated into the infrastructure that you build, both the software and the hardware infrastructure that you're deploying. Elasticsearch will be in the middle of that. Being part of the plumbing there is actually quite good for us, because it means we're a critical part of how the whole operates.
I think that we're happy to be there, but certainly the goal is absolutely to be at the forefront of the capabilities and the technology, whether that means we're being deployed inside of the core infrastructure, or whether we're powering dashboards that non-technical folks inside of a company are using to answer questions about their business data, through observability or security or even some of our other applications inside of Kibana.
It's interesting, you know, a lot of your model today is driven by consumption: people on the platform pulling in more data, doing more work on the data. What are the implications, from your perspective, of more and more of these LLMs being built on your platform? Is it supplanting regular search and not really being that incremental, or do you think it's gonna be quite incremental?
Well, I think that in both scenarios, whether you're using traditional search or some of these newer capabilities, we tend to see quite a lot of compute requirements, right? They're both compute-heavy applications. Certainly, machine learning and AI are very well known for being highly compute-intensive, which in the long run ultimately turns into consumption within our cloud platform, or even in self-managed deployments, in terms of the total number of nodes and things like that in our pricing model. And a lot of these capabilities inside of Elastic are priced or licensed in our premium tiers. Those things together over time, as our customers move these applications into production, I think will ultimately drive consumption.
Yeah, it would seem to me that because this area is so hot, a lot of your lower-tier customers might want to move up just to be able to get their hands on it.
Yep.
Yeah, which is pretty exciting for you guys, for sure. How about your go-to-market partners, Matt? Do you deal directly with them at all? How are they reacting to this situation, and how are they gearing up for it?
I think that everyone, again, is thinking about this opportunity right now and some of the capabilities inside of it. Certainly, I interact with some of our partners among the hyperscalers, trying to figure out the best ways that we can work together to bring these capabilities to Elastic's broad customer base and vice versa, right? Many of them have needs that are, I think, uniquely served by Elasticsearch and the capabilities we've brought together with ESRE.
We're just about at our time here, and I appreciate you going a little extra for us, because I know you're trying to get back to your conference. If we just look at the other side of the story, what concerns do you have out there for these technologies, and what could be some of the inhibitors to adoption?
I think we've touched on some of them. You know, everyone is still working on these things. One of the inhibitors is just people being able to access and make use of the technology. It's a complicated area, and it's moving quite quickly, so making sure that we're educating customers well there, and that they can find the information they need, is important. We've simplified the developer experience to a point where it's simple for them to get up and running and see these things in action. I think that's very important. We also touched a bit on the data security and data sovereignty requirements of a lot of these kinds of applications, which I think Elastic is in a very good position to help our customers solve.
Those are the kinds of things that I'm seeing as some of the more pressing questions as people start to take these from what could be considered consumer applications, people using ChatGPT or something like that on their own, to really thinking about what it means to bring that into an enterprise application and take it to production. That's definitely one of the things that we're seeing the most of.
Okay, great. Well, this has been fantastic. Is there anything else that we didn't touch on where you'd go, "Hey, I want to make sure I mention this"?
No, I think that we've covered everything. You know, I really appreciate the time. Thanks for having me.
I appreciate your taking the time out of your busy day. It's been great, and we're looking forward to lots more great traction in the market for Elastic. Thanks very much, Matt, for joining us.
Thank you.