Thank you everybody for coming. It's day one. It already feels like day two to me. So much content, right? I was telling everybody that we have 3 days, 200+ presentations, 50 of them software. So you're part of that exclusive 50 software, and 30 of them that I'm running back and forth.
It's great to be here.
I just crack jokes and ask questions, but you guys do the tough work. I'm like the prompt engineer. I just ask the questions, and you guys are the brains behind the whole thing. So Ash and Janesh, thank you for coming. Yes, great to have you guys.
Thanks for hosting us.
Yeah, thanks for having us.
Absolutely. So I can hear my echo very loudly. Ash, it's been, what, two years now since you took over as CEO?
A little under, yeah.
Yeah. Yeah.
Over a year and a half.
How has it been? I mean, how are you liking the, the CEO role? Is it all it's cracked up to be, or,
You know, they don't tell you about the macroeconomic environment turning on you just as you take on the job. But, no, look, it's been an interesting journey in every way. Like, we've made a lot of progress in the area of just our focus on cloud. We've made a lot of progress on making sure that we can, you know, drive both growth and profitability in the business. The focus on consolidation and helping customers really consolidate out onto the Elastic platform, like, that's been a really good motion for us, and it's paying good returns. And then, generative AI. You know, today, the entire discussion, I think I spent more time talking about, you know, just our vector functionality and everything that it does
Yeah
more than anything else. So it's a huge resurgence of interest and excitement that we are seeing in our customer base for search, and that's been just awesome to watch.
We could talk about vectors for all of this presentation.
I can-
All about it.
I can imagine.
Yeah, we'll get to the good stuff in a minute, but as you look out, how would you want Elastic to be seen? Let's say, you know, what is your defining stamp on Elastic gonna look like five years from now? What do you want it to be known for?
Yeah, I, I think, our, our mission is pretty clear to us, right? So the way we see ourselves, we see ourselves as, a data analytics platform, an AI-powered data analytics platform for all kinds of real-time search use cases. What we really do incredibly well is allow you to bring in any and all kinds of unstructured data very quickly, make it all searchable, apply all kinds of analytics, using machine learning, using other techniques, and then get value out of it, irrespective of the domain. Obviously, we get used a lot for, use cases around e-commerce and so on, but also around observability and security, and I expect that to continue to be the case going forward.
You know, in the next five years, and, yeah, it's like, to me, we have a massive opportunity ahead of us to build a multi-billion dollar business. That's our focus, and that's what we're gonna continue doing.
Great. And since you talk to customers quite a bit, I would love to get your perspective, as you're the doctor that feels the pulse of the spending environment. What are customers telling you about next year and how they view tech initiatives? And I'm happy to share with you what I've heard from other CEOs. We've been doing this, asking the same questions pretty much all day long. What are you hearing from your customers next year?
Now, sure. So, there are two discussions that I seem to have, like, much in the same breath almost. The first is, everybody is still very focused on making sure that they can control their costs.
So the whole cost discussion is still very significant for customers and CIOs, decision makers of all kinds. And then the second is around generative AI.
Right? So everybody has a sense of what they can do with it. You know, in the last few months, we went from, "What is this?" to now everybody having a game plan, at least, of what are the use cases, the early applications that they want to invest in and build generative AI applications for. And now it's a matter of them going through that implementation.
But it's, you know, the cost pressure has not gone away.
From our perspective, we're actually leaning into it because we see an advantage that we have. You know, our value proposition is incredibly strong. Our pricing and the way we price is incredibly advantageous to customers, and given the number of things that they can do with us, we are using the current situation to our advantage.
I, you know, I don't think that that pressure is gonna go away anytime soon.
That's good. So if you were to ask people, your customers are still in a cost-constrained mode, and they're approaching next year the same way, or, you know, after all, we've been through an unusual period. The last 12 months, rates have gone up from 0 to, not 12 months, but rates have gone up
from zero to 525 basis points over the space of 18, 19 months. Unusual time. But if things were to stabilize a little bit, Jan Hatzius, our chief economist and head of global research, presented earlier today, and he cut his recession probability from 20% to 15%, and he sounded quite positive on, on rates, inflation, the economy, et cetera. So if things were to stabilize, do you think we could get back to the point where the industry could re-accelerate next year? And what are your opportunities in a stable economic environment? 'Cause we can look back at the last 18 months and say, "Wow, we survived! I mean, I have a job.
Um, so-
But the next 12 months, what if it's stable?
So we, we did see, the consumption optimization stabilizing in Q1, and we talked about it on our earnings call. And, you know, Janesh can, or we can talk more about it. But we definitely saw that, that the motion of, people trying to optimize their existing workloads start to stabilize. And the best way I'd characterize it is, generally, you know, we are hearing from customers that they are, they are where they want to be at this point.
Like, they started doing things like moving more data to cold storage, which allows them, you know, even though the search performance might be a little slower, it allows them to save not only on what they spend on Elastic, but on the infrastructure cost.
It helps them save a lot on the infrastructure cost. So we saw them do those kinds of things. We saw them start to sample data in certain cases, but the reality is that there's a limit to what they can do with all of those kinds of optimizations. You know, I can store most of my data in cold storage and leave about seven days' worth of data, or maybe one day's worth of data, in the hot tier, where it's, like, on disk and immediately accessible for querying and search and so on. But you're not gonna reduce it below one day, because, like, you need at least that much data to be able to get your log analytics information or your security insights or whatever you're doing. So we are at that point where it feels like
Okay, investigating, like, a security outage, and somebody says: "It's gonna take me three days to get back to you because my data
Right. So there's a natural limit, and data volumes are continuing to grow. So we always knew that there was gonna be a certain point to which these optimizations would happen, and then, you know, things would stabilize, and we saw that stabilization in Q1. And like we've talked about, you know, I'd like to see that, I'd like to see how Q2 plays out. And, you know, once we see Q2 play out, like, we'll be able to continue evolving, our guide and so on. But, you know, Q1 was a very encouraging sign. At the same time, we saw all the commitments that customers had been making onto our platform also start to ramp up. So these are the commitments that we saw customers make in the last, you know, several quarters, where we have been leaning into this consolidation play.
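[Editor's aside: the hot/cold tiering Ash describes — keeping roughly a day to a week of data on fast storage and aging the rest to cheaper tiers — is typically expressed in Elasticsearch as an index lifecycle management (ILM) policy. A minimal sketch follows; the phase ages and rollover thresholds here are illustrative assumptions, not a recommendation or Elastic's defaults.]

```python
# Sketch of an ILM policy implementing the hot/cold split described above.
# All ages and sizes are illustrative; real values depend on query
# patterns, retention requirements, and budget.
ilm_policy = {
    "policy": {
        "phases": {
            "hot": {
                # Recent data stays on fast storage, immediately searchable.
                "min_age": "0ms",
                "actions": {
                    "rollover": {"max_age": "1d", "max_primary_shard_size": "50gb"}
                },
            },
            "cold": {
                # After 7 days, indices move to cheaper hardware; searches
                # get slower, but infrastructure cost drops significantly.
                "min_age": "7d",
                "actions": {"set_priority": {"priority": 0}},
            },
            "delete": {
                # Eventually drop the data entirely.
                "min_age": "90d",
                "actions": {"delete": {}},
            },
        }
    }
}

# This body would be installed via the ILM API, e.g.
#   PUT _ilm/policy/<policy-name>
```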
Like, those new workloads that are coming onto the platform, we had talked about this, that it takes some time for those workloads to ramp up.
If it's a new workload, we are starting to see that ramp up.
So there's a lot of positivity there.
Then, you know, generative AI is a whole other, you know, tailwind in a way, that hasn't even fully shown up in the revenue yet, right?
So we have hundreds of paying customers at this point, but those implementations and workloads are in the early stages of production. And because we are consumption-driven, we recognize revenue when more queries happen, when more data comes into those workloads. So that will take its natural ramp in time. So we haven't factored any of that into FY 2024, but more into FY 2025. So I think there's a lot that gives us confidence in the business.
You know, so from our perspective, the way we are looking at the macro is that, you know, we're looking at it as being unchanged.
Yeah
and stable in some ways, right? So, but there's a lot of positivity that comes from just all the actions that we've been taking, and that makes us feel really good about the opportunity ahead.
That's great. So, Janesh, I told our clients that I would not use the expression "drilling down," "double-click," "shifting gears," so if you
Changing focus.
Changing focus, yeah. None of that. I'm not gonna use any of the terminology. But I wanted to get your perspective on the business, the evolution of the business model. You've certainly got going with the cloud quite a number of years back. You had a consumption bent, you still have a consumption bent to it. How are things looking like with respect to balancing out the business model? Are you at a point where you clearly know what kind of customer use cases or cloud versus on-prem, and how this assessment might change in the future, since you've been at the company for a very long time? What's your
Yeah, happy to talk about it. So, you know, from the standpoint of thinking about cloud versus self-managed, we've always let the decision be led by the customer. At the end of the day, people will deploy us where their applications and infrastructure reside. And for every customer that says: "I'm all in on the cloud," you look at them five years later, and, you know, you'll see that many of them have barely moved an inch. So we've always said: Look, we'll never be the reason to slow you down. If you wanna be on-prem, great. If you wanna be in the cloud, great. Now, we prefer that they are in the cloud because it's better for them, it's better for us. They get many more capabilities and advantages from, from being in the cloud.
So we place much more emphasis on cloud, but it's never been about a forced march of any sort. I know that's been, you know, something that some others have tried.
But for us, it's always been, let the customer decide. And over the course of the past several years, we went public in late 2018, and from that time frame, when cloud was probably around, you know, low to mid teens in terms of the mix of our total revenue, it's increased. Now, it's 41% of total revenue. And looking ahead, I do think that cloud will continue to be a growth engine for us. It will continue to grow faster than the business overall. We will continue to see a shift more towards cloud in terms of the mix of the business, and that's just the nature of the workloads and where they reside, and the fact that most new workloads that customers start up tend to start in the cloud, and then they grow in the cloud.
So that's where we see a lot of the growth. Our partnerships with all the hyperscalers have actually helped quite significantly in that regard with AWS, GCP, and Azure. Those are working really well for us. We've won a number of awards from them. We've built deep technical integrations with them. Then the other dimension that I think about in terms of the evolution is around solutions, which you alluded to as well. If I think about the solutions as security, observability, and traditional search, which has been the bread and butter for us
You know, at the end of the day, all of them have been growing quite nicely, and particularly with generative AI, as Ash was saying, there's a new level of excitement around search in particular, but that also finds its way in the form of capabilities for observability and security with things like the AI assistants that we are launching. So I do see all the solutions areas continuing to grow quite nicely for us. We are not trying to engineer a mix shift. We've not set targets to say we want one solution to grow faster than the other or anything of that sort. So as long as they all continue to do well, mix shifts, if they happen at all, will play out over a much longer time frame.
We're happy with the balance that we've got, and the balance that we've got in the business has been one of the core strengths of the business for quite some time now.
Got it. I think, maybe I'm getting the companies mixed up, but Ash, you have an MBA, right?
Yes.
And you understand finance really, really well, and you're telling me that you understand discounted cash flow and the net present value and all that stuff.
Better than I do? I don't pretend to know more than Janesh or many others of my colleagues, but yes, at some point in my life, I did spend time here at Berkeley.
Yeah. Oh, okay. That's, that's a great business school. Somebody in my family also went there for business school, so I'm partial to it. Now, getting to ESRE. So, how long has this product been in development, and when did you take the cover off? And, talk, talk to us about that overall platform, and then we can get into vector search.
Well, we started investing in vector database functionality a while back, like almost three years back, when we started looking at, because where this journey started was really with our appreciation of foundation models and the progress that we were seeing with these transformer models and how quickly that was evolving. We were tracking everything that was being published, and one of the first things we did, around four years ago or so, was make it possible for you to bring, you know, external models onto our platform and run them natively in Elastic. So bring your own PyTorch models. We built an integration with Hugging Face pretty early, and the idea was, like, just go bring your models, run them on Elastic, natively on data that's sitting in Elastic.
You know, ML, we always felt, was gonna be a very differentiated element, and the way we built our platform is that everything we build above it takes advantage of it, right? So our AIOps functionality for observability was built directly using that same platform functionality on ML. Same with behavioral detection for security; it was all built that way. And then, a little over three years ago, we started, like, really looking at this area of vector databases and dense vectors and what it takes to create a vector embedding using inference models, and then how do you store them and search across them very efficiently. And we reached some decisions pretty early based on the state-of-the-art and what we believed was the right way to do it.
We were pretty clear that we didn't want to build this functionality as a bolt-on.
We are now seeing all the benefits of that.
That's the reason why we built it directly into Lucene. And the reason for it is there is so much contextual information that you need to provide in any kind of search, everything from geolocation to personalization to document-level permission. So what data do you have access to versus not? All of that is context that needs to power how you search across information in a database. And so we have, over the years, built all of those capabilities, that richness of functionality, and we knew that even in these vector use cases, all of that was gonna be super important.
Building it as a bolt-on was not the right answer.
So we wanted to build it in the core. So we started working on that, and two years ago, around the time of 8.0, we released the ability to store dense vectors in Elastic and search across it using what I would describe as brute force methods. So we implemented the nearest neighbor algorithm, the kNN algorithm, and that was a while back. And we saw customers start to use it, but the search was not that efficient.
because it was quite literally a brute force method. It works well, you know, up to a certain scale, but not beyond that. But we had been already working on the next evolution of that, which was the ANN algorithm, the approximate nearest neighbor, which tends to be more sophisticated, more efficient, faster. It uses something called the HNSW, the Hierarchical Navigable Small World graphs, which most vector database companies have used as the underlying algorithm. And that we released, I think it was 8.6, I forget the exact time now, well, six months ago.
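[Editor's aside: the brute-force kNN Ash describes has a simple shape — score every document vector against the query and keep the top k. A pure-Python sketch of that idea follows; this is an illustration of the method, not Elastic's Lucene implementation, and real systems replace the full scan with ANN structures like HNSW precisely because this stops scaling.]

```python
import heapq
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def brute_force_knn(query, docs, k=3):
    # Score *every* document: O(n * d) work per query, which is why
    # brute force works "up to a certain scale, but not beyond that",
    # and why ANN indexes like HNSW exist.
    scored = ((cosine(query, vec), doc_id) for doc_id, vec in docs.items())
    return heapq.nlargest(k, scored)

docs = {
    "doc1": [0.9, 0.1, 0.0],
    "doc2": [0.1, 0.9, 0.0],
    "doc3": [0.7, 0.3, 0.1],
}
top = brute_force_knn([1.0, 0.0, 0.0], docs, k=2)  # doc1, then doc3
```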
And we released that along with our own zero-shot inference model that we call ELSER. There were a bunch of other capabilities that we released. We talked about the fact that there were some new features coming. All of that we released six months ago, and when we released that in 8.6, we really started to see, you know, customers pay attention and start to use the vector functionality in meaningful ways. And since then, we've been, you know, continually delivering on our roadmap. Like, we've recently released functionality that allows us to take advantage of native instruction sets in modern CPUs, what are called SIMD instructions, so single instruction, multiple data, where you can take an instruction, and in the chip itself, apply that against multiple, you know, data elements, so across a vector.
You know, there's a lot of FUD that Java can't take advantage of these kinds of things. Java itself has been evolving. Like, you don't have to make a choice between being thread safe, you know, using a language like Java, and not being able to take advantage of some of the CPU instructions in modern CPUs that are now available. So Java has incubation APIs now that let you use these instructions directly. We've already taken advantage of it. So we have been constantly evolving on our performance, on the ease-of-use capabilities. Like, you know, on the ease of use front, how do we allow you to just, you know, have Jupyter Notebooks that you can use to build these applications very, very quickly? So there's a lot of work that we've been doing. This is a big focus area for the company.
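[Editor's aside: the SIMD idea Ash references is easiest to see as lane structure — instead of one multiply-add per step, the same operation is applied across several data "lanes" at once. The toy Python sketch below mimics that structure with independent accumulators; it's a conceptual illustration only, not the Lucene code or the Java Vector API, and in real SIMD each lane group is processed by a single CPU instruction.]

```python
def dot_scalar(a, b):
    # Scalar style: one multiply-add per element, in order.
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

def dot_lanes(a, b, lanes=4):
    # SIMD style: keep `lanes` independent accumulators and process
    # `lanes` elements per step, then combine at the end. Hardware SIMD
    # does exactly this shape of work, but each group of lane operations
    # is one instruction in the chip itself.
    acc = [0.0] * lanes
    n = len(a) - len(a) % lanes
    for i in range(0, n, lanes):
        for lane in range(lanes):
            acc[lane] += a[i + lane] * b[i + lane]
    # Handle the leftover tail that doesn't fill a full lane group.
    tail = sum(a[i] * b[i] for i in range(n, len(a)))
    return sum(acc) + tail

a = [float(i) for i in range(10)]
b = [1.0] * 10
# Both orderings compute the same dot product (0+1+...+9 = 45).
```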
So it's been a long journey-
Yeah
so to speak. So this was not something that we launched in a hurry or launched, you know, without forethought.
It's really well thought out. So, I know that, Janesh, when you and I met for the first time with your founder, it was about search. That was the killer application, right? This is like six, seven years back or so. We're coming back to what made Elastic such a great piece of technology. So do you think, in some sense, search becomes the defining use case? And are we at a point where you could envision the beginnings of a platform, tooling enabling new applications to work, and you could start this as almost a separate product, a separate SKU, a separate business use case?
I, so-
Where are you going with this? Because it sounds like this is-
Yeah, I...
fascinating
I can imagine how, from the outside, if you've heard us talk a lot about observability and security in the last few years, you would think that, you know, search was no longer how we think of ourselves.
I never thought that way, though, and Janesh knows that, right?
But we-
First search
search internally.
It's always been about search.
Internally, that's the way we felt. We've always felt that it's always been about search.
Yeah.
The way we got into observability and the way we got into security, and the way we differentiate and win in observability and security is because of search.
Right? So our focus, like when we play in observability use cases, like we've said, we always lead with log analytics. And the reason why we win is when you're dealing with logs, it tends to be extremely complex. Logs don't always tend to conform nicely to a particular, you know, format or schema. Application logs tend to be all over the place, and when you're dealing with that kind of data, we are incredibly good at helping you make them understandable, searchable, analyzable. That's the reason why we win.
Same with security, because when you're dealing with security logs, security data, you're trying to coordinate across these different data types. They tend to be all over the place, and our ability to do that incredibly well, incredibly fast, at very large scale, that's why we win. So we've always had that core search DNA. Now, what we typically have classified as enterprise search, right? So whether it's e-commerce search or, you know, all kinds of
other, broad set of search use cases, for those enterprise search use cases, generative AI is definitely creating a massive resurgence of interest-
and excitement, which is great. Like, that's very good to see. But generative AI is also making observability and security really interesting for us.
Why is that?
So we released our Security AI Assistant. It's now in beta, and our Observability AI Assistant is now in tech preview. And we are actively showing it to customers, getting feedback. Customers are playing with it. It's built on top of ESRE, so it's built on our core platform, much like everything else we do.
You don't call it Copilot.
We call it Assistant. Yeah, it just felt wrong to call it Copilot, because I love the term Copilot. I think Microsoft did a wonderful job by calling it Copilot, but now it almost feels like it's their term, right? I'd feel, you know, guilty about it.
Yeah, there are more Copilots than pilots.
You know, but you know
Like, an order of magnitude, like
Totally give credit
That is cool.
to Microsoft for it, man. It's a wonderful term in my opinion, better than Clippy. But you know, the assistant is built on top of ESRE, both of these assistants.
Yeah.
The kinds of things it lets you do: effectively, based on an alert, as an example, it gives you very prescriptive guidance on the next steps, and you can have an interactive conversation with the assistant to say: "Okay, take that particular alert. What does this error really mean? How many places am I seeing this error?
What do I do next about it?" And the assistant will even do things like ask you: "Do you want us to open a case in, you know, PagerDuty, or ServiceNow, or whatever?" And through our integrations with them, allow you to open that case.
This is all available now?
This is all in beta. Yeah, we are showcasing this to customers.
Wow!
So, I know not everybody gets excited about reading what the IDCs and Gartners and Forresters of the world say, but IDC did a market perspective where they sat with us, we showed them a demo, and they've done a write-up on this. When you think in terms of how you can make it easier for less sophisticated security analysts to use the security tooling they have available to them and have a better outcome, like, this is really valuable. So we are seeing, you know, wonderful conversations coming out of it. It's fun when you see a customer actually stand up and walk closer to the TV to see the generation of all of this information and the assistant when you're showing them a demo.
I feel excited about not just what this is gonna do for search, but also what it can do for other domains that we play in.
Which is why, like, you know, but coming back, it's still all about search.
Yeah. Yeah. I'm gonna ask you, Janesh, a question, but before that, you guys are all invited to come back to next year's Goldman conference. You have to register yourself and your copilot. So just be warned. We'll all have copilots.
Yeah.
Maybe multiple Copilots.
Your AI personality.
So
Maybe your Copilot can register you.
I know, yeah, right? Yeah, I know. It's a world of possibility. So, Janesh, how are you gonna get paid? What is the monetization model for generative AI? I think a range of companies have come up with $30 per user per month. Another one has raised prices by 60% for the Pro Plus version. How are you looking to monetize? Are you taking a different tack, or,
Yeah. Our approach is to actually bring the same simple, powerful pricing model that we've always had with the product, and bring that to AI as well. So the way I think about it is, in terms of the capabilities that you need for generative AI applications, the machine learning capabilities, the advanced features, those are only in our higher paid tiers, in Platinum or above. The AI assistants that we talked about, when those go GA, those will be in the Enterprise tier. So for somebody to take advantage of all of those features, they have to be in the highest paying tiers.
Now, there could be other reasons why they wanna move up to the higher tiers as well, but fundamentally, if you're in a lower paying tier and you wanna take advantage of these features, you have to pay up, and there's a pretty significant price difference between the most basic tiers and the higher level tiers.
Is there a price increase for the highest tier that includes the AI functionality, or?
Well, no. So if you're in the highest tier, in the Enterprise tier, you'll have the features for AI included, you'll have the assistants included. What happens then is, depending on the kinds of use cases you're running, if you're running more AI-related use cases, those will tend to be much more resource intensive
because they are highly compute intensive.
Correct. Yeah.
At that point, it causes the meters to spin faster.
And that's how we monetize, because ours is a resource-based pricing model, and it's tied directly to the value that customers get from the underlying technology. And that's been one of the cornerstones of our pricing model that has helped customers over the past several years, make sure that they are tying business value to the amount that they are paying us.
Yeah. Yeah.
So that will continue, and as the meters spin faster with these more compute-intensive use cases, that then drives incremental revenue for us.
Yeah. When you say meters spin faster, I grew up in India, so we have these auto rickshaws and
Yes, you're familiar with fast-spinning meters.
Yeah, fast spinning. Exactly. Anybody been in an auto rickshaw here in the audience? If you, No. Nobody's been on an auto rickshaw. Wow! Have you been in an auto rickshaw?
It's-
We both grew up in Bombay.
It's been a while.
Okay, yeah.
It's been a while.
It spins the meter.
Yeah.
We would pull the guy and say, "You, your meter is spinning really fast.
Slow down, buddy!
Yeah. Anyway, so any questions from our clients here? Just, feel free to raise your hand and use your best prompt engineering skills. I think I scared people away by using the words prompt engineering. Yeah, okay. We have a prompt engineer. Come on. Yes. Go for it.
Hey, so you talked about your new vector search product, you know, recently coming to market in version 8.x, and it's very early in this market. Can you just talk about sort of the competitive landscape and what differentiates your vector search versus other solutions in the market?
It's a good one.
Yeah. So, I can give you a sense of, like, what we are hearing from our customers who have played around with multiple different products and have chosen Elastic. You know, roughly, the feedback falls into four categories, and I was describing it to a few other people we met earlier today. So first and foremost, the feedback we're getting is that our vector implementation is really good. It performs well, scales incredibly well, and, you know, we participate in public benchmarks, so you should be able to go and see the data for yourself. But people are generally giving us very positive feedback on our implementation. The second piece of feedback we're getting is that a lot of the features around just the richness of context that we can offer are very differentiated.
The hybrid search functionality is incredibly valuable. You know, we implemented something called Reciprocal Rank Fusion, and with Reciprocal Rank Fusion, basically what you do is you use multiple different techniques, vector search, you know, semantic search, textual search, and use those multiple techniques to re-rank the results against each other to get the most optimized output, the most optimized search result. You know, this work, Reciprocal Rank Fusion, came out of academia. We were one of the first to implement it. Now, others are trying to catch up. But you know, when you think about context, even beyond that, being able to pass personalization information, being able to pass geolocation information is incredibly important, and our product is able to do that.
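[Editor's aside: the core of Reciprocal Rank Fusion is small — each retriever contributes 1/(k + rank) per document, and documents that several retrievers rank well rise to the top. A minimal sketch; the constant k=60 is the value commonly used in the literature, and the example retriever outputs are made up for illustration.]

```python
from collections import defaultdict

def rrf(rankings, k=60):
    """Fuse several ranked lists of doc ids via Reciprocal Rank Fusion.

    rankings: list of ranked lists, best-first, e.g. one from vector
    search and one from BM25 text search.
    """
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            # Higher-ranked docs contribute more; appearing in several
            # lists compounds the score.
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["d3", "d1", "d4"]   # hypothetical semantic/vector retriever
keyword_hits = ["d1", "d2", "d3"]  # hypothetical BM25/text retriever
fused = rrf([vector_hits, keyword_hits])
# "d1" wins: ranked 2nd and 1st, it outscores "d3" (1st and 3rd).
```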
And, you know, you can imagine the value in that, right? Like, you know, if you are based in Europe and I'm based in the U.S., and we ask the same question, you know, about your company's vacation policies, we should get different results. So the search should be, you know, augmented with context. And the third big piece of feedback that we get is just data policy, data privacy and security, which is critical. Like, we've implemented document-level permissions and things like that directly into the product, and it's an afterthought for many other vector pure plays, and that's a huge problem for enterprises. Like, no enterprise is willing to violate data privacy regulations, and we are able to adhere to that. We're able to guarantee that. And lastly, incumbency.
Look, the reality is that when you talk about search, in Elasticsearch, we've had an amazing reputation for the last decade, not only within, you know, the 20,000 customer base that we have, but also, you know, within just the broader community of users. So when you add all of that up, like, we've got a really differentiated story and one that, you know, even though customers might play around with multiple technologies, they seem to be choosing us. So we feel pretty good about our position.
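[Editor's aside: the document-level permissions point Ash makes can be sketched as a pre-filter — restrict the candidate set to documents the user is allowed to see, then score only those. Illustrative pure Python with invented documents and ACLs; in Elasticsearch itself this is handled by document-level security and filtered queries, not application code like this.]

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def permitted_knn(query, docs, acl, user, k=2):
    # Filter FIRST, then score: the user never ranks (or sees) documents
    # they lack permission for. Post-filtering a top-k list instead can
    # leak existence of hidden docs or return fewer than k results.
    visible = {d: v for d, v in docs.items() if user in acl.get(d, ())}
    scored = sorted(((cosine(query, v), d) for d, v in visible.items()),
                    reverse=True)
    return [d for _, d in scored[:k]]

docs = {"memo": [1.0, 0.0], "handbook": [0.9, 0.1], "secret": [1.0, 0.1]}
acl = {"memo": {"alice", "bob"}, "handbook": {"alice"}, "secret": {"bob"}}
results = permitted_knn([1.0, 0.0], docs, acl, user="alice")
# "secret" is similar to the query but never considered for alice.
```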
Ash, I had one final question for you. I could ask you more questions, but I know you gotta go as well. Really appreciate you coming. So, on hiring: the industry has gone through retrenchment, you know, freezes, layoffs, et cetera. You guys had your layoffs a while ago. Are you open to hiring at this point and picking up the pace of hiring?
Yeah. We are already hiring, right? So what you should expect is, we are gonna continue to hire. We're gonna hire in the places where we see the biggest opportunity for growth. So whether it's in R&D or, you know, areas of marketing, we'll continue to invest. You know, we have a strong position in generative AI. You know, we feel really good about where we are, but I see this market continuing to grow, and we wanna make sure that we are not only leading now, but we are leading, you know, three years from now, five years from now. So we're gonna invest in those areas. Also in sales capacity.
We made a very strategic decision that we're gonna service the SMB segment through self-service, but we're gonna focus our sales energy on the enterprise and commercial segments, and that's paying off incredibly well for us when you look at the larger commitments that we are seeing from customers. So we'll continue to make investments in those areas. So, you know, we are hiring, but the model is such that we feel really good about the operating leverage that we get in the model as we continue to grow. And so we feel very confident that we'll be able to both grow and drive profitability as we go forward.
Do you wanna add anything to that, Janesh? Because that's been a big thing, right? Operating leverage with the growth.
Yeah, no, Ash touched on it quite nicely, right? We expect that we'll continue to grow revenue faster than expenses and just stay wise and thoughtful in where we are making investments. We're not gonna compromise growth, but we can deliver against the operating leverage that we have inherent in the model, like we've demonstrated already.
Got it. On that note, thank you so much, guys, for coming.
Thank you.
The story sounds even more interesting than last year. We have this new vector search capability, and we wish you really, really well. We wish you more and more customers for this. And thank you once again. We have two more days of this. So
Enjoy the conference, guys.
Yeah.
Appreciate it.
Thank you so much.