Okay, perfect. Okay, we're gonna get started with our next session. So again, for anyone who I haven't met, I'm Gregg Moskowitz. I cover enterprise software for Mizuho. Today, I have the pleasure to be joined by Janesh Moorjani. Janesh has been Elastic's COO since May 2022, and its CFO since August 2017. Prior to joining Elastic, Janesh served as CFO of Infoblox, from January 2016 through August of 2017, and prior to that, held various senior leadership roles at VMware, Cisco, and PTC. Janesh, thanks so much for being here today.
Thanks for having us, Gregg.
My pleasure. So you know, many investors, no doubt, are already familiar with the Elastic story, but for those who are not, maybe just spend a couple of minutes to tell us about the company and really what your key tech differentiators are as well.
Yeah, happy to. So, Elastic was actually founded in 2012, and the main product, the Elastic Platform, was authored by our co-founder and current CTO, Shay Banon. What it is at its core is a search platform, and search is so many different things. Anytime you have large volumes of unstructured data, messy data, any kind of data really, and you have the need to ingest that and read that and return near real-time search results on it, that finds application in so many different use cases. So when we started off, initially, in our very early stages, we focused more on traditional enterprise search use cases. From there, logging became a very popular use case for us.
SIEM, some years later, ended up becoming a pretty popular use case for us. And so those are the three broad buckets on which we focus today: thinking about search, thinking about observability, and thinking about security analytics in particular. And it's been a great ride. When I joined the company back in 2017, we had just completed about $100 million in revenue, and we just finished our fiscal 2024 and came in just shy of $1.3 billion in revenue. So it's been a great growth journey for us. And I think the best is yet to come.
All right. Wonderful. Thank you for that, Janesh. And then, you know, nearly every enterprise software vendor says that GenAI will be an incremental growth driver for them.
Sure.
But I have to say, you seem to be showing it earlier than most, and maybe it's most evidenced by the fact that Elastic has amassed over 1,000 paid customers for your vector database and RAG capabilities. But what would you primarily attribute the early success to? And then also, in your view, how, if we sort of take a step back a little bit, how much more relevant is search in a GenAI world?
Yeah, and maybe I'll tackle the second part of that first, Gregg, because if you think about how people interact with their data, if you think about search the way it was traditionally done, you would have a search query, and you would get a set of search results, and it would be essentially a menu of items, and you would have to then click through and say: "Is that the right answer that I wanted? Or if not, I need to go somewhere else." And there are different forms of search as well beyond the text-based view, but you know, that's one way in which people would essentially interact with their data. And you know, what GenAI does is it changes the landscape because it gives you the answer that you're looking for relatively quickly, almost instantaneously.
In the very old days, it was like the Google I'm Feeling Lucky button, and if you pressed that, you got the summary of the answer that you were looking for. It's that kind of concept, and fundamentally, if you think about what is happening with all of these GenAI applications, they are leveraging the power of search, either to provide you with a summarized response on an existing data set or to generate content based on the context and the query that you've provided. So essentially, it's using search under the hood, effectively, to either deliver results or to autogenerate content, and that's why the two are so tightly coupled together.
And that's, I think, a big part of the reason why we've also enjoyed initial success, because if I think of Elastic as a search platform, it is a relatively ubiquitous search platform. Since our founding in 2012, we initially open-sourced the product. We still have a free and open version. The product has been downloaded more than 4 billion times. So if you think about most people in the developer community, they're typically familiar with Elasticsearch as a product. They've been using it for search in some form or another. So Elastic and search were almost synonymous with each other. Search is obviously embedded in the name of the product itself.
And so as people started to think about GenAI applications and what it takes to succeed with GenAI and thinking about how to leverage the power of search, there are certain new capabilities. People talk about vector search all the time, which we've actually natively built into the platform. But beyond just vector search, there's a whole bunch of additional capabilities and features that people need to make GenAI applications successful.
And if you think about all the capabilities that enterprises have wanted, respecting privacy postures, respecting compliance postures, respecting security postures, personalizing the data, making sure that privileges that one user may have are respected compared to privileges another user may have, considering geographic differences, all of those things that over the last 12 years we've been refining and scaling in the core platform, people get the power of all of that, and vector search essentially becomes yet another feature. And enterprises can use those to not only get better outcomes in terms of their enterprise needs, but also more cost-effective outcomes. Vector search, as you know, is a very expensive way of searching because it's very compute-intensive.
And what we're also seeing emerge are patterns where people are leveraging traditional lexical search for simple search queries that need lexical search, combined with vector search for the more advanced cases that might need, for example, semantic search, and then re-ranking those results and getting a better business outcome, but doing it a lot more cost-effectively. So I think all of those capabilities have helped. Our partnerships with hyperscalers have helped quite significantly. And in general, we play very well with other partners in the ecosystem, including not just the hyperscalers, but other providers of models, and companies like LangChain and Hugging Face and others.
Mm-hmm.
And then, of course, as I said, there's just the familiarity. Everyone's familiar with Elasticsearch, and so we tend to be invited to many of those conversations quite naturally.
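The hybrid pattern described above, combining lexical results with vector results and then re-ranking them, can be sketched with reciprocal rank fusion (RRF), one common way to merge ranked lists. This is an illustrative toy in Python, not Elastic's actual API; the document IDs and function names are invented for the example.

```python
def rrf_merge(result_lists, k=60):
    """Reciprocal rank fusion: each document scores the sum of 1/(k + rank)
    over every ranked list it appears in; higher fused score ranks first."""
    scores = {}
    for results in result_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical ranked results from a lexical (keyword) query and a vector (semantic) query.
lexical = ["doc_a", "doc_b", "doc_c"]
vector = ["doc_c", "doc_a", "doc_d"]

fused = rrf_merge([lexical, vector])
print(fused)  # doc_a and doc_c rise to the top because both retrievers returned them
```

The cost argument in the conversation maps onto this directly: the cheap lexical pass handles simple queries alone, and the compute-heavy vector pass only contributes where semantic matching adds value, with the fusion step reconciling the two.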
All right. That's great. Very, very helpful, Janesh. Thank you. You also reported a very good Q4 a couple of weeks ago, which really has, I would say, bucked the trend versus what we've seen from many other consumption models more recently. Maybe just spend a minute or two to talk about the results and what your key highlights were from that.
Yeah, happy to. So, we reported $335 million in total revenue, which was 20% year-over-year growth in our fiscal Q4. Our year ends in April, so we just wrapped up the fiscal year, and for the full year, we reported, as I mentioned, just shy of $1.3 billion in revenue, which was about 19% year-over-year growth. Our model is a mixed model. We have software that you can download, which we call self-managed software, and we also have a hosted offering that we refer to as Elastic Cloud. Cloud has been growing a lot faster for us. Cloud grew 32% in the quarter, and cloud is now 43% of our overall revenue. So it was a very good quarter overall.
In terms of all the metrics that we typically guide to, we exceeded the high end in all of those, including on the profitability side. Even the metrics that we don't formally guide to, but that we actively track as a business in terms of our customer counts and net expansion rate and so forth, we did really well. The customer counts in all the categories were really strong. The net expansion rate ticked up. And as you referenced at the start, we reported that we now have more than 1,000 overall paying customers for our vector search and RAG capabilities.
What was even more exciting within that, actually, was our cohort of customers spending more than $100,000 with us, which is a good representation of some of our larger customers. We had more than 145 customers in that category that are using us for vector search and RAG, which is a little bit more than 10% of that total, and that has more than tripled since a year ago. So it was a good quarter for us all around.
Okay, wonderful. And then on the product front, so a lot of innovation recently at Elastic. Let's begin with your most recent announcements, if you will. So Search AI Lake and Elastic Cloud's
Yep
... Serverless. What personally excites you about these?
Oh, gosh, how much time do we have?
Probably not enough.
Probably not enough. I'll try and keep it brief. When I step back and think about Serverless, there are, you know, three fundamental things that I think about that are different than the traditional Elasticsearch offering. We'll call that the hosted offering, which is the existing Elastic Cloud offering. You know, the first is, as you think about the underlying architecture, when people deploy Elastic Cloud for any particular use case or solution, you need a certain level of compute, either to ingest the data or to query the data, and you also need a certain amount of storage to store all of that data.
Fundamentally, the compute and the storage tended to be coupled together, and so as you scaled one, you tended to need to scale the other one as well. What the Serverless offering rests on is our Search AI Lake architecture, and that fundamentally decouples the compute from the storage. And so as you think about ingest or query or the storage, you can start to directly store all of your data in cheap object storage. Think about something like Amazon S3, which, as you know, now offers many nines of reliability and seems infinitely scalable. So you're not constrained anymore by the volume of data that you can store there. And having stored that, you can start to query all of it directly.
For query purposes, you can have some of that stored in the cache, so it's still blazing fast.
Mm-hmm.
You can independently choose to apply compute resources, and so what that does for us is a couple of things. You know, one is it makes it a much better experience for customers, but importantly, it unlocks a set of new use cases where historically, if there were use cases where you had very high needs for storage, or you had very high query rates but not a lot of underlying data, those would, from an economic perspective, be unreachable. Because if you had to scale your compute and storage linearly, it could drive the total cost really high, and now suddenly, the economics of those use cases become much more within reach. So it gives customers a lot more flexibility, and it unlocks more use cases for us.
I think the second piece that excites me about that is, you know, if I think about the traditional Elasticsearch hosted offering, there's still a lot of management that a user would have to do. So, you know, when you set up your initial Elasticsearch clusters, you're still thinking about some of the underlying configurations. You have to think about how you want your indexes distributed. You have to manage the sharding of that. You have to manage the scaling. It required a little bit more hands-on work, and in Serverless, that is all abstracted away from the user.
Mm-hmm.
So all of that is handled pretty seamlessly. It takes a lot of the management work away from the user and does that automatically in the product, which makes it a much better user experience and allows even greater scaling. And I think the third piece, which both of the pieces I just mentioned come together and help us enable, is if you think about the three pillars that I talked about around search and observability and security, historically, our pricing model was, and even today is, a resource-based pricing model, which is use case independent. Depending on compute and storage, and it tends to be driven a little bit more by compute, you pay a certain price.
And because we are now able to abstract these a little bit further away, it allows us to create more bespoke experiences across those solutions in terms of the user experience as they are interacting with the product, and also allows us to have pricing that is both more simplified and more aligned with what the users are typically accustomed to paying for in that particular use case. So if you're a security buyer, you might be used to buying a certain way. It aligns the pricing model more closely with that as well. So for all of those reasons, it helps us abstract the infrastructure away, it helps us create new pricing models, it unlocks new use cases. It's really exciting for us.
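The decoupling just described, durable object storage as the system of record with stateless query nodes caching hot data locally, can be sketched as a toy in Python. The class names and data are invented for illustration and are not Elastic's implementation.

```python
class ObjectStore:
    """Stand-in for cheap, durable object storage (think Amazon S3):
    effectively unlimited capacity, but each read is a network round-trip."""
    def __init__(self, objects):
        self._objects = objects

    def get(self, key):
        return self._objects[key]


class QueryNode:
    """Stateless compute node. It owns no data; it caches hot segments
    locally so repeated queries stay fast, and it can be scaled (added or
    removed) independently of how much data sits in the store."""
    def __init__(self, store):
        self.store = store
        self.cache = {}
        self.cache_hits = 0

    def read(self, key):
        if key in self.cache:
            self.cache_hits += 1
            return self.cache[key]
        value = self.store.get(key)  # cold read: fetch from object storage
        self.cache[key] = value
        return value


store = ObjectStore({"segment-1": "log data...", "segment-2": "metric data..."})
node = QueryNode(store)
node.read("segment-1")   # cold: pulled from the object store
node.read("segment-1")   # hot: served from the local cache
print(node.cache_hits)   # 1
```

The economics point in the conversation falls out of this shape: a storage-heavy workload grows only the `ObjectStore` side, while a query-heavy workload adds `QueryNode` instances, so neither forces the other to scale linearly with it.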
Yeah. That's super interesting, and also there's, I know some list pricing that's available on the site, just kind of under-
That's exactly right.
Pricing and packaging.
Yeah.
Um-
That's right. Yeah, so the pricing is on the website. We haven't GA'd the service yet. It is in public preview, and that's because it's currently on a couple of data centers with one of the hyperscalers, and we need to complete a little bit more of the build to expand to the others and,
Yes
... have a greater physical presence before we declare GA, but the pricing is on the website.
And then I also wanted to touch on Elastic's new query language, ES|QL, which I believe already has 1,000 customers and counting using it. What does ES|QL bring to the table that Elastic didn't have previously?
Yeah, ES|QL is, again, another really exciting development because, you know, one of the themes that we've talked about and experienced for quite some time in the market is this concept of platform consolidation, where users of other products want to come onto Elastic, both for technical innovation as well as from the standpoint of cost. One of the barriers to that, historically, has been their familiarity with other query languages and the popularity of other query languages. And what ES|QL does is it essentially takes that away, because it's a very natural query language. It's a piped query language that developers are very easily able to construct queries in, because it's very similar to what they were used to earlier.
And if I think about migrations from other platforms, one of the reasons why, you know, people might have been a little bit more hesitant to migrate is you can migrate the data, you can migrate existing queries, you can throw some bodies at it to do that if needed, but what about new queries that you need to write? So training the existing operators to say, "How do you work with a new product?" was always a barrier, and ES|QL largely removes that barrier, because it's very close to what they were previously used to. By the way, connecting that back to large language models, large language models are very, very effective at taking the input from one of those query languages and translating it to ES|QL. And so that makes it even more seamless.
So that's one of the things that has helped us unblock ourselves a little bit even further in the SIEM use case in particular.
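To make the "piped query language" idea concrete, here is a toy Python pipeline that mirrors the shape of a piped query, where each stage consumes the previous stage's rows. The query sketched in the comment is indicative of the piped style (filter, aggregate, sort), not a guaranteed-exact ES|QL statement, and the log data is invented.

```python
from collections import Counter

# Toy rows standing in for an index of web server logs.
logs = [
    {"host": "web-1", "status": 500},
    {"host": "web-2", "status": 200},
    {"host": "web-1", "status": 500},
    {"host": "web-3", "status": 500},
]

# Each line below is one pipe stage, roughly in the spirit of:
#   FROM logs | WHERE status == 500 | STATS count = COUNT(*) BY host | SORT count DESC
errors = (row for row in logs if row["status"] == 500)   # WHERE: keep error rows
counts = Counter(row["host"] for row in errors)          # STATS ... BY: count per host
ranked = sorted(counts.items(), key=lambda kv: -kv[1])   # SORT ... DESC: busiest first
print(ranked)  # [('web-1', 2), ('web-3', 1)]
```

The appeal described in the conversation is exactly this left-to-right, stage-by-stage reading order, which is familiar to anyone coming from other piped query languages or Unix-style pipelines.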
Excellent. Thank you. Then a two-part question on generative AI. So you have ESRE, you have the sparse encoder model, you have your observability and security AI assistants as well. And I know it's early days, but how is the uptake of these progressing relative to what you may have expected? And then the second part of the question is, you know, Janesh, where do you think incremental budget is coming from at this stage to pay for GenAI? Is it coming from other areas in IT, or is it largely incremental spend, do you think?
Yeah, I think there's a couple of pieces. So in terms of the adoption relative to our expectations, we mentioned the statistics on the customer counts. Maybe I'll just clarify that those are customer counts only in Elastic Cloud.
Yes.
We also have a lot of self-managed customers that are using us for vector search and for RAG use cases. We know that because we interact with them and we're helping them, but we don't get product telemetry from them, so we've left that out of the customer count for now. So this is just a cloud-specific statistic. Similarly, if I think about the AI assistants for observability and for security, those tend to be features similar to copilots, if you will, that help make an operator much more effective. And one of the things that that does is it helps us make our existing product set more competitive in the marketplace, which is fantastic.
But one of the other pieces on security, and we actually announced this at RSA recently, is: How do you extend even beyond copilots? And maybe the example I'll use is of a SOC operator, and everyone knows that they suffer from alert fatigue. Every morning when they come into work, there's hundreds, if not thousands, of alerts that they may need to respond to, and most of them tend to be false positives. And so how do you get that SOC operator to be even more effective and focus on what really matters and find what really matters? And that's a great use case for us. And so what we previewed at RSA was shifting the focus from alerts to the actual attacks,
Mm
... with a capability that we call Attack Discovery. And what we demoed was we actually ingested a bunch of data into our SIEM from third-party endpoints, and married that up against a third-party large language model, and narrowed down that large list of alerts to a handful of attacks that really mattered, that were underway at that point in time, and that the user could then go focus on. So there was so much excitement and engagement around that. That's obviously still early. So I think the initial engagement, the initial adoption for us has been, has been really strong. We're really excited about what that holds for the future. In terms of where the budgets are coming from, Gregg, the other part of your question, so I think what we're seeing is a couple of things.
Number one, people are shifting budgets towards GenAI because all of these applications need money to develop, they need money to test and scale, and so forth. Where the budget is coming from, I think, is a couple of things. In some instances, people might be shifting the composition of their spending, and to the extent they are, the whole platform consolidation theme ties in very nicely to that.
Yeah.
Because they are saying: We want to get off expensive incumbent vendors, and we want to move to Elastic, and that actually plays very nicely to our advantage. I think the other thing we also sometimes see is, very often the initial use cases for the adoption of GenAI are more internally oriented use cases. Things like customer support or call center use cases, or even in financial services, wealth management use cases. Many of those use cases are predicated on cost savings that the customer can drive from other sources. And we do the same: internally within Elastic, we are trying out a GenAI-related use case for our own support, and, you know, essentially, that means our support organization has to hire fewer people.
Yes.
And so it becomes self-paying from that standpoint. Now, different organizations may have different pools of budgets that they move around between IT and non-IT, or sometimes it's all one budget and what have you. So it varies organizationally, but those are some of the things we are seeing.
Okay, very helpful. And then, let's talk about the competitive environment if we can, because at least if I take the SIEM space, Janesh, I mean, there's just been an enormous amount of change recently. You've got, Cisco, of course, buying Splunk, you have Exabeam and LogRhythm merging.
Yeah.
You have Palo Alto buying the QRadar assets from IBM for their XSIAM business. What does all of this mean, do you think, for Elastic and just for the competitive landscape and dynamics overall?
In one word: opportunity. And the way I think about it is, maybe I'll start with, you know, for example, Cisco and Splunk. It's been less about the acquisition itself, because we've seen the theme of platform consolidation in general, whether it's from any of the incumbent vendors, playing in our favor. We've seen that theme playing out for many, many quarters now, I think ever since the macro started to turn a little bit south back in the mid- to late-2022 timeframe. So quarter after quarter, for the past six, seven quarters, we've been seeing that trend in our favor, and much more than we historically used to see it. And I think that's continued over the past several quarters, and I think that will continue going ahead.
'Cause fundamentally, when people look at both the back end and what we can provide for them in SIEM, from the standpoint of the outcomes that we deliver with the technology and the savings in cost, that's pretty compelling to them. We talked a little bit about some of the barriers to why people stay with existing technology. We talked about it in the context of ES|QL. One of the other barriers tends to be just inertia, right?
Yeah.
Something's there, it's working. I know I may need to change it at some point in time, but I've got other priorities right now. When you have some of the kinds of transactions that you mentioned, it puts customers on notice that they are going to have to go through a forced migration. And so suddenly, that barrier also goes away, and when you have customers going through a forced migration, there's money in motion, which means it's opportunity for us.
Do you guys have any competitive displacement programs that are targeted towards one or more of these vendors?
Yeah. We're actively doing that, for example, with our sales kickoff as we started our fiscal year. Mark Dodds, who's our new Chief Revenue Officer, comes from a deep enterprise selling background and had spent many, many years at Cisco before joining Elastic. He brings amazing clarity and focus to the field sales organization. We focused the field on a couple of important plays, and with respect to platform consolidation, it starts with a simple question, saying: Who are you using, and how can we help you? Because with any of these existing incumbents, either there's cost pressure or there's end-of-sale, end-of-life pressure, there are reasons that customers want to move, and so we have active campaigns on that front.
Okay, terrific. I'm glad you mentioned Mark, 'cause obviously, that was an important hire for Elastic. Any other focus areas under Mark's leadership that you would call out?
You know, I would describe it as evolutionary, not revolutionary.
Mm-hmm.
No fundamental change in the strategy. I think the strategy has been working quite nicely for us for the past several quarters, as everyone may have seen from the results. We did some of the normal evolution that you would have in any field sales organization at the start of the year. We talked a little bit about the platform consolidation theme. The other theme that he focused folks on was everything related to generative AI and the opportunity there.
Mm-hmm.
And there, it starts with a simple question to a customer: What vector search database are you using? And from there, that leads to an amazing set of discovery questions because everyone's thinking about this, and so it became really powerful for our field sales organization. And beyond that, I think it's gonna be continuing to build on some of the longer-term capabilities that we know we need to build in the field sales organization.
Yeah, and I have to say that's, you know, kind of a real terrific angle. It actually helped elucidate part of the reason why Elastic is being so successful in that area.
Yeah.
'Cause you could easily imagine how that's gonna, again, spark really in-depth conversations: How do we add value in this strategic area? Who am I gonna use, and how?
That's exactly right.
So obviously, right there for that.
Yeah.
All right. Wonderful. One more question before I pause to see if there are any questions in the audience. Let's just turn back to the financials for a moment. So, for the year, I know you've guided, of course, to 16% top-line growth. Your operating margin guidance at the midpoint is 12%. Can you give us a sense, Janesh, of how you're thinking about balancing incremental R&D investments, as well as sales and marketing overall, including sales capacity?
Yeah, happy to. So, you know, the way we approached our model for this year, as you know, Gregg, you've covered software companies for a long time. There's incredible operating leverage inherent in software business models. As a company grows and adds top-line revenue, you don't need the same dollars to go into the investment line. You need to invest less to drive more revenue, which naturally means you can let some of that flow through to the bottom line as operating margin expansion. So we make a conscious choice about how much of that we wanna reinvest back in the business. On GenAI, in particular, we have deep conviction about the long-term opportunity in GenAI. I do think it is a long-term opportunity.
Despite all the successes that we've talked about, the revenue models are ones where customers first take time to build the applications, then they deploy the applications, they build the integrations, the applications scale, data starts to come in, the usage happens, and then the usage follows an S-curve. So the revenue tends to be more back-end loaded.
Mm-hmm.
But you'll never get that revenue in the future if you're not designed into the application today itself, or into the stack today itself, I should say. And so that's why those customer metrics are really important to us, and based on that initial momentum on the customer metrics, we've got so much conviction about the longer term. That doesn't play out into revenue yet. It didn't play out into revenue in a meaningful way in fiscal 2024, and we're not banking on a lot of it in fiscal 2025, but we are investing towards that because it does take that upfront investment. So, the path we chose to follow for fiscal 2025 was to take some of that natural operating leverage and reinvest it back in the business, which is why we're committing only one point of margin expansion this year.
We also deeply believe that over time, as we continue to grow the revenue, that leverage is strong enough that we can sustain future investments and expand operating margin even further in future years. And for whatever reason, if we don't see the growth, or things change in the external environment and, you know, things are a lot worse for any reason, we've shown before that we have the discipline to take cost actions where necessary to focus on the right outcomes for shareholders. So it is a balancing act for us, but one that we consciously choose in terms of where we wanna be on that spectrum.
Okay, very helpful. Any questions? Yes. Oh, sorry, there's a mic coming over.
Thank you. Just quickly, two questions. The first one is vector search versus vector database. Are the customers tied to the database, or can they plug in a different vector database to use your vector search? What is the, you know, connect, disconnect?
Yeah, I think in general, in the industry, you'll find people use the term almost interchangeably.
Yeah.
A database in its most pristine definition will have certain technical requirements around ACID compliance and so forth. Otherwise, it's a data store. Elastic technically is a data store, but you'll see us ranked very highly in DB-Engines as one of the most popular databases in the world. But we don't store transactional data. We don't want to be in the transactional database business competing with other database vendors. That's not where we play. If you think about most customers' data, it's either transactional data, say, sitting in a traditional database, or it could be data that is used more for analytical purposes, sitting in data stores. There you have companies like Snowflake and Databricks and others.
Or you have just deeply messy, unstructured data that is used for use cases like ours, that are core operational use cases.
Okay.
And that's where we tend to shine, and that's where a lot of our customers are using our capabilities. But fundamentally, I think in the industry, people tend to use those terms a little bit interchangeably.
Yeah. Thank you for that. And just industry-wide, we are all wondering about the hockey stick. Everybody's comparing semiconductor hardware with software. This is like a million, or I don't know, a billion-dollar question: is the revenue going to trickle in? Because this year I'm hearing only 1% of the workloads are AI. So in your view, and I know every time it's different, is it going to really trickle in? Like, what do enterprises need to see to actually go all in? Or is it going to be like a two-year, three-year, five-year proof of concept, and slowly they'll do production? Like, how should we be thinking about it?
Yeah, you know, my own belief is that like other major technology transitions we've seen in our lifetimes, GenAI will eventually be much, much bigger than most people expect today, but it'll also take a lot longer to play out. That's been the history. If you look at significant transitions in our lifetimes, that's what you'll have seen. I think that's how it plays out in GenAI, too. In terms of enterprise adoption and what's inhibiting it, I think we also have to recognize that not all AI is equal. There are AI-related use cases where you're trying to draw conclusions and inferences based on existing datasets and presenting, for example, summary results.
And, you know, a good example of that might be the way a consumer might interact with, say, a ChatGPT, and you say, "Write me a 100-word congratulatory note to an employee that just got promoted." You know, that's generative in nature, but you can restrict it to certain datasets. Or if you, for example, say, "Summarize my company's compensation policy for me," it's summarizing it based on an existing dataset. And then you can have other applications that generate a lot more content or different types of content, and that's where companies tend to be a little bit more careful with things like hallucinations and so forth. Interestingly, with RAG, which is where we help our customers, RAG helps control some of that, to a degree.
So that's another piece where customers leverage RAG and pick Elastic for that. But I think that's part of it. As models continue to get better, models will get smarter. They might even get smaller. I think you'll see some of those factors naturally start to play out, and that's probably also why some of these initial use cases that tend to be a little bit more popular today are almost human-supervised use cases, where the output is being presented to an employee internally before being shared with a customer directly. I think eventually, you know, they will be shared directly, but how long that takes depends on how some of these other factors are addressed.
From our standpoint, again, the activity levels and the engagement levels we've seen with our customers, the success we've seen with actual paying customers, and the engagements we have just give us lots of conviction about the long-term opportunity that we see in front of us.
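The RAG pattern discussed above, grounding the model's answer in retrieved documents to help control hallucination, can be sketched minimally. The keyword-overlap retrieval and prompt format here are simplified assumptions for illustration, not a specific product's pipeline; a real system would use lexical and/or vector search for the retrieval step.

```python
def retrieve(query, documents, top_k=2):
    """Naive keyword-overlap retrieval standing in for a real search engine."""
    q_terms = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_terms & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_rag_prompt(query, documents):
    """Assemble the grounded prompt: the model is instructed to answer only
    from the retrieved context, which is what curbs hallucination."""
    context = "\n".join(retrieve(query, documents))
    return ("Answer using ONLY the context below.\n"
            f"Context:\n{context}\n"
            f"Question: {query}")

docs = [
    "The compensation policy grants annual reviews each April.",
    "The vacation policy allows 20 days per year.",
    "Office hours are 9 to 5.",
]
prompt = build_rag_prompt("Summarize the compensation policy", docs)
print(prompt.splitlines()[0])  # Answer using ONLY the context below.
```

The "summarize my company's compensation policy" example earlier in the conversation follows this exact shape: the answer is constrained to an existing dataset that retrieval selects, rather than generated from the model's open-ended knowledge.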
All right. Well, unfortunately, we could talk probably a lot longer, but we are out of time, so we have to wrap it up there. But Janesh, thank you for a terrific conversation. Appreciate it.
Thank you, Gregg. Thanks again for hosting us.