
Goldman Sachs Communacopia + Technology Conference 2024

Sep 9, 2024

Moderator

Okay, sure. Okay, fire away. Whatever you guys want. Are we on? Are we on? We are on. A real delight to be able to host MongoDB. I think it's been three years in a row that we've had the pleasure of hosting you guys. CEO Dev Ittycheria, Michael Gordon, CFO, and I think you're COO as well, right? Yeah, I gotta keep up with the promotion that happened quite a while ago, if I'm not mistaken.

Dev Ittycheria
CEO, MongoDB

Yeah.

Moderator

Thank you. And I'm joined by my colleague, Matt Martino, here. Welcome to Goldman Sachs Communacopia & Technology Conference 2024. I think you heard the stats: 2,900-plus registrants, up from last year. The number of companies is up from last year, and there were 24,000 meeting requests. Insane. I will not tell you what the fill ratio is. It was not that high. So a lot of demand to see companies like you. Thank you for joining us. Dev, you've been CEO for 10 years now. I know it's hard to look out five years, but what are your aspirations for the company in five years? Where do you want MongoDB to be? When you come back to Communacopia & Technology 2029, what is this company gonna look like?

Dev Ittycheria
CEO, MongoDB

Yeah. So, thank you for having us. It's great to be here. I'll start by just reflecting on our 10-year journey and what we tried to accomplish. When I joined, we were doing roughly $35-ish million in revenue, and we had about 350 employees, and our job at the time was to prove that MongoDB could be trusted for mission-critical workloads. Because we were viewed as a cool, fancy new toy, but everyone said: "I'm not sure, you know, where and when I can really use MongoDB." Obviously, we knocked that off. Then we launched a cloud service, and there was a lot of skepticism — this was in 2016 — could we actually compete with the hyperscalers?

We were actually trying to partner and compete. People were saying, like, "Why wouldn't Amazon eat you for lunch?" Obviously, our growth since 2016 has shown that we actually, you know, built a pretty substantial business over that period of time. Then the third thing that we've tried to do is address a broader set of use cases, and so that was all about our platform and enabling customers to run more use cases on MongoDB versus just being an OLTP engine. We launched Search, Vector Search, Time Series, and a bunch of other capabilities that people use us for today. That's where we are today. Over the next five years, we're going after a very large market. The TAM is, you know, enormous, and there's really kind of three priorities for us.

One is moving more upmarket, because we think that's where the best returns are — we've seen that in terms of the returns from the different channels — and I also believe that AI will be seen more at the high end of the market. Unlike other platform shifts, you'll see the AI workloads emerge at the high end of the market versus the low end of the market. The second thing is that we think there's a unique opportunity, given the sophistication of these code generation tools, to really reduce the cost, the time, and the risk of migrating these legacy applications. Customers are saddled with thousands and thousands of legacy apps. The technical debt is very high. The cost to run and manage the apps is very high.

There's end-of-life issues, and frankly, people also want to AI-enable these applications. And so there's a confluence of factors that are getting customers to really be much more receptive to modernizing these applications now, and we're seeing a lot of interest on that front. And the third step would be for us to really become a core ingredient of the future AI tech stack. And we think that architecturally, we are well positioned to do that for inference workloads, and that's an important distinction. Because for inference, for AI workloads, there's even more requirement to be able to query and manage complex, rich data structures, which is what we are designed to do.

You need a lot of flexibility and agility in your schema, as data is always changing, and you need the performance and scale of a natively distributed system, which is what MongoDB is. So from that point of view, that's kind of how we think of ourselves going forward.

Moderator

I thought you were gonna give a $10 billion long-term aspirational thing, not... And Michael's looking at us, like, "No, that's not gonna happen."

Michael Gordon
CFO, MongoDB

We tend to speak quietly and carry big sticks.

Moderator

I like that. I like that. I forgot to mention, but, Dev, I met you in 2006 or 2007 when we worked on your first company IPO.

Dev Ittycheria
CEO, MongoDB

That's correct.

Moderator

Back then, I distinctly remember how you explained provisioning technology. Clarity of thought and clarity of being able to explain the strategic vision have always been a very strong thing with you. Michael, shifting over to you. We're already starting to look at calendar 2025. When you talk to customers, what's on their mind as far as IT priorities, budgets?

Michael Gordon
CFO, MongoDB

Yeah. So, a few different thoughts. Everyone is focused on AI, and to Dev's point, trying to figure out how to get value out of AI, right?

But there's also a lot of care and feeding that's sort of the normal course business, you know, has to take. And so, one of the beauties of being a relatively small share player in a very large market is we're not particularly dependent on the underlying, you know, IT trends and spend environment. We've been able to successfully win, you know, kind of new business, regardless of the macroeconomic conditions, regardless of IT budgets. And so, you know, I don't see any reason why that should change. The image that I often use for people is, if you think about, you know, the ocean, there may be a lot of froth, you know, at the top. There may be choppiness in terms of the waters.

It may be high tide, it may be low tide, but that doesn't matter if you're at the bottom and you're a 2%, you know, share player in a market measured in the many tens of billions of dollars. And so I do think that people are focused on getting value. I do think people are focused on driving efficiency. I do think incrementally, people are looking to turn the AI opportunity into impact within their own organizations, and so it creates a lot of exciting opportunities for us. But again, given, you know, our relative size, the underlying IT spending environment isn't a hugely important factor for our ability to succeed.

Moderator

Got it... It looks like the AI thing is real. Your customers are talking about it, you're talking about it, we're talking about it. Yeah, okay, just making sure, you know. We lived with dotcom bust, man. I mean, it was not that good. So there's a natural skepticism. Is this thing really gonna work? And it takes a lot to-

Michael Gordon
CFO, MongoDB

It'll take time, but-

Moderator

Yeah. Yeah. So on that 2% share, Dev, you've got 50% logo share, which is incredible — Fortune 500, including Goldman; we use your product — but only 2% share of the database spend. What could you do to unlock greater share? And in particular — maybe it's a logical part of that question — I think the strategic accounts program is an important lever to be able to unlock that growth. Where are you today in that program?

Dev Ittycheria
CEO, MongoDB

Yeah, so it's important for investors to understand that, unlike other businesses — say, a Salesforce or Workday, where it tends to be a top-down decision, where you make that decision and everyone standardizes on either your sales force automation platform or your HRIS platform, because it's not like marketing will use one HRIS platform and engineering will use another; that makes no sense — our business is very different. The unit of competition is the workload. Like, you know, you're building a new app or you're considering replatforming an existing app, and the development team and the architects need to think through, "Okay, what is the, you know, new tech stack I'm gonna use to build that application?" So we're always trying to...

Even though, you know, a customer like Goldman may be a customer, we still have to go win that next new workload or that next new use case that they're planning to deploy. And so we win business workload by workload. So in some ways, you know, we don't get the big bang of, like, everyone standardizing and getting some large multi-million dollar deal up front. Our new business, the workloads, start small, but as you acquire more and more of them and they start growing, that's when the growth really starts kicking in. And so that's kind of the buying behavior dynamics of our business. In terms of what we've called out as our strategic accounts, what we have seen is that there have been accounts that have grown over time, where they've started to...

You know, we negotiated the, you know, the services agreement, the cloud services agreement. There's a strong champion inside the account. We said, "Why don't we deploy more resources and see what kind of returns we get?" And the returns have been, you know, very, very strong. And so we are now trying to repeat that same motion in more and more accounts. And, you know, another financial services firm — not Goldman — literally, we did this with them about 18 months ago, and they were probably a low-seven-figure spend, and now they're close to $20 million a year in spending with us, in 18 months. So that kind of...

I'm not saying that every account grows that way, but those are the dynamics of these accounts, where once you penetrate and bring a lot of resources to bear, you can grow your share of those workloads and grow your business in those accounts very, very quickly. And so that, we're trying to do that now, not just in North America, we're doing that in Europe and also other parts of the world, where once we see an account at the tipping point, we'll bring a lot of resources to bear, and that has a disproportionate return on investment. That's an important, important thread for us going forward.

What we're seeing, actually, is that app modernization using AI is causing more and more accounts to be kind of put in that bucket, because, you know, when we see the legacy estate that these accounts have, and their real high interest in kind of working with us, we say, "Wow, these accounts could end up being the strategic accounts in the next six, twelve, eighteen months."

Michael Gordon
CFO, MongoDB

I think Dev's tipping point concept is really important for investors to understand, because a lot of times we do meetings and the investors say: "You seem to be having a lot of success in these strategic accounts. Why don't you just have more of them?" Right? But they're not successful because we call them strategic accounts, right? They're successful because the necessary preconditions are present. We've learned and experimented and iterated to get more, you know, intelligent and better from a capital allocation standpoint about that. And so to Dev's point, the goal is to try and get more and more of those accounts kind of ready or ripe, or however you want to think about it, so they are closer to that tipping point.

When we do make that incremental investment, we can get, you know, appropriately, you know, rewarded for that. But just simply calling an account a strategic account doesn't actually suddenly make it more productive. And so I think that's something that investors sometimes, you know, don't understand what happens underneath the surface.

Moderator

I wonder if there's some way you can feed this into a GenAI prompt and say, "These are the characteristics that made this a strategic account, so how can we harness the non-strategic accounts to become-"

Dev Ittycheria
CEO, MongoDB

No, so we actually-

Moderator

-blown?

Dev Ittycheria
CEO, MongoDB

Have a process where we think we can tell which accounts aspire to be strategic accounts. We say, "Okay," you know — because we obviously are forecasting where these accounts will be over the next, say, 12, 18, 24 months — so we can start seeing the signs that this account, in, you know, this geography, could be a potential strategic account, but here are the signs and here's the progress we wanna make for us to deploy even more resources.

Moderator

I hadn't planned on asking this, but maybe it's a natural question: Do you deploy GenAI inside MongoDB to get efficiency at certain things?

Michael Gordon
CFO, MongoDB

Yeah, we do. Like many, we've been eager to test and experiment. We've had a fair amount of success internally, a lot with sort of making people who are customer-facing more context-aware, so case summarization, sentiment, things like that. We recently rolled out a next iteration of some of our kind of customer-facing knowledge base with generative answers in that, and that further helped them kind of self-serve and not have to open up, you know, underlying support tickets in a way that's actually been super helpful and obviously leads to both higher customer satisfaction but also greater efficiency in terms of how we serve those customers. So, absolutely.

Moderator

Okay.

So Dev, maybe bring the discussion to, more recent trends. You know, the hyperscalers have put up solid numbers in recent quarters, and it appears, to some extent, their database offerings are benefiting from the push towards generative AI. So we'd love to get your thoughts on whether, you know, this comes at the expense of MongoDB, and if not, you know, why?

Dev Ittycheria
CEO, MongoDB

Yes. So it's important, again, for investors to understand there's two dynamics in our business. There's the macro dynamic, which you see in the consumption of the existing workloads for Atlas, because that's now 71% of our revenue, and then there's new business. Our new business — except for Q1, where we, you know, had some operational issues that we've since resolved — has always been quite strong. So I think it's a function of, one, a big market, a very compelling value proposition, and the fact that deals start small, so you're not dealing with a lot of resistance to spending, like making a seven-figure expenditure. So our new business has been quite strong.

Our win rates against the hyperscalers have been very high, and candidly, the hyperscalers see that, and they also partner with us. Our relationships with Amazon, Azure, and Google have never been better. And, you know, we're all adults. We know that there are times when we partner and times when we compete. And frankly, all three of them also joined our accelerator program, our MAAP program, to really help customers get comfortable, you know, with reference architectures and proof points and integrations to get started quickly with AI. So the short answer is no, we're not seeing any impact from the hyperscalers.

Moderator

And Michael, Dev made reference to, you know, the strength of the new business, and I think that's been a really palpable part of the MongoDB story, you know, through kind of a weaker macro environment. So if we just focus on the expansion piece of the growth algorithm, like, what's happening underneath the hood from a consumption perspective, whether it's the digital natives or the enterprise opportunity? Could you speak to that?

Michael Gordon
CFO, MongoDB

Sure. The couple of times that we've tried to call out the macroeconomic environment, what we're really describing is the underlying database activity — so think sort of reads, writes, transactions at the database layer. One of the benefits of being a general-purpose database is we've got, you know, use cases across industries, across geographies. And what we see when we call out the macroeconomic effect, as we did in the Q1 call, is less growth — slower growth — in the underlying reads and writes at the database layer. And we saw that in a broad-based way. So to your question, it wasn't some particular impact of a geography, or, you know, of digital natives, or anything. It was really quite broad-based, across the board.

We saw that again in Q2 — and we're pleased with the results year-over-year. We think about consumption growth as week-over-week growth in consumption. Consumption is the actual, you know, what we get paid, but it very closely mirrors the underlying usage. What we saw is that it grew slightly slower on a year-over-year basis, sort of pointing to this slower macro environment that we talked about, you know, in Q1. That said, we did slightly beat our own sort of forecasts for Atlas in Q2, but as we described, it was really kind of within a reasonable range of alternatives, so there's nothing that would suggest the macro is getting materially better or materially worse, kind of quarter to quarter.

Moderator

Very helpful. And maybe just kind of shifting the discussion a little bit to EA versus Atlas, right? It seems like EA continues to perform incredibly well. You talked about, you know, really solid pipeline into the back half of the year. So could you just talk about those customers that continue to commit to Enterprise Advanced versus the Atlas opportunity — Dev or Michael?

Dev Ittycheria
CEO, MongoDB

Yeah, maybe I'll start, and you can... So, Enterprise Advanced, just again for people new to the company, is essentially a self-managed product. You get our software and you provision, configure, and manage MongoDB yourself. And our belief was that once people got more and more comfortable with Atlas, there'd be fewer and fewer people who wanted to do that themselves. Because part of the argument for Atlas is there's lots of undifferentiated labor in terms of all that management you have to do when self-managing a deployment of MongoDB. What surprised us is the staying power of EA. And what's become clear is that people, especially at the high end of the market, want a hybrid world.

There are some workloads that run on-prem, some workloads that run in the cloud, some workloads that may move to the cloud and come back — or vice versa, start on-prem and then move to the cloud. And so, given that, we have changed our posture, and now we're investing more in EA, first through our community product, which is a free version, where we'll soon roll out vector search and search, because it also helps developers who are starting with the community product to just get started faster on those newer capabilities. And then that will get folded in, probably a year later, into EA. So you can see us invest more in EA.

EA today is still more of an upsell to existing customers than a driver of net new customers, but that may change, you know, over the next couple of years as we invest more in EA. But again, the long and short of it is we've been pleasantly surprised by the staying power of EA, and that was something five years ago I would not have predicted.

Moderator

Model transitions are always very hard to pin down, unless you block the product that you want to sell less of — put it in maintenance mode, no more enhancements, in sunset mode. And that's not something you would want to do with a really, really good product. Shifting to rate cuts: we just hosted Jan Hatzius, our Chief Economist, and we talked about twenty-five basis points, fifty basis points maybe less probable. But how important is that as you look at your business? You've been through multiple cycles — could this be a tailwind or not? What are your thoughts?

Michael Gordon
CFO, MongoDB

So a few different, you know, reactions. I think, to the earlier comment, if you think about our business and the ways in which the macroeconomy affects it, there are two possible ways. One is in terms of winning new business, and the other is the underlying consumption growth in existing workloads. To Dev's earlier comments, we've been very effective, despite different macroeconomic environments, at winning new business. And so I don't think it's particularly a necessary condition or a particular tailwind there. We have a very large market. We have a great product. We have a small share. We have an excellent team. So that all adds up to kind of quarter-in, quarter-out, you know, pretty steady execution.

In terms of the consumption growth of existing workloads, certainly a, you know, better macroeconomic environment should help with the underlying, you know, reads and writes. As much as we've tried, you know, I can't walk you through the five, you know, macroeconomic factors and their respective coefficients in terms of what that's gonna predictively mean for underlying Atlas usage. I think the last thing that I'd say — and I am not an economist — is, you know, I think that part of the reason why the Fed is talking about cutting is to avoid a more negative environment. And so I guess the absence of a negative would be a positive, but it's not clearly, you know, more positive than where we are today.

It's just sort of trying to prevent a future mess.

Moderator

Yeah. In fact, Jan said that, a year-plus out, he expects rates to be three hundred and twenty-five basis points, which is a big reversal from where we have been. So if he's right, that's hopefully more optimistic. Dev, you've been through multiple tech cycles. Is GenAI overhyped or real? My voice is louder this time. Did somebody crank up the volume? Thank you.

Dev Ittycheria
CEO, MongoDB

It's the AI overlord. What I would say is — so that's Kash's way of saying, "Dev, you're old, so you've seen this. You've seen some of these platform shifts before."

Moderator

So am I.

Dev Ittycheria
CEO, MongoDB

And so, I do believe that AI is not a question of if, but when. And actually, I was having this conversation in the last meeting and said, "I view the world we're in today as circa nineteen ninety-six, maybe nineteen ninety-seven." You know, Netscape was launched a couple of years earlier. People all got excited about the web, but the web was still very basic static web pages, and it wasn't that, you know, it wasn't that interesting. People were starting to build businesses on the internet, maybe Amazon and a few others, eBay, but you weren't seeing this plethora of kind of companies exploding on the market. And I think in some ways, we're kind of at the same stage in the AI era.

I think there's that old saying: "People always tend to overestimate the impact of a new platform or technology in the short term, but underestimate it in the long term." When I look at the use cases, I typically see three patterns of use cases with customers today. One is, you know, chatbots — we all know them; you talked about that. I think what you're gonna see is that the next version of chatbots will be embedding real-time information, right? So if you're working with a chatbot — say, your financial institution is now using a chatbot to interact with you — you need, as the institution, to know the last transaction that person did. So you need to embed real-time data. You need to have, you know, real-time awareness of what that customer is doing.

You know, if they filed a ticket, if they executed a transaction, if they've tried to place an order and something has gone awry, you need that real-time visibility. I think you're gonna see chatbots get more and more sophisticated, because you just can't use legacy data — or old, stale data — to make that work. The second thing I see is research and summarization. The context windows are getting bigger and bigger, so you get essentially more memory. So you can, you know, essentially push more information through these LLMs, and by definition, you can work with larger and larger data sets. So I think at some point, you know, the memory factors of LLMs will make them much more interesting, and frankly, that will also increase the switching costs to go between LLMs.
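[Editor's note: the "real-time chatbot" pattern described above can be sketched in a few lines. This is a hypothetical illustration — the function, records, and field names are invented, and in production the recent activity would come from a live operational query rather than a hardcoded list:]

```python
def build_prompt(question, recent_events):
    """Fold the customer's latest transactions/tickets into the LLM prompt,
    so the model reasons over fresh data instead of stale history."""
    context_lines = [
        f"- {e['type']} at {e['time']}: {e['detail']}" for e in recent_events
    ]
    return (
        "You are a support assistant. Recent customer activity:\n"
        + "\n".join(context_lines)
        + f"\n\nCustomer question: {question}"
    )

# Invented sample data standing in for a live query of the customer's
# last few operational records (orders, tickets, transactions).
events = [
    {"type": "order", "time": "09:14", "detail": "placed order #123"},
    {"type": "ticket", "time": "09:20", "detail": "reported order not confirmed"},
]
prompt = build_prompt("Where is my order?", events)
```

The point is only that the prompt is assembled at request time from operational data, which is why latency and freshness at the database layer matter for this use case.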

Because once people understand who you are, what you've done, and the history of your engagement with them, all of a sudden, you may not want to lose that history to go from, say, OpenAI to Anthropic, to Llama, et cetera. And then the third area is around automation. What we're seeing is that, obviously, everyone's talking about agentic workflows — having agents and multi-hop agents do a lot of work. I think that's still on the come. But I see a lot of people looking to build much more sophisticated solutions around this area. And I think you're gonna see all that again — you know, there's gonna be fits and starts. The key factor for all this is gonna be the research breakthroughs for the LLMs.

And I think a lot of people are saying, "You know, why aren't we seeing more apps in production? Why aren't we seeing more impact? Why aren't we seeing more returns from these investments?" I think it's like an iterative process, where as the, you know, the research teams at these firms, you know, get these breakthroughs. You could argue, GPT-5 was rumored to be out in the spring of this year, and why is it still not out? You know, a lot of people say it's because actually Blackwell was late. And so, you know, part of the research breakthroughs is also reliant on the compute architecture. And so, I think this is all iterative. As you kind of go through those kind of iterations, you're gonna start seeing the breakthroughs come, and that's when the opportunity arises.

For MongoDB, a lot of people, you know, we don't get the quote, unquote, "buzz factor" like a Databricks or Snowflake does, maybe more Databricks today than Snowflake, because we are not in the world of training. Where we come in is in the world of inference. So we will be a beneficiary as these, as these LLMs become better, they become lower latency, lower cost, you know, better accuracy, less hallucinations, and people get more and more comfortable, then they're gonna deploy inference workloads. That's where we come in. OLAP systems are not designed for inference. OLTP systems are, and we think we're well positioned for inference because of the complex, rich data structures that we enable developers to manage and query. So we are looking forward to...

All this money being spent on LLMs to us is a good sign because people will need a return, the research breakthroughs will come, and ultimately, those applications will start getting deployed in production, and we think we have a chance to win more than our fair share.

Moderator

Dev, what does that inference world look like, if you can imagine that?

Dev Ittycheria
CEO, MongoDB

Yeah. So, I think my belief is that if you're just building a thin wrapper on an LLM, your days are numbered, and we've seen some of those companies fall apart already, right? I think where you can build a meaningful, you know, AI application is where the LLM is an enabling technology — not the key technology, but an enabling technology — and you marry that with some unique or novel data sets that you particularly have or have insights on, and you embed that with deep workflow into your business process. One example of that — and it's not a perfect example — is Perplexity. You could argue Perplexity did not create their own LLM. They, you know, wrapped their search functionality around both the LLMs and different data sets, and so you can pick, obviously, different data sets based on the search that you want. And again, it's not a perfect example, but it's one example of what a future AI workload or application will look like.

Moderator

Dev, sticking with that thread on real-time information and inferencing workloads, it sounds like, you know, MongoDB is making a big bet on kind of the RAG architecture as kind of the primary enabling technology for a lot of these applications. So can you maybe talk about the distinction between maybe RAG fine-tuning and kind of how MongoDB's Atlas Vector Search and Stream Processing kind of fit into this?

Dev Ittycheria
CEO, MongoDB

Yeah. So, there's a lot of this big debate between fine-tuning and RAG, and people say if you can fine-tune a model, you don't really need to use RAG. Given the way that models are being built and, you know, the way the models are moving, the performance of those models is kind of all gravitating to, you know, one level. I think the way to distinguish yourself is actually through RAG. I think fine-tuning over time will go away, and RAG will be the future. And now there's something called advanced RAG, where you do very, very sophisticated questioning, use multiple data sources, do iterative questioning loops, and so on and so forth.

So I think RAG is gonna get much more sophisticated, and we do believe that using vector embeddings, you know, and then using vector search functionality to marry your data with the data sitting in the LLM, will become an important tool in your arsenal. Because data is where you'll actually define your business logic. Like, in the old world of SaaS, software is where you defined your business logic. Today, data is the way you define your business logic, and that's why data becomes so, so important.
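[Editor's note: the RAG retrieval step described here — embed the question, find the nearest stored vectors, hand those documents to the LLM as context — can be sketched with a toy in-memory index. Everything below is invented for illustration; a real deployment would use a managed vector index rather than this brute-force scan, and real embeddings have hundreds of dimensions, not three:]

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, index, k=2):
    """Return the k documents whose embeddings are closest to the query —
    these are what get passed to the LLM as grounding context."""
    ranked = sorted(index, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in ranked[:k]]

# Toy "vector index": hand-made 3-d embeddings standing in for real ones.
index = [
    {"text": "refund policy", "vec": [0.9, 0.1, 0.0]},
    {"text": "shipping times", "vec": [0.0, 0.9, 0.1]},
    {"text": "returns process", "vec": [0.8, 0.2, 0.1]},
]
# A query embedding close to the refund/returns documents.
docs = retrieve([1.0, 0.1, 0.0], index)
```

The retrieved documents are then folded into the prompt, which is the sense in which RAG "marries your data" with what the model already knows.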

Moderator

So why not, why not become an apps company?

Dev Ittycheria
CEO, MongoDB

Well, I think-

Moderator

Generative AI or-

Dev Ittycheria
CEO, MongoDB

If we were at 80% share of the database market, maybe we'd contemplate doing that, but we're far from it. You know, we wanna stay... I think it's also very dangerous for a company of our scale to try and be all things to all people. And so we really wanna focus on our core and where we think we're well differentiated.

Moderator

Thank you. And what are the things, Dev, you've done? You answered this in part, but from a product perspective, go-to-market perspective, what are the things that you've done to retool the company for success in GenAI?

Dev Ittycheria
CEO, MongoDB

Yeah. I would say, if you've tracked us, you know, since we've gone public — which is, you know, since 2017 — one of the things that I think we've been very constant on is that we believe that to build a great software franchise, you need to marry great product with great go-to-market. The magic happens when you bring those two together. And yes, we had a stumble in Q1, and we still feel terrible about it, but, you know, historically, we've executed quite well. And I would say a lot of other companies struggle because they either focus too much on product and not go-to-market, or focus too much on go-to-market and have a very thin product.

And I think the real, you know, bear case against MongoDB was like: How can you go compete and become a general-purpose database? No one believed it. "There's no way a NoSQL database could become a general-purpose database." Everyone said, "That's not gonna happen." We proved that it happened. You know, our cloud business — everyone said, "There's no way you can compete with the hyperscalers." We proved that wrong. And I would say our cloud business is the gold standard in terms of how other companies look at what they would try to do. And so I think that's all a function of, like, marrying a great product and innovating very aggressively on the product side with strong distribution.

I would put our sales force against anyone any day in terms of our ability to win.

Moderator

Excellent. Do a quick pulse check and see if anybody is brave enough to... If you've had your coffee. Yes, I do see two hands. Go ahead, and then we'll bring the... You can just shout it out now.

Maybe you can give a double click on GenAI workloads and how you see them evolving over the next five years, given the...

Dev Ittycheria
CEO, MongoDB

Yeah, I mean, you know, I wish I could be so prescient as to tell you exactly how this is all gonna shake out, and I don't think anyone really knows. But one thing I do believe strongly is that the inference market will be far larger than the training market, because, by definition, you train once, maybe once or twice, but you're inferring all the time, right? And there's lots of I, you know, investment, going into these AI models, and a lot of people are saying, "Where's the R?" Right? You know, Sequoia came out with that blog. There are other people who've said, "You know, there's so much CapEx being spent." The R, the return, has to come through the inference workloads.

Someone has to get value out of all these trained models. Training is a means to an end, not an end unto itself. And so that's why I think, you know, we're well positioned. It's taking longer. I think the difference, if you had asked me this question a year and a half ago, is I would never have thought how expensive, how much capital, is required to build these models, right? You know, when Facebook and the hyperscalers say they're gonna spend billions and billions of dollars every year on training and building out these models, it just tells you that it's a very expensive proposition.

But the good news for us is that when those models are trained or people feel like they're comfortable really using them in production, we're the beneficiary of those trained models.

Moderator

So you're basically saying: slow to train, quick to infer.

Dev Ittycheria
CEO, MongoDB

Yeah.

Moderator

There's another question. Was there...? Yeah, you can just... Yeah, if you can get the mic over then.

You definitely touched on this. All of us investors are trying to figure out who the main players are gonna be. You talked about Mongo being a core ingredient. So you touched a little bit on the hyperscalers. You just mentioned Databricks and Snowflake. Of course, there's, you know, Palantir, which I think today is in the S&P, who, like, talk about their ontology. Can you explain to us why Mongo, in the inference world, will be a core ingredient of AI versus all these other players? What's the difference? What are the distinctions of what you do that will allow you to be one of the winners?

Dev Ittycheria
CEO, MongoDB

Yeah. So what I'll say is, this is the same question that was asked of us seven years ago: Why are you gonna win the NoSQL game, right? There was Couchbase, DataStax. You know, the hyperscalers had their own NoSQL variants, and we ended up winning the NoSQL game. One is that the document model is a superset model, so you can do things like key-value lookups, you can do more traditional queries, you can do joins, you can do time series use cases, you can do graph use cases. So architecturally, we are designed to be a general-purpose database. The second thing I would say is that an AI workload requires the ability to work with very rich and complex data structures.
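The "superset" point, that one flexible document model can serve key-value lookups, richer queries, and time-series-style access, can be sketched in plain Python. This is purely illustrative (not the MongoDB API); the collection, field names, and helper functions are made up for the example.

```python
# A tiny in-memory "collection" of JSON-like documents (illustrative only).
orders = [
    {"_id": 1, "user": "ana", "items": [{"sku": "a1", "qty": 2}],
     "ts": "2024-09-01T10:00:00Z", "total": 40.0},
    {"_id": 2, "user": "ben", "items": [{"sku": "b7", "qty": 1}],
     "ts": "2024-09-01T11:30:00Z", "total": 15.5},
    {"_id": 3, "user": "ana", "items": [{"sku": "a1", "qty": 5}],
     "ts": "2024-09-02T09:15:00Z", "total": 100.0},
]

def find_one(coll, **filters):
    """Key-value style lookup: first document matching all given top-level fields."""
    return next(
        (d for d in coll if all(d.get(k) == v for k, v in filters.items())),
        None,
    )

def find(coll, pred):
    """Richer query: arbitrary predicate over whole documents."""
    return [d for d in coll if pred(d)]

# Key-value lookup by _id
assert find_one(orders, _id=2)["user"] == "ben"
# Time-series-style range query over the timestamp field
sept1 = find(orders, lambda d: d["ts"].startswith("2024-09-01"))
assert len(sept1) == 2
```

The same documents answer both access patterns without a schema change, which is the gist of the superset argument.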

We are designed to store, and query, rich and complex data structures. JSON is the canonical example of a rich and complex data structure, and we're a JSON database. A lot of people are saying, "Well, we now support JSON," but a lot of these relational tools have to do very complicated and convoluted things, like off-row storage for a very large object, say unstructured data like a video or a graphic, and the performance overhead of those kinds of off-row storage techniques becomes very, very high. And then we're, by definition, a natively distributed system, right? Relational databases are designed to be single-node systems.

Our most basic configuration is a three-node replica set, and we have customers who've deployed not just replica sets but sharded clusters, where you spread the data across multiple nodes because your data volume is so high. So architecturally, we are designed to scale massively. And then there's the flexibility of our data model: you start with a data model, but it changes over time, and there's nothing better than MongoDB for that. That's one of its strongest suits, whereas other data models become very brittle over time, and it becomes harder and harder to change and add new features and capabilities. So for all those reasons... and then the last one I'd say is developer mindshare. We are the world's most popular modern database, right?
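The sharding idea, spreading documents across nodes when data volume outgrows one machine, can be sketched with a toy hash-based router. This is a hypothetical illustration, not MongoDB's internal routing; the shard names and key choice are invented for the example.

```python
import hashlib

# Illustrative hashed sharding: route each document to one of N shards
# by hashing its shard key, so writes and data volume spread evenly.
SHARDS = ["shard-0", "shard-1", "shard-2"]

def route(shard_key_value: str, shards=SHARDS) -> str:
    """Deterministically map a shard-key value to one shard."""
    digest = hashlib.sha256(shard_key_value.encode("utf-8")).hexdigest()
    return shards[int(digest, 16) % len(shards)]

# The same key always lands on the same shard...
assert route("user-42") == route("user-42")
# ...and every key lands on some shard in the cluster.
assert all(route(k) in SHARDS for k in ("user-1", "user-2", "user-3"))
```

A hash of the shard key gives an even spread at the cost of range-query locality, which is the classic trade-off in picking a sharding strategy.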

Now, long term, I would say there's probably gonna be a relational standard and what I call a modern standard, where some people still want to stay relational. But I think for those people who truly want to modernize, we have become the default choice.

Moderator

I think there was a question here. Bijan, you had a question? Yeah.

Many of us are paying close attention to the supply side, and I guess they've been very consistent that they're undershipping versus the amount of demand that's out there. And I'd say, you know, intuitively, you guys would be some of the folks that would start to see it. We hear of customers that are waiting to go to production, or to expand and scale, until past the first half of calendar next year. So I'm curious if the conversations that you're having would support that. A lot of folks haven't been able to realize as much of the production that they'd like to, but any kind of corroboration on that?

Dev Ittycheria
CEO, MongoDB

Yeah. So, as you know, we have 50,000 customers, and almost, like, 60-70% of the Fortune 500, I forget the stat now, are customers of ours today, so we get pretty good visibility into large accounts. I recently met with a large financial services customer, not Goldman, who has roughly about 45,000 developers. I asked them: How many AI workloads do you have in production? They told me 15. Then I asked them: How many of those are client-facing workloads? They said, "Zero." And the reason for that is they're terrified of the hallucinations of GenAI, because with a probabilistic system, you can ask the same question three times and get three different answers, right?

And so they really need to get comfortable that they can put enough guardrails around, you know, these probabilistic systems or AI models to feel comfortable exposing them to end customers who make financial decisions based on the recommendations or advice they're getting from these models. So I think as the research breakthroughs happen, as these models become more accurate, as they become more performant, as they, you know, have lower latency, et cetera, people will slowly start becoming more and more comfortable with deploying these apps in production. But I think it's an iterative issue: the research breakthroughs have to happen, you know, before people start deploying these en masse.

The area where we're seeing the most interest is app modernization, because the biggest problem customers had is rewriting the app code to modernize a legacy, say, Siebel-based Oracle app, to a more modern architecture. With these code generation tools, they can do three things. One, analyze your existing code, because a lot of the development teams for these old apps don't even exist anymore; you can understand what every line of, say, a million lines of code does. Two, reverse engineer tests, so you can say: "Okay, with this input, I get this output." And then three, reproduce the code in a new, more modern language, and use the same tests to make sure the outputs match.

Because no one's gonna cut over to a new system if they cannot guarantee that the system works like it did before. So that's where we're seeing a lot of interest, because people say, like: "Finally, I have a chance to really, you know, modernize my legacy estate."
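The three-step workflow Dev outlines, analyze the legacy code, reverse-engineer tests from its behavior, then require the rewrite to reproduce that behavior, can be sketched in a few lines. This is a hypothetical illustration; the `legacy_discount`/`modern_discount` functions and the inputs stand in for real legacy and rewritten code.

```python
# Stand-in for the legacy app logic being modernized (illustrative only).
def legacy_discount(total_cents: int) -> int:
    if total_cents >= 10000:
        return total_cents * 90 // 100  # 10% off large orders
    return total_cents

# Step 2: reverse-engineer tests by recording the legacy code's
# input -> output behavior as a "golden" reference.
golden = {t: legacy_discount(t) for t in [0, 999, 10000, 25000]}

# Step 3: the rewritten implementation (here, trivially re-expressed).
def modern_discount(total_cents: int) -> int:
    return total_cents * 90 // 100 if total_cents >= 10000 else total_cents

# Cut over only if every recorded input produces the same output.
assert all(modern_discount(t) == out for t, out in golden.items())
```

The golden-record check is what gives teams the guarantee Dev mentions: the new system demonstrably behaves like the old one on every captured case before cutover.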

Moderator

On that note, why don't we wrap it up? Thank you so much for your time.

Dev Ittycheria
CEO, MongoDB

Thank you.

Moderator

Thank you for your attention as well.
