MongoDB, Inc. (MDB)

MongoDB.local NYC

May 2, 2024

Michael Gordon
CFO, MongoDB

All righty. Thank you all for coming. I hope you're able to enjoy and join the keynotes. A bunch of great announcements. We've got a pretty packed agenda here, so I'll try and help us get right into it. We're gonna start off with a product update. As I think people know, we mostly do these to give some pretty deep insight into our products, our product roadmap, give you access and insight into our customers, how they're using our product, 'cause obviously, we tell you a lot of things, but we always find it really helpful in this format and this venue to give you as much exposure to that side of the house. So that's really what dominates the bulk of the agenda.

We'll have an AI partner panel, just because of the recent announcements there and how important a topic that is. That should be interesting. We'll host a customer panel to, again, sort of give you that customer view of how they're using MongoDB and what's going on, and talk about partners. I'll spend a minute giving kind of just a quick business update, and then we'll have Q&A with Dev and me and Sahir. So with that, let's dive in. Obviously, I provide all the appropriate safe harbor statements, but I'll actually particularly underscore this 'cause the timing of this event is a little bit unusual. Our quarter just ended two days ago, for those who don't know. But the Javits Center was available today.

And so, we have this sort of unusual timing. So obviously, we're not gonna talk about any recent events or trends or anything like that, and you'll just save us a lot of hassle and headache if you don't ask about it. So with that, let's dive right in. So first, we're pursuing one of the largest markets in all of software. You've heard us say this before, the market is $94 billion in 2024 per IDC. That's growing to $153 billion in 2028. That's almost $60 billion of market creation, right? So when we talk about market size, that gives you a little bit of a sense of what we're talking about.

It's growing quite quickly, and that's because it sits at sort of the center of the strategic nexus for what helps businesses drive competitive advantage from their technology, right? That's the position that we're in. There are also some differences in our market, though. Many markets in software tend to be driven by a unit of competition that is the customer, right? The results in those accounts are binary. You know, we at MongoDB use Workday, or we do not use Workday. Different parts of the team, different parts of the organization don't use different HR software from one another, right? It's somewhat monolithic within the application, within the estate. But in our market, it's quite different. What you see is that the unit of competition is the workload.

So that means that, in general, we compete one workload at a time. We're never fully shut out of an account. There are always opportunities, given that people are constantly creating new applications. But just because you win an application within a big bank doesn't suddenly mean that they take all of their hundreds of thousands of applications and migrate them to MongoDB. So it's an important dynamic in our market. It's different than what exists in some other markets, and it's one of the things we want to call out. And therefore, to grow with an account, we typically need to acquire more workloads over time.

Yes, an individual application will grow over time, but to really see the kind of growth that we're seeing and to kind of capture our full market potential, you need to win more applications within the account. So we're gonna spend a lot of time over the course of these couple hours talking about that, and a lot of what you heard in the product announcements and from us over the quarters is focused on how do we win more workloads within an account. And you can see, you know, this just sort of shows a progression over time. You know, the columns here represent different business lines, and the rows are effectively different applications. It's really more illustrative, obviously, but just shows that you try and penetrate an account more quickly and more deeply over time.

The best way to win workloads more quickly within an account is to become a standard within the account. You know, there isn't one sort of textbook, Merriam-Webster dictionary definition of what being a standard is, but in general, for us, what becoming a standard means is that we're approved to be broadly used within the account. We're gonna talk a bunch about this. This is really what our efforts have all been oriented around. Once we become a standard, there are a bunch of different benefits to us. I'll talk about three. There's sort of explicit C-level support and endorsement, there's access, really reduced friction within the account, and streamlined application deployment and support. Once we become a standard within the account, we've got real buy-in from the executive team.

They understand the competitive advantage that they're gonna help drive from their internally developed software when they leverage MongoDB. Whether that's the scalability, the nimbleness, or the agility that gives them. They see the benefits in developer productivity and performance, and they want to leverage that more for their teams. Secondly, we get open access to developers within the account. They're encouraged to try the platform, to learn more about it if they don't know about it. And all these things are designed to reduce friction, which complements kind of a bottom-up go-to-market motion and kind of gives greater permission within the account. And then finally, once you build a workload and once you launch it and put it in production, you've got support, right?

Centralized support, often from the IT team, helping you develop, orchestrate the deployments, and giving you the peace of mind that, you know, you're not left as an island on your own as a company, or as a developer team within the company, but you've got the full support of the company behind you. So I'm gonna walk through three different examples of what becoming a standard looks like and means. These are obviously illustrative, but it gives you a sense for what we're trying to do in our accounts and kind of change the nature of the game from winning, you know, workload by workload over time, and how do we kind of do that more quickly and more effectively.

So this first chart is a leading global financial services company: a Fortune 100 company and one of the top five global financial institutions in the world. And they had picked MongoDB a number of years ago to solve a critical pain point. That's pretty typical in an account like this, where there was some use case where they were really struggling, and we were able to help them solve that. But we weren't widely deployed in the account. As you can see, we kind of grew our ARR over time, adding workloads into the account. But these different teams operated independently. Each had to manage its own MongoDB instance. But you know, to the institution's credit, they recognized that they needed more enhanced capabilities.

They saw the popularity of MongoDB within the account, and so they started narrowing in on MongoDB Atlas, particularly in this case, on AWS. And they were compelled and drawn by, you know, our capabilities on full-text search, time series, workload isolation, and things like that. And so the momentum starts building in the account. You can see the ARR growing as it goes from 2020 to 2022. Interest is sort of increasing across the developer teams. The leadership is realizing the value of the document model, is seeing how popular we are with developers within the company, and it's also drawn to the resiliency of the cloud infrastructure. And so they're very focused on application modernization, they're focused on cloud adoption.

And so with those as priorities, they realize that MongoDB is the ideal platform for them, given the breadth of capabilities, the popularity with other developers, and the seamlessness with which we integrate with the major cloud providers. So all that culminates, as you can see here, at the beginning of 2023, when they select MongoDB Atlas and make MongoDB Atlas generally available within the institution, right? So it really signals a sort of key pivotal moment in our partnership with them. And what we're seeing today is we're witnessing product teams and product owners consolidate their technology stack onto MongoDB. You know, replacing Dynamo, replacing Cassandra, replacing MySQL, replacing Sybase, as they leverage this unified developer data platform that we have.

Also, you know, financial services, as many of you know, is a highly regulated industry, right? And so there's clear demand for a solution like MongoDB that is multi-cloud, highly resilient, and serves all the different needs, from a security and other standpoint, of those regulated industries. Secondly, this is a global Fortune 100 healthcare company. So for years, this health insurance company had relied on MongoDB, again, for kind of a core critical application. That's how we started out. We started out with use cases around fraud claims and analytics, and provider search internally, that had become real pain points. And so we won those, and it's great, but they hadn't identified any opportunities to expand significantly, particularly with Atlas.

So we developed a business case working with them, identified the first potential Atlas workload. As they started getting insight and building kind of the business case, they identified other workloads that sort of made sense for MongoDB. And ultimately, what we moved into was production readiness testing. We performed incredibly well. They started developing more confidence. They run a process internally where they rank providers. So we went through that internal approval ranking process, and that led to them deciding to standardize on MongoDB. And so what you can see is for the past year and a half or so, Atlas has been the preferred database within this company, and any developer in any business unit can get started with a simple click of a button, right?

So think about not just the C-suite buy-in, but that removal of friction, right? Now I can do it. I can adopt it seamlessly, super easily, with the click of a button. Compare that to being a non-preferred database, where you have to go build a business case, get exceptions, et cetera, et cetera. So it gives you a little bit of a sense for what this means when we talk about being a standard and how it plays out in terms of specific companies. Third, I'll talk about a global retailer. We landed at one of the largest global retailers. We initially won an application for e-commerce delivery for them. And it was pretty standard. You know, we were competing against Dynamo and RDS and Postgres.

And again, that application was going fine, and as you can see, you know, applications tend to grow, but as I said before, to grow within the account, you need to continue to win new applications. So, two years later, given the success of that first application, we were selected for an Oracle replacement of their real-time store inventory system. And then what you can see is the customer really saw how valuable Atlas was, allowing them to scale across thousands of stores, adding new features quickly, scaling, innovating, providing that agility and competitive advantage that they were looking for. So in 2021, Atlas was chosen as the modern standard for their R&D organization, and across all the kind of key inventory systems.

And so today, what the customer's done is they've streamlined internal development, so any team can now create and link their Atlas projects. What they've done is they've built sort of this centralized pool, so they've removed that internal bureaucracy of budget allocation and figuring out how it all works. So it's just, again, a simple click, and that's accelerated adoption within the account. And we've had recent wins where Atlas Search was chosen over Elastic for the inventory search. Atlas Device Sync was chosen over a homegrown solution, and so we've continued to grow within the account.

So the idea here is really just to give people a little bit of flavor behind when we say, "What does it mean to be a standard, what does that look like, and how does it play out?" Again, all these are illustrative and certainly, you know, in many of our accounts, we're not a standard yet, but we're becoming a standard in more and more of our accounts over time. What do you need to be a standard? We need to solve a broad range of problems. So when you hear us talk about the breadth of use cases and the expansiveness of the developer data platform vision, that's what we're talking about. That's what gives teams, executive teams in particular, the confidence to build on us as a platform. Second...

If we accelerate application modernization and help move relational migrations onto our platform, that's a critical thing that helps strengthen the position and gives customers incremental confidence to invest in us. And then finally, you heard a lot of talk about AI. Obviously, it's newer over the last year and a half as a key focal point within development teams. But establishing ourselves as a critical partner for AI is really one of the key things that we do. And again, this also speaks not just to the day-to-day applications and use cases, but also really gives those C-suite level executives the confidence to center on MongoDB. So with that, I'm going to turn it over to Andrew Davidson, who's our SVP of Product. Thank you, Andrew.

Andrew Davidson
SVP Product, MongoDB

Folks, good morning. Good to be here. I'm Andrew Davidson. So yeah, I'm gonna talk about why solving a broad range of problems is a crucial way for us to accelerate the customer value realization that allows them to make us a standard to consolidate workloads onto our platform. If we think about what's happened over the last decade in our industry, enterprises have been struggling with running into the fundamental limitations of legacy relational databases and layering in a variety of niche point solutions to work around those limitations. Whether it's running into scalability challenges, continuous availability challenges that cause someone to layer in a key value store, or query challenges that cause someone to layer in a search engine, or basically applications that start getting sluggish as they combine more engines, so they layer in a cache.

There are many different shapes of these kinds of spaghetti snowflake architectures that you start seeing out there. And every one of our enterprise customers in this entire industry has some of this going on. This exists on-prem, in the legacy environment, it exists in the lift-and-shift state in the public cloud, and it exists in the sprawling, out-of-control complexity of cloud services as well. And this complexity has a bunch of problems. It leads to developer friction, lots of different interfaces to manage. It leads to cost overruns because there are so many different components to manage and pay for, and a lot of duplication of effort. And the governance tax that this puts on your system really slows things down. It makes things extremely challenging to move forward.

Executives feel that they're investing an enormous amount just to stay afloat because of this complexity. They can't unlock the business value they're trying to unlock, and developers are spending the vast majority of their time wrestling with data rather than unlocking business value. Our industry is starting to realize we've been trying to modernize for a long time, but we're not seeing the return on investment yet. It's time to think differently. It's time to start weaving the way we think about software more fundamentally into how we operate as a business, to empower smaller teams, to reduce complexity and have significant business impact, all while reducing the sprawling cost and consolidating with a higher return on investment. Our developer data platform vision is all about bringing that idea to life.

It's all about the idea that you should have a singular place in which you can build for the vast majority of your software needs. You've probably seen this diagram or a diagram like it. The idea here is, whatever software application you're building up top, it should be really easy to have that application connect through this wonderful, elegant abstraction, the document model, which we'll talk about later, and the MongoDB query API. And behind the scenes, to be able to have the database drive the wide variety of workload shapes that bring that application to life, from the bread and butter, operational, transactional, to full text search and time series, and other capabilities, which we'll talk about. And we, of course, do this on a secure, resilient, multi-cloud foundation.

That's kind of the core idea for us, and this allows small teams to build fast and to be able to unlock enormous business value as a result of how quickly they can do so, even though they have the power to scale. But I want to double-click into the first box, kind of the 10,000-pound gorilla, operational, transactional. This is the workhorse that sits behind the software-defined economy. Every transaction, every engagement, all of you typing on a live application, every single interaction throughout your life and behind the scenes on a business, any internal business system, all of that lives and breathes and dies inside the operational transactional database layer. This is a huge part of the digital economy, and it's somewhat hidden away by all the applications we use.

If we think about the time in which MongoDB was born, what, 15 years ago, at that time, it was so clear that the relational database model that had been pioneered half a century earlier was not fit for purpose for the needs of the moment. It was simply too inflexible, meaning you had to figure out what your software was going to do up front, and you couldn't change it from there. We're all conditioned to expect software to continuously evolve to our needs and our business' needs, and this model inhibited that from happening. The data model was also simply less able to handle the flexibility and boundless fidelity of the real world.

When you think about how software today has reached every nook and cranny of our lives, that's made possible because software can express those ideas, and the relational database made it very difficult to do that. It wasn't able to handle internet scale, couldn't handle globally distributed use cases, and the cost model was extremely punitive. The hardware scaled vertically, which led to exponential cost dynamics rather than the linear cost dynamics that our industry needed to see instead. So our whole philosophy with operational transactional has always been an amazing developer experience, because if the developer, the artisan of the back end of this software, can't easily build software quickly, what's it all for? They're never gonna use it. And that's centered on the document model.

We have to make it secure and easy for these teams to build fast, and the distributed system makes it very reasonable to have a resilient and scalable foundation. This has been a core differentiator for MongoDB since the beginning, and of course, it's a core differentiator as well to be able to run it anywhere, in the on-prem data center or in the public cloud in Atlas. And if we think about the fundamental reasons why the document model is so powerful, it's actually across a couple key dimensions that are highly complementary. The first is the developer experience. The fact that a developer can think in their code naturally, write an object in their code in JSON, which is the lingua franca of the internet, and be able to write that to the database in a first-class way, allows the developer to feel like they can fly.

It's totally natural. But the flexibility that I was getting at before allows you to model the world as it evolves, and this is so important. If you think about what I'm saying to you right now, whatever you're internalizing from this conversation, it's not going into tables in your mind. It's going into shapes of data that are more natural for you to think about and categorize, and the document model lets you do exactly that naturally. But here's what's really interesting. The document model actually has profound advantages for performance and scalability as well. This is where it's kinda like having your cake and eating it, too. The document model stores the data the way it's accessed, and this is what allows for that linear cost scaling that's so important for infinite scale use cases today. This is what a document looks like.

You can see top-level fields, self-descriptive, different kinds of data inside. You can see the power of embedded arrays or lists of sub-documents. This structure, which is arbitrary here for illustration (it could be any structure), is extremely liberating for people. The power of MongoDB comes largely from the ability to layer in indexes on top of this. These indexes allow you to drive those different shapes of workloads that drive those different kinds of software applications on top, the transactional, the search. And so in this case, we might have textual information, we might have metadata about images or songs or videos, and we could even have our vector embeddings field, the one that's highlighted, and we can layer in on top of that a vector search index, which we'll talk about later, all integrated into the power of the document model.
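To make that concrete, here is a minimal sketch of such a document and the index definitions you might layer on top of it. The document, its field names, and the index shapes are invented for illustration; the specs follow the general form MongoDB drivers and Atlas accept, not an exact recipe from the talk.

```python
# A hypothetical "song" document: self-descriptive top-level fields, an
# embedded array of sub-documents (reviews), and a vector embedding field.
song = {
    "title": "Midnight Train",
    "artist": "The Examples",
    "year": 2023,
    "genres": ["rock", "indie"],
    "reviews": [                       # embedded array of sub-documents
        {"user": "ada", "rating": 5, "text": "instant classic"},
        {"user": "sam", "rating": 4, "text": "great hooks"},
    ],
    "embedding": [0.12, -0.48, 0.33, 0.91],  # toy 4-dim vector; real ones have hundreds of dims
}

# A conventional index on a top-level field, in the index-spec shape a
# driver such as PyMongo takes (collection.create_index(artist_index)):
artist_index = [("artist", 1)]

# An Atlas Vector Search index definition layered on the embedding field,
# in the JSON shape Atlas accepts (dimensions must match the embedding):
vector_index = {
    "fields": [{
        "type": "vector",
        "path": "embedding",
        "numDimensions": len(song["embedding"]),
        "similarity": "cosine",
    }]
}
```

The point is that the operational document and the search or vector indexes live on the same data, rather than in separate systems.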

So the power of this platform has given us the incredible privilege of having incredible companies around the world build amazing applications and, in many ways, core transactional systems on top of it. What's so amazing about this is these are companies everywhere in the world, in every industry, at every stage of company, from someone learning to code for the first time, all the way up to the core transactional banking application that you may be using literally right now. So for us, we have to keep investing in that core, in that transactional capability.

MongoDB 8.0, which Sahir announced today, brings major improvements to performance across the board, including for time series workloads, major improvements to the rebalancing speed for sharding, as well as a continuation of our first-to-industry queryable encryption roadmap, with the addition of range support coming soon. So now let's shift gears to full-text search, such an important workload shape native in our platform. Let's talk about the alternative. The alternative is a world in which your database is completely separate from your search engine. This is what people did for years. They'd have to learn how to manage two systems, move data between them. They'd have to manage different APIs, pay for this complexity, pay for the governance overhead. It was inflexible.

MongoDB Atlas with Atlas Search unifies those into an integrated experience for the application: one endpoint, one system to manage, all the power of search combined with the transactional system. It reduces that synchronization tax, lets you move faster, gives a superior developer experience with reduced governance overhead, and it's enterprise-ready in a fully managed platform. Our customers love this. They've been able to, in many cases, consolidate, replace search engines, and move that into Atlas Search in an integrated system. And also, our customers are simply able to build faster. They can reach market faster, and their developers don't have to learn a whole different system. They're ready to go immediately to build great search use cases. And by the way, search drives enormous business value, next best action in countless contexts. So on the search roadmap front, we've been doing a ton.
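One way to picture that single endpoint: a full-text search stage and ordinary query stages can sit in the same aggregation pipeline. This is a rough sketch expressed as Python dicts; the index name, collection, and fields are hypothetical, not from the talk.

```python
# One aggregation pipeline, one endpoint: an Atlas Search stage ($search)
# followed by ordinary operational stages ($match, $project, $limit).
pipeline = [
    {"$search": {                       # full-text search stage (Atlas Search)
        "index": "default",             # hypothetical search index name
        "text": {"query": "wireless headphones", "path": "description"},
    }},
    {"$match": {"in_stock": True}},     # regular transactional filter, same query
    {"$project": {"name": 1, "price": 1, "_id": 0}},
    {"$limit": 10},
]
# With a driver you would run something like: db.products.aggregate(pipeline)
```

In the two-system world, the `$search` half would live in a separate search engine with its own API and its own copy of the data.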

The most important thing to be aware of is that we've layered in something called search nodes over the last year. Those are now generally available on AWS and Google Cloud, and we were excited to announce today that they're now in public preview on Azure, so they're available on all three cloud providers today. Search nodes deliver independent scaling on optimized hardware on the back end of those database clusters, crucial for at-scale applications. We also made a really exciting announcement today, a big one, which is that search will be coming to MongoDB Community later this year. You know, MongoDB Community is in many ways for us, the top of the funnel.

Millions of developers around the world, everywhere, are downloading the database, getting started, writing code on their laptops, and bringing this extremely powerful capability, this strong differentiator of search, to those people everywhere they are, we think is a key differentiator for our platform. Shifting to vector search. You probably know that vector search is related to generative AI applications. The canonical application architecture of the moment, retrieval-augmented generation, or RAG, the idea of building these applications has a couple key steps to keep in mind. You're gonna have your source data, typically in your operational data store, and you're gonna create vector embeddings to summarize that data.

You're gonna take that natural language prompt from, say, an end user, and you're gonna use that to search via vector search to find the relevant context that was encoded into vector embeddings, and you're gonna use that to pull back the relevant context from the operational data store, as it was before vector embedding creation. And you're gonna push that through a prompt engineering pipeline through your large language model, which in turn will give you a cogent response to send back to the end user. This is kind of the typical RAG architecture, greatly simplified. Well, for us, it was just such a natural and obvious consolidation to say that vector search and the operational data store should be one cohesive platform, and that's, of course, what we announced last year. The retrieval-augmented generation workflow is fundamentally about the power of combining... People always say it's really just about vector databases.

Actually, it's about combining the power of vector with the document model and other classes of queries to do easy role-based access control and filters so that you can easily reason about the security model. So, for example, if someone in this room is only authorized to certain classes of data, we can so easily understand how to only push that data into the prompt engineering pipeline, and we can combine that power with the vector search query to find exactly what's relevant to them. Our customers love the fact that this is an enterprise-ready platform, where Atlas is already approved, they can build RAG applications, GenAI hackathons, et cetera, literally right now, and they can move fast.
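A toy sketch of that filter-then-rank pattern, with in-memory documents and cosine similarity standing in for the database and the vector index. All names, labels, and vectors here are invented for illustration.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy corpus: chunks carry their own embeddings plus an access label,
# mimicking documents stored alongside vectors in the operational store.
docs = [
    {"text": "Q3 revenue grew 20%", "acl": "finance", "vec": [0.9, 0.1, 0.0]},
    {"text": "New HR leave policy",  "acl": "hr",      "vec": [0.1, 0.9, 0.0]},
    {"text": "Q3 margin improved",   "acl": "finance", "vec": [0.8, 0.2, 0.1]},
]

def retrieve(query_vec, allowed_acls, k=2):
    # 1. role-based pre-filter: only documents the caller may see
    visible = [d for d in docs if d["acl"] in allowed_acls]
    # 2. vector similarity ranking over the filtered set
    ranked = sorted(visible, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    # 3. top-k chunks become the context for the prompt
    return [d["text"] for d in ranked[:k]]

# A finance-only user asking a "revenue-like" question (toy query vector);
# the returned chunks would then be assembled into the LLM prompt.
context = retrieve([1.0, 0.0, 0.0], allowed_acls={"finance"})
```

The security reasoning lives in step 1: data the caller isn't authorized for never reaches the prompt engineering pipeline, regardless of how similar its vectors are.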

The document model is a better fit for RAG than the alternatives, or, put another way, I would say RAG drives the value of the document model more than anything I've ever seen. Because the power of that ability to model the fidelity of the real world is particularly compelling in a world in which you need to chunk into the document all the context relevant to answer the question. So data modeling is core to RAG, and the document model is critical to unlocking that. We were excited that Retool did a survey last year, and we actually came into the leader quadrant for both the popularity and net promoter score dimensions, and this was before we were even generally available. We, of course, moved into general availability late last year. Our customers love the value.

They love the fact that they can build fast in an integrated platform. They've been able to put to use knowledge agent-type use cases that have unlocked material business value for them quickly on top of this platform, in many cases. We've been doing a ton of investment around vector search, as you can imagine. So much going on here, including optimized, dedicated vector search nodes on the back end of those database clusters with optimized hardware that's independently scaling, also generally available on AWS and Google Cloud today, also came to public preview on Azure today. We announced a deeper integration with Amazon's Bedrock service. We now have first-class support for their knowledge base and their agents framework.

Of course, as well, we announced that vector search will be coming to MongoDB Community later this year, enabling those millions of developers to build these modern applications wherever they are. Shifting to stream processing. I think we all know that the real world is not static. It's living, it's breathing, it's pulsating with information. Whether it's a digital application, where every user action is being recorded to be learned upon later, or a power plant or a manufacturing assembly line, there are countless use cases for streaming data to be put to use in software, for software to communicate with other software, and many other use cases. And if we look at the core components of a streaming system, there's the stream transport layer. This is basically the pipes that move data around.

There's the stream processing layer that will take data off those pipes and transform it and put it to use in a database, which is where it can be put to use in software. You can move both directions through the system. We realized it was such a natural expansion for us to move towards stream processing because it's gonna make it easier for our customers that are taking advantage of streaming to do so. In particular, the document data model is such a perfect fit for this type of data. When you have a stream coming off any context, the fidelity of that data, again, is going to be flexible. We saw other systems are very rigid for this. It was just, "This is our sweet spot.

Let's do this." So we, of course, have an integrated experience, reducing the operational overhead, making it easier for people to build quickly without feeling the tax. During our public preview, we've seen great customer validation, folks pushing through hundreds of millions of events per month and feeling like, again, they're developers who are already MongoDB experts. They didn't have to go become experts in a whole different system to build applications straight off the streams in their enterprises. So today, huge announcement: Atlas Stream Processing moved to general availability. We started with support for AWS. We'll be expanding to Azure and Google Cloud in the future, and we integrate seamlessly with Apache Kafka, Confluent, Amazon's managed Kafka service, and other Kafka-compatible solutions. Over time, you'll see us add more and more integration points there. And the final topic I want to talk about today is Edge.
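Before moving on, the source-process-land shape of the stream processing discussion above can be sketched in the aggregation-style stages Atlas Stream Processing uses. The connection, topic, database, and field names here are invented for illustration, so treat this as a sketch of the pipeline shape rather than a production configuration.

```python
# A hedged sketch of an Atlas Stream Processing pipeline, expressed as
# Python dicts in the aggregation-style stage syntax the product uses.
pipeline = [
    {"$source": {                      # read events off a Kafka topic
        "connectionName": "kafkaProd", # hypothetical Kafka connection
        "topic": "sensor-readings",
    }},
    {"$tumblingWindow": {              # aggregate over fixed 60-second windows
        "interval": {"size": 60, "unit": "second"},
        "pipeline": [
            {"$group": {"_id": "$deviceId", "avgTemp": {"$avg": "$temp"}}},
        ],
    }},
    {"$merge": {                       # land the results in an Atlas collection
        "into": {"connectionName": "atlasProd",
                 "db": "telemetry", "coll": "temp_averages"},
    }},
]
```

The appeal described in the talk is that this is the same document and pipeline vocabulary MongoDB developers already know, applied to data in motion.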

We all know that cloud computing has transformed everything in our industry. Giant, near-limitless amounts of compute available hundreds of miles away in data centers that we don't have to worry about... But in parallel to all this innovation in cloud, there's been a lot of innovation in terms of compute that runs inside the edge. The edge could be a car, could be an airport, a hospital, a stadium, an event center like this one. As we have more and more powerful compute locally, higher bandwidth connections between the compute locally, there's all kinds of cool things that we can start doing. There's many benefits here. We can imagine orchestrating a symphony inside a location like this conference center.

We can have lots of different systems working in real time, with low latency and continuous availability, powering all kinds of experiences that enrich our lives in a variety of ways, sometimes critical to core aspects of our society, like the hospitals and airports I mentioned before, and sometimes more temporary things, like music festivals, and everything in between. But traditionally, it's been very difficult to build these applications. You had to think about a completely different stack for your public cloud environment than for what you were going to implement at the edge. You had to deal with the challenges of lots of different hardware and software requiring different runtime environments, and lots of patching and security challenges as well.

While these problems are never going to go away completely, that's the nature of edge, we now have something called the Atlas Edge Server that changes the game here quite a bit. Atlas Edge Server is a software solution that runs on top of your hardware in the edge environment. You can think of it as basically bringing a mini MongoDB down to your edge environment that can be the conductor of that local symphony. All the devices locally, kiosks, IoT sensors in a manufacturing assembly line, any number of other examples, can all interface locally with that edge server, low latency, real time, bringing that processing, and increasingly, of course, inference, into the edge environment. You can have many of these satellites all over the world, all synchronizing back up to MongoDB Atlas.
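The pattern being described, devices writing to a local store with low latency while the edge node later syncs those writes up to the cloud, can be sketched like this. It is a toy model of the concept, not the Atlas Edge Server API; all names are invented.

```python
# Toy model of an edge node: writes are served locally for low latency
# and buffered, then pushed up to the cloud store when connectivity
# allows. A real implementation would also pull cloud changes down
# (the bidirectional sync mentioned in the talk).

class EdgeStore:
    def __init__(self):
        self.docs = {}
        self.pending = []          # writes not yet synced to the cloud

    def write(self, doc_id, doc):
        """Local, low-latency write; no round trip to the cloud."""
        self.docs[doc_id] = doc
        self.pending.append(doc_id)

    def sync(self, cloud):
        """Push pending local writes up to the cloud store."""
        for doc_id in self.pending:
            cloud[doc_id] = self.docs[doc_id]
        self.pending.clear()

edge = EdgeStore()
cloud = {}
edge.write("kiosk-7", {"status": "ok", "queue_len": 3})
edge.write("sensor-2", {"temp_c": 18.4})
edge.sync(cloud)                   # connectivity restored; changes flow up
```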

We deliver real-time, bidirectional sync, meaning you can make a change in Atlas and have it show up at the edge, or vice versa, optimized for these kinds of use cases. Today, we launched Atlas Edge Server into public preview. We were in private preview last fall, and we're really excited to get to the next level, with customers starting to bring this to life in a variety of new ways. So that's the end of my section. With that, I'd like to invite to the stage MongoDB's Field CTO for Modernization Factory, Paul Doan.

Paul Doan
Field CTO, MongoDB

Hi, everyone. Good afternoon. Really pleased to be here today. So my name is Paul Doan. For the last half a year, I've taken on the role of Field CTO for a new program of work we call Modernization Factory. I've actually been at MongoDB for about 10 and a half years now. Previously, I was a Distinguished Solutions Architect, working out in the field with many of our customers. So you might ask, what is Modernization Factory? It's basically an internal program of work where we're looking at how we can help our customers accelerate moving their legacy relational workloads and applications to MongoDB by leveraging AI. So as you can see on the slide, I've got a long history in the database industry.

I actually started at Oracle as a graduate engineer back in the mid-1990s, and then moved on to various notable companies where I was building enterprise Java applications against Oracle and other relational databases. Then, following Oracle's acquisition of BEA Systems, the home of WebLogic, in 2008, I came back into Oracle. And that experience with relational databases over the years has given me the insights required to look at how we help modernize our customers' legacy applications from relational to MongoDB for this program of work. So, we've talked about this before, and it's nothing new, but our principal competitor remains legacy relational technology.

For the last 10 years, when I've been out in the field working with our customers, I've seen this daily: customers choosing a database for their net new applications. Invariably, our competition has been relational databases. Four times out of five, I would say, our competition is relational databases and not NoSQL databases. And in those competitive situations, we tend to win. So what are the challenges with relational technology? Well, it was invented 50 years ago, and in this modern world, relational databases are constrained by many of the assumptions people made about computers back then. Data tends to evolve over time, and the rigid schemas of relational databases resist and inhibit that change. And increasingly, modern applications and their databases need to be available 24/7 and able to scale quickly to deal with fluctuating demand.

Relational databases were typically built to run on a single machine servicing a fixed single workload, back when taking the system offline to patch was perfectly acceptable, and these things are untenable for the applications of the modern world. So the obvious question is: if, as I've been seeing out in the field, we're typically winning most of those net new workloads against relational, why does relational technology still have a dominant market share? Well, the reality is, most applications out there in the enterprise aren't new. They've actually been running for decades. And decades ago, when people were choosing a database for their applications, the only viable option was a relational database. Also, getting off relational databases is just hard. So let me try and explain why. There's a tremendous variety...

Yeah, one tends to think about relational applications as all being the same, but there's actually massive variety from one relational application to the next in how each is composed. Each one is very different. So let me try and give you a sense of why that is... If we think about a legacy application in the enterprise, first of all, a database was chosen for it. Was it Oracle? Was it SQL Server? Was it Sybase? Each of them supports subtly different flavors of SQL, and each has things like stored procedures and triggers for embedded business logic that use completely different languages. So you've got that variance just in the database that was chosen for the legacy application.

And then what programming language was chosen to implement that application? Perhaps it was Java, maybe it was COBOL or C# or one of many other languages. That's another variance you've got to deal with across each legacy application you're looking to modernize. And then there's the runtime. When the application's deployed and running in production, it's typically running on some sort of application server. Even in the Java space, there are many different types of application servers with different APIs and different idiosyncrasies, and they force the application to be written differently depending on which application server it's running on. And then, in the code that was written in that legacy application, there are myriad different ways it could be integrating with the relational database. Perhaps someone wrote the code using direct SQL over ODBC to talk to the database.

Perhaps they're leveraging stored procedures written directly into the database, or maybe they're using an object-relational mapping tool like Hibernate, an ORM, to talk to the database. So that's yet another choice, another layer in the same application, that can vary wildly. And then, for that legacy application back in the day, how did they choose the user interface? Maybe it was a 1990s-era data-input-screen technology like Oracle Forms, or maybe it was the 2000s and a native Windows desktop application was built for it. Or maybe it's a more modern web application, but implemented with JavaServer Pages, or with ASP.NET from Microsoft. Again, a lot of variance.

You can start to see that each application constructed back in the day might weave its way between different components in each of these layers. The reality is, each of these layers isn't four different choices; it's typically tens or hundreds, depending on which layer you're looking at. So when we come to look at how we help customers modernize their applications, we have to deal with this complexity and think about how we tackle it. So, it's worth reminding you of the main steps in modernizing a legacy application from relational to MongoDB. We've talked about this before, but first of all, you've got to think about how you take that relational data model and map it to a document model for MongoDB.

Then you've got the really hard part: how do you actually rewrite all the application code through those layers I just talked about on the previous slide, with all their variance? And then, once you're ready with that migrated application, you've got to deploy it, and you've got to move your data quickly from the running legacy relational database to your newly deployed MongoDB database, with as little downtime as possible. So that was a high-level view, but it's worth going a level deeper to think about the main steps involved in that migration.
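The first step named above, mapping a normalized relational model to a document model, often amounts to collapsing rows joined by foreign keys into one embedded document. A minimal sketch, with invented table and field names:

```python
# Normalized relational shape: customer and order rows, joined by key.
customers = [{"id": 1, "name": "Acme Corp"}]
orders = [
    {"id": 10, "customer_id": 1, "total": 250.0},
    {"id": 11, "customer_id": 1, "total": 99.5},
]

def to_document(customer, orders):
    """Embed a customer's orders in one document instead of
    joining two tables at read time."""
    return {
        "_id": customer["id"],
        "name": customer["name"],
        "orders": [{"order_id": o["id"], "total": o["total"]}
                   for o in orders if o["customer_id"] == customer["id"]],
    }

doc = to_document(customers[0], orders)
```

Whether to embed or reference is exactly the kind of design decision the talk's "design solution architecture" step covers; embedding suits data that is read together, as here.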

It's worth doing that because, as I think about Modernization Factory and how we can leverage AI to accelerate this process, we need to look at the different parts involved to see where we can apply AI at each stage to provide that acceleration. So first of all, when we're looking at a legacy relational application, we need to analyze the code base. We need to go through all that code and try to understand the technology stack, like we showed on the other slide. What does the component structure look like? What is its architecture? Until we know that, we can't move on to the next phases of the migration. And then we need to create end-to-end tests, and, probably a surprise to no one in this room, most legacy applications don't have tests.

It wasn't a thing people did back in the day; people didn't see the value, and it was too much hassle. And so a lot of these legacy applications don't have any tests. But if you don't have tests for the legacy application, and then you come to migrate it, how are our customers going to feel assured that what's been migrated is fit for purpose? So that slows things down and introduces risk. Then we need to actually analyze the architecture and think about the target state, and this is where you get into going from a relational model to a document model. But it's also when you start to think: hey, this is a monolith, and as we bring it to MongoDB in the migration, we probably want to modernize to something like microservices as well, to get the full benefits of modernization.

So you've got to deal with all those steps in that design phase. Then you get to actually doing the rewrite, recoding and adding new code. You're partly changing code in all these layers so that, instead of dealing with a relational model and talking to the APIs of the relational database or going via the ORM, you're using different APIs to talk to MongoDB, with a completely different model, a document model. But then, as I talked about before, there are probably other pressures behind why you're modernizing, and other aspects of that modernization that aren't just about getting to MongoDB, where you actually need to leverage other, newer capabilities. And so that brings in the need for more code, and different code, during that rewrite.
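At the data-access layer, the rewrite described above often boils down to replacing embedded SQL with an equivalent document query. A toy before-and-after, where the `find` helper mimics a MongoDB-style filter over in-memory documents and all names and data are invented:

```python
# Invented documents standing in for a migrated orders collection.
docs = [
    {"customer_id": 1, "status": "open", "total": 40.0},
    {"customer_id": 1, "status": "closed", "total": 15.0},
    {"customer_id": 2, "status": "open", "total": 7.5},
]

# Before: SQL embedded in the application code as a string.
legacy_sql = "SELECT * FROM orders WHERE customer_id = ? AND status = ?"

# After: a structured filter, in the spirit of what a MongoDB driver
# accepts. This helper just applies equality matches in memory.
def find(collection, query):
    return [d for d in collection
            if all(d.get(k) == v for k, v in query.items())]

open_orders = find(docs, {"customer_id": 1, "status": "open"})
```

The mechanical change looks small, but as the talk notes, it has to be repeated consistently across every layer (direct SQL, stored procedures, ORM mappings) that touched the old database.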

And then you're back to user acceptance testing, which still happens, and those users are going to do that testing, and that's going to unearth bugs in what you've migrated. It's still not fit for purpose yet, and then you're going to have to address all those bugs. And finally, once you have that application migrated from a code perspective, you actually have to do the switchover. How do you go from a running production system against a relational database to the new, migrated production system on MongoDB as quickly as possible, with as little downtime as possible? So as you can see, there's a lot of work involved and a lot of variability... So last year, we announced the general availability of our Relational Migrator product, a product that really helps simplify modernization.

So if we look at those steps we just showed, where does the Relational Migrator product help today? It helps a lot with designing the solution architecture: being able to take that relational model visually and start dragging and dropping how it maps to the document model, and capturing that. That's now a lot easier, thanks to Relational Migrator. And towards the end of the project, when you've deployed that application and you've got to move the data as quickly as possible from that legacy Oracle or Sybase database to MongoDB, it enables you to do that very quickly, at the push of a button, and live migrate over, minimizing any downtime and providing the validation that you're not losing data or introducing corruption.
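The "move the data and validate it" idea above can be sketched simply: copy every record across, then confirm nothing was lost or corrupted by comparing counts and content hashes. The two stores below are plain lists standing in for the source and target databases; this is an illustration of the validation concept, not Relational Migrator's actual mechanism.

```python
import hashlib, json

def fingerprint(record):
    """Stable content hash of a record, independent of key order."""
    return hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()

def migrate_and_validate(source, target):
    for record in source:
        target.append(dict(record))        # the "copy" step
    # Validation: same number of records, and identical content.
    assert len(source) == len(target), "row count mismatch"
    src = sorted(fingerprint(r) for r in source)
    dst = sorted(fingerprint(r) for r in target)
    assert src == dst, "content mismatch after migration"
    return True

source = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
target = []
ok = migrate_and_validate(source, target)
```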

And as you probably heard earlier today, we've also made a first foray in Relational Migrator into helping rewrite the app code, specifically around stored procedures and triggers, things like PL/SQL in Oracle, and how we can convert that quickly. But as you saw before, there's a whole load of other languages, composition, and components in the application architecture that isn't necessarily being dealt with today, and that we will look at further and further over time. So, the response since we GA'd Relational Migrator last year has been really tremendous. Developers really love the fact that we provide a visual modeler to help you visually map from the relational model to the document model. It's really easy to use and really intuitive.

Then the other thing is, in the past, moving data from one database to another took a lot of engineering to validate, make sure it's right, and eliminate risk. So we provide all that plumbing now, and they really like that: the ability to click a button, synchronize the data, and have all the validation and checks in place to ensure nothing's happening to that data and it's moving over as quickly as possible. However, we can make app modernization much easier thanks to AI. So over the last half a year and going into this year, we've been running several pilots to try and make this much easier.

And I think you heard in the keynote a bit earlier, one of those pilots was Bendigo Bank, a big retail bank in Australia. There's another pilot going on with one of our customers, a Swiss private bank. And we've got many other pilots just starting to spin up at the moment, looking at how we can leverage AI to help with all those steps, accelerating them and making modernization and migrations much quicker. So how are we doing that? If we look again at some of the activities typically involved in modernization today, when you're doing it manually, how do we believe, and how are we starting to see, that we can accelerate them with AI?

So we've talked before about analyzing a legacy system, where a developer might have to spend days or weeks going line by line through a code base they've never seen before and don't understand. LLMs make that easy. It's really straightforward for an LLM to look at a code base and tell you how it's constituted: the technology stack, the componentry. And this is doubly important for legacy systems, because the reality is, in most of our customers, the developers who built that legacy system aren't on that team anymore. In fact, it's normally worse than that: they've left the company, or they've retired entirely. And so that knowledge is gone. The LLMs don't need it. They can go back to the source and make sense of it.
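Before an LLM (or a human) reasons about a legacy system, the analysis step usually starts with an inventory of what's actually in the repository. A tiny illustration, with an invented, hard-coded file list where a real run would walk the repository:

```python
from collections import Counter

# Invented file paths standing in for a legacy repository.
files = [
    "src/OrderService.java", "src/CustomerDao.java",
    "db/procs/calc_interest.sql", "web/checkout.jsp",
    "legacy/billing.cbl",
]

def inventory(paths):
    """Count files per technology, keyed by file extension.
    The resulting summary is the kind of raw material you'd put in
    front of an LLM to characterize the stack."""
    return Counter(p.rsplit(".", 1)[-1] for p in paths)

stack = inventory(files)
```

Even this crude census surfaces the layering the talk describes: Java services, SQL stored procedures, JSP pages, and a stray COBOL module, each needing a different migration treatment.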

So that really speeds up something that might take weeks or months and brings it down to a matter of minutes. And then there's creating the end-to-end tests. What we're finding is that we can record a user using the existing legacy system, maybe it's a web application, and take a log of their interactions. We can then feed that into the large language model, and it can make sense of it and generate functional tests that recreate those interactions. So all of a sudden, very quickly, we've got complete test coverage over a legacy application that never had it before. And then, when we come to do the migration, we can rerun those same generated tests against the migrated app and be assured that it's actually fit for purpose and working.
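The generated tests described above are essentially a recorded session turned into a replayable check. A minimal sketch, where the interaction log, the actions, and the toy "migrated app" are all invented stand-ins (in the real flow, an LLM turns the recording into test code):

```python
# A recorded user session: each step names an action, its inputs,
# and the response the legacy system gave at recording time.
recording = [
    {"action": "add_to_cart", "args": {"sku": "X1", "qty": 2},
     "expected": {"items": 2}},
    {"action": "checkout", "args": {},
     "expected": {"status": "confirmed"}},
]

def replay(recording, app):
    """Re-run a recorded session against an app, checking that each
    response still matches what the legacy system produced."""
    for step in recording:
        result = app[step["action"]](**step["args"])
        for key, want in step["expected"].items():
            assert result[key] == want, f"{step['action']}: {key}"
    return True

# A toy "migrated app" that the generated test is run against.
cart = {"items": 0}
migrated_app = {
    "add_to_cart": lambda sku, qty: (cart.update(items=cart["items"] + qty)
                                     or {"items": cart["items"]}),
    "checkout": lambda: {"status": "confirmed"},
}
passed = replay(recording, migrated_app)
```

Because the expected values come from the legacy system's observed behavior, the same replay serves as the fit-for-purpose check against the migrated application.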

So that de-risks a lot of things and actually collapses a lot of the later testing cycles as well. When it comes to designing the solution architecture, I mentioned before that one of the things our customers typically want to do, as well as moving to MongoDB, is move from a monolithic architecture to a microservices architecture, to get more agility and deal with business change more quickly. Using LLMs, we can analyze the code base and the behaviors of the users using it, and make much more sense of the domain boundaries and what those microservices should look like in a future design, which is actually quite a hard task for a human to do.

And then, of course, comes rewriting the code, the bit you would expect: it's much easier to take all that source code and the observed behavior, feed it into the LLM, and actually generate the new microservices, the new components, and a large part of their implementation. And then when it comes to user testing, of course, you still need human users doing user acceptance testing. But the bugs they find and log, we can feed back into the LLM, and it can derive the fix for each bug and give us the code to apply it. So that's where we're getting acceleration as well.

And then lastly, when it comes to actually migrating and deploying, in reality, most customers that are modernizing applications do this in a phased approach. They're gonna take it component by component and go live with each new component. So what you tend to find is that, over a period of time, you have a hybrid of modernized and legacy application coexisting, partly using the old relational database and partly using the new MongoDB database. So essentially, dual running. LLMs can help inform us how to structure those interim systems as we go down the path to finally dropping the relational database and having the fully migrated application.
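The dual-running pattern just described is often implemented as a thin routing layer: writes go to both databases while the migration is partial, and reads for already-migrated entities come from the new store while everything else still comes from legacy. A sketch of that concept, with plain dicts standing in for the two databases and all names invented:

```python
legacy_db, new_db = {}, {}
MIGRATED = {"customers"}          # entity types already moved over

def write(entity, key, doc):
    legacy_db[(entity, key)] = doc     # keep legacy consistent...
    if entity in MIGRATED:
        new_db[(entity, key)] = doc    # ...while also feeding the new store

def read(entity, key):
    """Route reads: migrated entities come from the new store,
    everything else still from legacy."""
    store = new_db if entity in MIGRATED else legacy_db
    return store[(entity, key)]

write("customers", 1, {"name": "Acme"})
write("invoices", 7, {"total": 120})
acme = read("customers", 1)       # served from the new store
invoice = read("invoices", 7)     # still served from legacy
```

As each component goes live, its entity types move into the migrated set, until the legacy database can finally be dropped.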

So, from these pilots and the other ones going on now, we've got some early lessons that I thought it might be worth sharing with everyone here, some interesting ones. I've talked about this quite a lot, but it's so important, and we're getting such a benefit from it, that I want to emphasize it again. Developers hate writing tests, and when they do write tests, they don't tend to do it very well; they tend to be lazy and skimp on coverage. So as well as finding that AI massively accelerates things, recording what users are doing and automatically generating the tests, we're actually getting better test output than humans produce. So not just acceleration, but better quality that's gonna help and de-risk.

Also, as we work on these different pilots, we're dealing with different LLMs from different cloud providers and other providers. And we're finding that not all LLMs are built equally. Some LLMs, we're finding, aren't very good for modernization tasks. Some are good for some of the types of tasks we've talked about, and some are good for others. So we're starting to amass knowledge of which LLMs to use in which situations, and you can imagine that starting to inform some of our future playbooks and best practices. And the last thing I find important is that there's a concern out there that what generative AI produces isn't always 100% correct. And that's true, and it's absolutely fine.

So the way we're doing this is, if we're using AI in any of these steps to generate content and accelerate things, we're still using an expert for the last-mile validation and the last-mile bug fixing, those final changes. That ensures that when we run this process, the quality of the output, the quality of the migration, is exactly the same as if human experts had done it completely manually. However, at the end of the day, we've needed fewer experts for a lot less time, and that's the benefit of the acceleration from AI. So as we double down on accelerating Modernization Factory, what are we gonna be doing from here? Throughout the rest of this year, we're gonna double down on the pilots we're running.

Deep engagements with our customers, where we're experimenting together on how, for these different variances of applications and their composition, we can apply AI in different ways to accelerate them. We're learning a lot, and the customers are gonna learn a lot. As we go into the future, we expect those lessons learned to manifest in two ways. One, we expect to build out playbooks of best practices, templates, and starter toolkits that we can take into future projects with future customers, where the lessons have already been learned, accelerating their migrations: a flywheel effect as we go along.

Of course, we have the Relational Migrator product, and everything we're learning here is gonna be really important for feeding back into that product and driving its roadmap, so we can have accelerations like these available out of the box for every customer to benefit from automatically. So in summary, getting off relational is really hard, but we're now learning how to harness the power of AI to make the process much easier. So hopefully, that was useful. I'm now gonna hand over to our Chief Product Officer, Sahir Azam, who's gonna talk to you about how we establish ourselves as a trusted AI partner. Thank you.

Sahir Azam
Chief Product Officer, MongoDB

Good afternoon, everyone. Thank you for joining us. Hopefully, that gave a good glimpse of some of the exciting work we're doing on the front end of research and application of some of the technologies we all hear about in the news every day right now, but in a really pragmatic and practical way. So the last piece I wanted to talk about is how AI also presents a way for us to drive that standardization, on a faster path than we ever have before. You know, Michael walked you through those different journeys. And one of the things we're seeing is that, first and foremost, generative AI in particular has made this a board-level conversation. You know, we've had executives engage with us that we've never had access to before.

It's very much top-down, driven from organizations, which is actually quite interesting. You know, oftentimes this platform shift gets compared to when the cloud first came out and really started to launch. And when the cloud started, it was very bottoms-up. It was shadow IT; it was development teams going around central IT operations to get started, and eventually that value was evident, and then it became more of a C-level or executive-level drive. AI is a bit different. It's actually starting from the top, in terms of the opportunity to drive efficiency and amazing new customer experiences. But we are also hearing from our customers that they're struggling to formulate and understand what the correct strategy is, or even what the right use cases are to drive the best ROI. And as you all know, the ecosystem is moving quite quickly.

You know, I talked about a simplified version of our ecosystem of different technologies and tools out there in the generative AI stack. You see a lot of analysts and venture capital firms publishing this idea of an emerging AI stack. Everything is moving very, very quickly. You know, even within the venture community, most of the new funding is going to these AI-based technologies and how you empower organizations to use them. This, of course, is complicated for the average enterprise to figure out: okay, how do I get started, and where do I place bets that don't lock me into a particular approach, but give me flexibility? This extends not just to the stacks, but to the models themselves.

And just in the last few weeks, you know, Mistral launched an amazing new open-source model; then the next day, Meta released Llama 3, and everyone's buzzing about the fact that Llama 3 is now almost as good as, if not equal to, OpenAI's latest models. There's a lot of excitement around what GPT-5 might mean. So this race is happening very, very quickly, and we know there are billions of dollars of capital going in. But navigating which model is right for which use case is, of course, complicated. There isn't a standardized way to do this yet. And we're actually starting to see data come out around this as well: when executives were recently surveyed, 87% felt they weren't necessarily equipped to manage this transition to AI.

So on one hand, they know it can be crucially impactful in terms of driving efficiency, compelling new customer experiences, et cetera, and yet getting started and navigating this complexity is quite a challenge. But as we observe what enterprises need, and what the bleeding-edge companies that are starting to have success are saying, a few key themes emerge. One, there's general understanding that AI will, over time, apply to a variety of different use cases across large organizations. This could be everything from efficiency in a call center or customer support functions, where I think we're all starting to see some great case studies in the industry, to actually helping with creative life cycles.

You know, I had Adobe on the stage earlier today, with some of their tools and technologies for generating imagery and content, so there's really a lot of excitement there. To what we talked about with Novo Nordisk: being able to shrink the time to submit drugs for approval. There are all sorts of use cases here that are evolving, which in turn means there are different needs for different models. Various models are stronger at different use cases. Oftentimes, they need to be fine-tuned with the customer's data to be even sharper for a particular organization's application, and there are trade-offs right now, especially between latency and cost. There are smaller models that execute really fast, and there are large models that are super capable but have higher latency and higher cost.

Navigating all of that and mapping it to these use cases is something customers are still building understanding around. We're also seeing billions of dollars being invested by the hyperscalers, our three key partners, not only at the underlying infrastructure and compute layer, but in tooling, right? To make it easier for customers to get started. And for the first time, you're actually starting to see some divergence and differentiation between these cloud providers. So for a customer whose knowledge, information, and IP live in their data, the ability to leverage cloud services from multiple providers is even more important than in the early stages of multi-cloud that we've been talking about for the last couple of years.

And for the first time, enterprise data can truly be unlocked to power applications. What I mean by this is that the majority of today's applications leverage structured or semi-structured information that can fit in a database and be queried and reasoned about. But because of new approaches with AI, now you can build applications on top of unstructured information: text, obviously, with all these large language models we've seen, and now even multimodal models that can work with audio and video, and that's only going to increase over time. And yes, we all know that applications have had images and things like that for some time, but now it's really about the understanding contained within all this information, which can power an application like never before. So these four factors, in our eyes, really set up MongoDB quite well.

And some of this is luck; some of this is intention. So one, we have quite a bit of expertise. We moved really fast in applying generative AI, frankly, first for our own internal usage. We have technologies that our customer support and customer success agents use to get quick answers for customers, so the turnaround time in customer engagements is faster. Our support organization can now create summaries of an incident, a postmortem when something goes wrong, much faster than they ever could before. Our go-to-market teams have an internal Slack bot we call Coach GTM, sorry, all these acronyms, and basically, it allows them to find answers quickly. So if their customer is saying, "Hey, I want to deploy Atlas. Is it available in this particular region?" Boom! They don't have to go through documentation or enablement materials.

They can just find that answer super quickly. These are just a handful of our use cases, and now we can leverage that knowledge to help customers drive similar use cases in their organizations. The other area where we bring expertise is, frankly, our partners. We've been spending a lot of time, myself personally, Andrew, a bunch of folks in our organization, with the venture-backed companies that are leading the way in this ecosystem. You'll hear from some of them today, to really get an understanding of the latest of what's coming and what can be applied today versus what's more futuristic. We've also built broad partnerships. We're model-agnostic. You heard from Anthropic today on stage with Dev, but we also work with the models from the major cloud providers. Cohere is here.

So we wanna make sure our technology is completely open, to support that wide variety of use cases and the models that back them. We are, as you know, ahead of the market in cloud independence. We're in 117 regions today across AWS, Azure, and GCP. We're the only technology that can move data in an operational database and mix and match across all those 117 regions across cloud providers. So that independence to say, "I'm an organization. To power AI, I need my data to meet the customers where they are, where the models are, where the rest of the stack is, and to be able to move it fluidly," is a key advantage.

As you heard from Andrew, and a bit from Dev earlier today on stage, our data model, the foundation and core of the product, is architecturally built in a way that makes it very easy to manage heterogeneous data. And so this sets us up really well to become a thought leader and a partner to our customers at a very high level, and they're looking to us to be that trusted advisor. And that has a direct connection to becoming a standard. You know, as Michael mentioned earlier, the motion for an operational database in the modern era tends to be a very bottoms-up, developer-driven sort of motion.

So we tend to land with a specific application that has a sharp, acute pain, or developers that really wanna work in a modern way. We build up some momentum across multiple apps within that team, then start to spread across teams, and eventually, we get to a point, like those examples, where we have a standardization and we see an inflection point where we go from a few dozen applications, perhaps, to hundreds or thousands, as you heard from some of the customers earlier today. But this ability to position high in an account and get to those executive-level relationships faster is really compelling for us, and AI is a way to do that, to engage in a more top-down way than ever before as an operational database technology, fundamentally.

And as evidence of this, just two weeks ago, maybe three, I was in London, along with Dev and a few others from our team, and we pulled together an event that was really targeted at C-level executives across large enterprise. So finance, retail, we had some tech firms there as well, and government agencies. I think we had folks from 10 or 11 countries across Europe come together, and the goal was not to sell MongoDB. The goal was to bring together executives to talk about where they are in thinking about AI, the safety, the risks, the regulations around it, the ethics, what are the use cases that are deriving value, and get everyone in a room kinda talking about this more as business leaders, fundamentally, as opposed to just starting with the technology conversation.

I was at one of these tables, I'm somewhere in that picture maybe, and actually, it was interesting. I was sitting next to a business leader from one of the major global banks, and she actually has nothing to do with the MongoDB adoption in the organization. The technology teams, obviously, are large-scale MongoDB adopters, we've been in that account for a decade, but she was thinking about it entirely from a business perspective. How could she transform her organization? What use cases? We're getting access not only higher but broader with executives, and I think that's because of this ability to be this trusted partner. Now, obviously, we're not just letting this come to us, we're being very intentional in a variety of different ways to make sure we can capture this opportunity.

So as a technology company, no question, it starts with the product. As you know, we've made a bunch of announcements this week, but for the last 18 months, we've been investing aggressively across every facet of our portfolio. So whether that's fundamental optimizations in the core database, our vector store, et cetera, to make it more performant, more capable, and more cost-effective around these use cases, or a broad range of integrations with the various layers of this emerging AI stack, as I talked about earlier. And we've focused on identifying use cases that are a natural fit for our technology and where we can help customers. And obviously, generating code, managing code, you heard about modernization, is a great use case for where these technologies are today in their evolution. So this will obviously continue.

This is just a glimpse of some of the stuff we've been up to over the last year or so. We're also continuing to expand our partnerships. I would say the last 18 months were largely about breadth. How do we make sure that we give customers flexibility, no matter which emerging developer frameworks or stacks they're using, which models they're using? And we feel like we're in a good spot there, but we will always continue, as things evolve, to maintain that breadth. However, enterprises want a solution that's much more prescriptive. They want our knowledge, our backing, of the right stack to use for various use cases.

And so we announced this new program, the MongoDB AI Applications Program, MAAP, as we call it for short internally, and this is a combination of, first, technology, so Atlas, the core database, integrated and validated into the surrounding ecosystem with the key leading partners in each of those segments; reference architectures that we can all stand behind and that are proven in terms of their deployability; and of course, a set of services and knowledge with experts. And we really are looking at this in two ways. One, strategy development. This is more for organizations that may not even know which use case to go after first, so we bring in a broad-based view to say, "Hey, here's what we're seeing at other customers. Here's where we've had success, here's where our partners have seen value for other organizations.

Here's perhaps some use cases to show quick ROI and get started." Or we can get very hands-on, and there are times when customers know, this is the particular problem I want to solve. I wanna, you know, make it easier for my support staff to answer questions quickly for, you know, support calls. And in that case, we can actually get really hands-on and start prototyping applications and get started more on the technical level. So these are some of the ways we're getting engaged. This is a combination of MongoDB services, but also our boutique SI partners, who have specialization in AI, and they've already started delivering some projects.

So net-net, there are a variety of different lenses on what's happening in the ecosystem and within MongoDB, and we see them allowing us to become much more of an implicit standard in the market. We're excited about the potential that will have for our position, especially in some of the largest, most complex organizations in the world, that have, you know, thousands and thousands of applications. So, I'm gonna switch gears a little bit. Obviously, today, both in the keynotes and in this session, you're hearing a lot about ecosystem, a lot about technology partnerships. I mentioned we're out there learning as much as we can from the most interesting early-stage and later-stage companies that are doing amazing things.

To give you a glimpse of that, I want to invite three of our strategic partners, these are founders and executives who started these companies, to have a little bit of a discussion around what we see in the state of AI-driven applications. Come up, folks. Maybe just the middle three. Yeah. Yeah. Sounds good. All right, thank you for joining us in person. I know it's tough to get away from running and operating, you know, an early-stage company, and all the demands that it takes as an executive team, so really, really appreciate it. It'd be great for you all to first give a little bit about yourselves, your company, the mission of what you're focused on. Brandon, let's start with you at the end, and then we'll work this way.

Brandon Duderstadt
Co-founder and CEO, Nomic AI

Yeah. Hi, everyone. My name is Brandon Duderstadt. I'm the co-founder and CEO of Nomic AI. At Nomic, we're focused on building tools that help make AI accessible and explainable. Concretely, this manifests as three product offerings right now. We have Nomic Atlas, which is a tool that makes it easy to interact with and collaborate on massive, unstructured data sets in your web browser. We also have Nomic Embed, which is a model that allows you to represent unstructured data as a vector. It's how you get the vectors for vector search. And finally, we have GPT4All, which is an open source ecosystem of low-resource language models that enables you to run things like Llama 3 on your local laptop and other bespoke hardware.

Sahir Azam
Chief Product Officer, MongoDB

Awesome, Brandon. Lin, how about you?

Lin Qiao
Co-founder and CEO, Fireworks AI

Hey, everyone. Very nice to meet you all. I'm Lin, CEO and co-founder of Fireworks AI. We're a strategic partner to MongoDB, and it's a great honor for us to work together. Before starting Fireworks, I worked at Meta for many, many years together with my founding team. I was head of PyTorch, which is the dominant AI framework used by industry. When we talk about GenAI, all the models are PyTorch models these days. It took us five years to fully productionize AI infrastructure built around PyTorch, supporting all of Meta's product surface area using AI, all the way from ranking and recommendation, to search, to translation, speech synthesis, site integrity, and so on and so forth.

Our mission in starting Fireworks AI is to take the proprietary knowledge used by a hyperscaler and make it accessible to a wider variety of developers. And second, we want to heavily compress five years to production into five weeks, five days, or even one day. So our offering is a GenAI inference and fine-tuning service for all the developers and enterprises in the world. Those application requirements of real-time, low latency don't change whether they use GenAI or not. So we provide an extremely low-latency offering. And the cost of operating on top of GenAI is extremely high, and we significantly reduce that cost by an order of magnitude.

And last but not least, for everyone building on top of OpenAI, there's no differentiation in using the same fundamental technology, so we help our customers heavily customize their models using their proprietary data. With their data and their model, they have their moat. And of the usual trade-off among quality, low latency, and low cost, we offer all three. So in a very simplified way, you can think about us as a better version of OpenAI for enterprises.

Sahir Azam
Chief Product Officer, MongoDB

That's amazing. Jerry?

Jerry Liu
Co-founder and CEO, LlamaIndex

Great. Yeah. So first, I'm co-founder and CEO of a company called LlamaIndex, and at a very high level, our core mission is to enable your developer teams to build LLM applications over their own private sources of data. This includes unstructured data, semi-structured data, and structured data. You can basically pick and choose all the components within the LLM ecosystem, so language models through, you know, Fireworks, or embedding models through Nomic, and then also storage through MongoDB. And we allow you to build use cases like retrieval-augmented generation, which I'm sure some of you have heard about, or basically chatbots over your data, agentic workflows, and more.

So we're the leading framework for helping you build not just stuff that kind of works in a prototype, but actually production-grade retrieval-augmented generation that's free of hallucinations and can actually operate over your complex documents and more. We have both an open source framework that, you know, connects all these different components, as well as an enterprise service that helps specifically to clean and process your data, to create a production-grade data pipeline that complements the open-source framework.

Sahir Azam
Chief Product Officer, MongoDB

Very cool. So, as I mentioned a few minutes ago, one of the things that everyone's sort of trying to get a sense of is: What are the most interesting use cases? What are the best applications for this new technology, especially in an enterprise context? Jerry, we'll stick with you for a moment and then get a perspective from the others as well on what are the most compelling use cases. What are you seeing as you're on the front end of the ecosystem, working with some of the most cutting-edge applications?

Jerry Liu
Co-founder and CEO, LlamaIndex

Yeah, I mean, I think for us, a lot of the overall theme is around unlocking knowledge synthesis and extraction. So basically, given your unstructured data, oftentimes you have 1,000 PDFs, or 1 million PDFs, or billions of PDFs. And somehow, you know, you haven't really tapped into that yet, or a human really has to go in and read everything to give back an answer or synthesize a report. LLMs have the capability to take in all this data, as long as it's processed the right way, and actually give you insights from it. And really, the key trick is for your developer teams to figure out how you go from 1 billion PDFs to insights you can actually process.

It turns out it's actually a little bit tricky. A lot of the simple things that developers find in YouTube tutorials don't really do this very well. They might work over 10 PDFs, but not over a billion. And of course, besides PDFs, you have a bunch of other data sources, too. You have Excel files, maybe you use Salesforce, maybe you use Slack as well. So you have a lot of different data in different silos, and the trick is figuring out how you actually connect these different data sources to build even a very simple use case, like a chatbot over your data. That's probably the thing people start with, and people haven't done that super well quite yet, so there's a lot of room there.

Afterwards, you know, there are more advanced things, too, like coding assistants, things that can automate not just synthesis and extraction but can actually perform actions for you. And that's where you get into agentic territory.

Lin Qiao
Co-founder and CEO, Fireworks AI

Cool. Yeah. At Fireworks, we support a lot of developers and enterprises. I would say the motion is very interesting: it goes both bottoms-up and top-down. As Sahir just mentioned, the whole industry is extremely active in developer applications. I would bucketize that into three categories in an extremely simple way. One is the new chat experience, whether it's text-based or voice-activated. It can be all kinds of assistants. People are building legal assistants to help lawyers do case studies based on a vast body of public information. Or people are building educational assistants to help students, or people like me, learn foreign languages or other subjects.

People are building medical assistants to address the significant shortage of nurses and doctors in the medical system. They're also building customer-service automation to alleviate the expense of a human behind a call answering questions. There's a huge variety of assistants being built. Also, in the e-commerce world, the entry point of search, mostly keyword-based today, is changing toward chat. People are using natural language to interact with the search experience and get the best product out of e-commerce, whether it's a marketplace, ordering a meal, or buying something. That's transitioning a large portion of the industry. The second bucket is not generating content for chat, but generating other kinds of content. For example, as Jerry briefly mentioned, coding assistants.

We have been supporting all kinds of coding assistants, offering a completely different coding experience, whether it's supporting new programming languages, new coding guidance, or just making coding and code review much more efficient. There are other things too. For example, business workflows: you can imagine a workflow as a coding process, and it can be automated and generated. Business metrics can be generated through visualization as well. You can extract production metrics from production logs, and so on and so forth. That's a huge bucket. You can also generate tool calls: where the large language model has limited knowledge, it can call into other APIs to extract knowledge and get better answers. And the last bucket is not limited to text generation; it's multimodality.

The answers can be a blend of text, images, video, and other media. We have been supporting applications like PowerPoint generation, where text and images are mixed. You don't need to do the tedious work of bringing up a template; you just tell it what kind of PowerPoint you want generated for you, and then you can tinker on top of that. Super efficient. There's also a lot of brand marketing material that's a blend of text and high-quality images. There's product catalog cleansing, where you extract high-quality catalog information from images or an existing catalog. So I would say there's a whole slew of disruptive, innovative applications being developed on top of GenAI.

It's only a matter of time before the whole industry transitions massively to building on top of this.

Brandon Duderstadt
Co-founder and CEO, Nomic AI

Yeah, I think, you know, the phrase generative AI, I think, captures roughly half of what is going on here. It's wonderful that, you know, we can sit up here and talk about applications like the generation of content, be it, you know, code or natural language or images or what have you. But the fundamental innovation that I find myself focusing on increasingly is the fact that there's a new data primitive in town, and that's the vector. Up until very recently, computers have not had the ability to perform rich operations over unstructured content like text, images, videos, audio, et cetera, and that's the majority of content on the internet, that's the majority of the kind of content that's stored already in MongoDB. And the vector gives computers, for the first time, a data primitive that lets them perform rich operations over this.

What this is gonna enable is an entirely new class of applications that can manipulate unstructured content in ways that we don't even fully understand yet. At Nomic, we're starting to see this firsthand with our tool, Atlas. As I mentioned earlier, Nomic Atlas is a tool that lets you interact with and collaborate on massive unstructured data sets in your web browser. That is a tool that is only possible by manipulating vector representations of data, and we are seeing that tool have market penetration across a variety of verticals, be it pharma, consulting, defense, finance. All of these things are getting these massive unlocks from this new class of applications that are only possible because of the vector.
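The "rich operations" a vector primitive enables can be illustrated with a minimal sketch. This is not Nomic or MongoDB code, and the three-dimensional vectors below are hand-made stand-ins for real model embeddings (which typically have hundreds of dimensions); the point is only that semantically similar content maps to nearby vectors, which cosine similarity makes measurable.

```python
import math

def cosine_similarity(a, b):
    """Angle-based similarity between two embedding vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy stand-ins for model-produced embeddings of three sentences.
doc_cat    = [0.9, 0.1, 0.0]  # "a cat sat on the mat"
doc_kitten = [0.8, 0.2, 0.1]  # "a kitten rested on the rug"
doc_tax    = [0.0, 0.1, 0.9]  # "quarterly tax filing deadline"

# Semantically similar text yields nearby vectors (similarity near 1)...
print(cosine_similarity(doc_cat, doc_kitten))
# ...while unrelated text yields distant ones (similarity near 0).
print(cosine_similarity(doc_cat, doc_tax))
```

A vector search index does essentially this comparison at scale: it finds the stored vectors closest to a query vector, which is what lets computers operate over text, images, and audio they could not previously compare.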

Sahir Azam
Chief Product Officer, MongoDB

So how does all this change the role of a developer? I don't know if anyone wants to start, but, I mean, we talked about the stack changing earlier. You're obviously part of that stack in various ways, all of you, but how do you see things being different from a more traditional application development process?

Jerry Liu
Co-founder and CEO, LlamaIndex

I think one thing I'll actually say, and this is actually to your point during the opening remarks: companies really have this uncertainty, because they've realized the pace of GenAI is moving very quickly, and they're trying to figure out how to de-risk and understand and build these use cases. I think fundamentally, in order to do that and stay on top of the rapidly evolving landscape of GenAI, LLMs, and more, you have to empower your developer teams, and you have to empower them with the right tools. And when it comes to buy versus build, I'm obviously biased, but I have a strong preference toward build, especially building with the right infrastructure components.

So you abstract away the system complexities for your developers, and they can spend their time rapidly prototyping and productionizing any of the emerging use cases that are happening. And this includes RAG. This includes any sort of agentic, you know, knowledge assistants, code assistants, those types of things. But really, you wanna stay on top of whenever a new open source model comes out, a new embedding model comes out, GPT-5 comes out; you wanna be able to adapt to all these things as they evolve. Yeah.

Sahir Azam
Chief Product Officer, MongoDB

So that flexibility to be able to transition.

Brandon Duderstadt
Co-founder and CEO, Nomic AI

Yeah.

Sahir Azam
Chief Product Officer, MongoDB

Lin, what are you seeing?

Lin Qiao
Co-founder and CEO, Fireworks AI

I feel one big difference between app development before GenAI and app development on top of GenAI now is that GenAI models, especially large language models, are probabilistic. This is a huge difference, because before, application logic was encoded in code. It was more deterministic. You had full control of how the application should behave, and now you're building on top of a large language model. The model architecture is actually predictive. It's based on probability, layer by layer: based on the tokens it has seen, it predicts which token it should generate next with the highest probability. So this is very different, because it means you don't have full control of the content it's going to generate.
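The probabilistic, next-token behavior Lin describes can be sketched in a few lines. This is a toy illustration, not Fireworks code: the "logits" below are hypothetical raw scores a model might assign to candidate next tokens, and softmax turns them into the probability distribution the model samples from.

```python
import math

def softmax(logits):
    """Convert raw model scores into a probability distribution over next tokens."""
    m = max(logits.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

# Hypothetical scores for candidate next tokens after "The capital of France is".
logits = {"Paris": 6.0, "Lyon": 2.5, "banana": -1.0}

probs = softmax(logits)
# Greedy decoding picks the highest-probability token, but the output is still
# a distribution: every token, even "banana", has some nonzero sampling chance.
best = max(probs, key=probs.get)
print(best, round(probs[best], 3))  # prints: Paris 0.97
```

That nonzero tail is exactly why the output is never fully under the developer's control: a sampled decode can occasionally emit a low-probability token.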

And the other interesting limitation is that a large language model, although it's very knowledgeable, actually has limited knowledge. Its knowledge is limited by its training data, so it doesn't have knowledge beyond that. So if you ask a question, based on probability, it will have to give you an answer, and it's prone to hallucinate. I think Jerry touched on that also. Hallucination is not a good trait; sometimes it's extremely problematic. So the way a lot of app developers regain control, after realizing, "Oh, I'm actually losing control here," is by providing relevant context as a bounding box to the large language model, and the technology is called RAG.

And RAG is basically a way for developers to retrieve relevant information from all kinds of data already stored in their accessible storage layer, then formulate that and feed it into the large language model as context, so the model can answer the question in the relevant context. And that's where Fireworks partners closely with MongoDB, because a lot of operational data already exists in the MongoDB document store, and through Atlas Search it can efficiently and accurately retrieve the relevant information and feed it into the Fireworks large language model inference engine. That way, we provide the best quality to application developers. So I think using these tools to rapidly develop modern applications is the change that is happening now.

Brandon Duderstadt
Co-founder and CEO, Nomic AI

Yeah, I think, you know, the most obvious thing for me as a developer is I will never code without a copilot again. It has made me at least 10, if not 100 times more efficient, and I imagine for any developer in the world, that's the same story. And so the thing you have to ask yourself is, okay, if we have just 100 times more code or text or images in the world, what's the logical extension of that for applications? And so the first sort of idea becomes, as a developer, you know, I'm faced with a new class of problems and a new class of tools.

To deal with things like hallucinations, I'm gonna spend a lot less time writing each character of the code and a lot more time doing analytics over my data in terms of, you know, what's going into my models, what's coming out of my models, evaluating these pipelines. So I think that the role of the developer is going to shift away from the actual writing of the code to orchestrating more of these models that are generating massive amounts of unstructured data. As far as, you know, how that affects the world of MongoDB and databases, in a world where everyone is now 100x more efficient at creating data, they're gonna have to have a place to store it, and they're gonna have to have a place where they can efficiently operate over it.

I think the demand for data storage and efficient data access and operation is going to increase wildly over the next 10 years.

Sahir Azam
Chief Product Officer, MongoDB

So, Brandon, building on that point, something I want feedback from all of you on, and that would certainly be relevant to this audience, is: Why did you choose to partner with MongoDB? Obviously, there are other data technologies. We're all very excited that you're part of this new program, the MAAP program, that we're rolling out, but I'd love to hear some perspective from you, since you see the whole ecosystem as well. What gets you excited about what we're building together?

Brandon Duderstadt
Co-founder and CEO, Nomic AI

Yeah. I think one of the things that's really key about MongoDB, and that you mentioned earlier, is the fact that it's a place where a lot of this sort of rich, unstructured data already lives. It's a database that's been turned to by massive organizations to store the kind of data that's exactly what you need to, you know, operate in this new sort of generative AI world. And now, with the vector search offering, you're gonna enable people to build this fundamentally new class of applications by giving them that power to flexibly retrieve over both the vector representations and the actual data representations that they have. And so I think the document model basically is just incredibly well suited to this new world that we're living in.

Sahir Azam
Chief Product Officer, MongoDB

Awesome. We think so too. Lin, obviously, we got introduced to your team and started working together really quickly on getting some new solutions out. So I'd love to get your perspective as well.

Lin Qiao
Co-founder and CEO, Fireworks AI

Yeah, definitely. We're in this tidal wave of GenAI, and a lot of the time we focus on the technology itself, but the real main character here is actually the new innovation in applications. I believe there's a slew of disruptive applications coming out, either from startups or from incumbents transitioning to leverage GenAI. In order to drive this transformation, we wanted to partner with the company that has the best technology and is closest to developers, and that's MongoDB. And, of course, MongoDB already hosts a lot of operational data and provides an extremely flexible schema for application developers to evolve their products. Based on my knowledge, MongoDB has more than 500 million downloads. That's amazing. And more than 47,000 paying customers.

So that penetration of the ecosystem is very important for us to partner with and leverage, to help all these developers transition to building on top of GenAI. That's the fundamental reason we are working very closely together, and we're seeing a lot of common application developers and enterprise customers who can benefit from both technologies.

Jerry Liu
Co-founder and CEO, LlamaIndex

Yeah, and just to add one additional point, maybe a little bit orthogonal to the previous ones: MongoDB isn't just a vector database, right? You can store stuff in it like a key-value store. You can store structured, semi-structured, and unstructured data there. And I think, especially for developers choosing the components of this data stack to build applications on, you need that flexibility. When people find a YouTube tutorial to build some basic GenAI application, they overfit to really basic primitives, like only using vector search in a very constrained setting.

I think you really need a flexible storage solution, not just to store your unstructured documents, but also your operational data, and to figure out how to ETL that data from unstructured to semi-structured to structured. MongoDB gives you basically a unified storage layer for all of that, and I think it's uniquely positioned to do so. There are a lot of vector databases, but there aren't that many solutions that provide a unified storage layer.

Sahir Azam
Chief Product Officer, MongoDB

Yeah. Awesome. Well, we are very proud to work with all of you in terms of building out this new world and the partnership. So thank you so much for joining us and sharing a bit about your story and what you're seeing in the ecosystem. Really appreciate it.

Jerry Liu
Co-founder and CEO, LlamaIndex

Thank you.

Lin Qiao
Co-founder and CEO, Fireworks AI

Thank you.

Brandon Duderstadt
Co-founder and CEO, Nomic AI

Thanks. Really appreciate it.

Andrew Davidson
SVP Product, MongoDB

All right, I think with that, we are transitioning back to Michael, and we're going to hear from some of our actually amazing customers, and how people actually use our technology day-to-day.

Michael Gordon
CFO, MongoDB

Give us one minute here as we get everyone mic'd up. Yeah, please make yourself at home. I'm going to turn out so I can see you all a little better. I'll try not to fall over.

Michael Gordon
CFO, MongoDB

Almost ready. We're not only at max seating, we're at max microphones, so. Welcome.

Michael Gordon
CFO, MongoDB

All right, we're ready. Terrific. Thank you for letting us get everyone mic'd up here. So this next section is one of my favorites over the years. In this format, I always look forward to letting you hear directly from our customers. We obviously love to talk about our customers and our product, but there's no substitute for hearing directly from them. So thank you all for joining us. Maybe I'll just, by way of introduction, let you all introduce yourselves. We're going to try and do this in as informal and interactive a fashion as possible, so you can get a bunch of flavor and we cover as much ground as possible.

Maybe just quickly, you know, introduce yourself, your organization, and kind of key IT priorities, just to kind of level set with the audience.

Brian Hanks
VP Chief Engineer, Anywhere Real Estate

Sure.

Michael Gordon
CFO, MongoDB

Brian, you can go first.

Brian Hanks
VP Chief Engineer, Anywhere Real Estate

So my name is Brian Hanks. I'm the Vice President, Chief Engineer at Anywhere Real Estate. You may not be familiar with Anywhere Real Estate, but you probably are familiar with our brands. So there's Better Homes and Gardens, Century 21, Coldwell Banker, Corcoran, ERA, and Sotheby's, as well as a title company, a relocation company, and you know, several other entities. You know, I guess our priorities or my priorities are to modernize the tech stack, move away from being sort of a siloed organization into more of a unified operating model. And ultimately, I'm kind of passionate about, you know, building these modern apps in a way that makes them efficient and effective, and that's where MongoDB plays a good role.

Michael Gordon
CFO, MongoDB

Thanks.

Ha Hoang
Cloud Engineering Lead, UKG

Great.

Michael Gordon
CFO, MongoDB

Ha.

Ha Hoang
Cloud Engineering Lead, UKG

Hi, everyone. My name is Ha Hoang, and I lead cloud engineering infrastructure at UKG. UKG is a workforce management software company, the result of the merger of Kronos and Ultimate Software. One of our key priorities is around modernization and resiliency. As you can imagine, with two large organizations coming together, there's a ton of overlap in tech and tooling, so our modernization efforts are focused on bringing the tech stack together, especially at the suite level. We currently have a number of technologies at the data layer, which includes Mongo, and we have a mix of relational and non-relational technology as well, like SQL, MySQL, and Elasticsearch, just to name a few. So our focus is really around modernization and standardization.

Michael Gordon
CFO, MongoDB

Yann?

Yann Varendonck
VP, ZF Group

Hi. My name is Yann Varendonck. I'm the Vice President of SCALAR at ZF Group. If you do not know ZF, it is TRW in North America. I see already a lot of nodding that that's correct. So I'm running the digital unit at ZF. And what are we doing? We are a SaaS business, and we provide digital tools to fleets so they can run themselves better. Why are we doing this at ZF? Because we understand the braking system, the steering system, the autonomous systems so well that we turned that into a fleet management system as well. We mainly operate in Europe and in India, and we are now starting with our platform in North America as well.

Michael Gordon
CFO, MongoDB

Yeah.

Kevin O'Dell
Director of Engineering, Toyota Connected North America

Very cool. My name is Kevin O'Dell. I'm a Director of Engineering at Toyota Connected North America. Toyota Connected is a Toyota company, and we're a smaller firm that really focuses on innovation. Our real focus is data science, AI, and advanced software development, really to accelerate Toyota in the digital space. And our focus is on the connected vehicle. So all the data that comes off of a vehicle, all the different data points, the cameras and sensors and all that kind of stuff, our organization is the one that actually takes that data.

The virtual assistant that's in the newer Toyota and Lexus vehicles, where if you say, "Hey, Toyota" or "Hey, Lexus," something responds back, is actually something that we built at our organization. The digital UI cockpit experience that you get from the multimedia system in the head unit of your vehicle is something that our organization designed. And then the area that I actually lead at Toyota Connected is, if you're ever in an accident, and an agent calls into your vehicle and says: "Hey, are you okay? Can I help you?

Can I get you 911 services?" Or if you hit the SOS button in the vehicle and need some help, or your vehicle is stolen, or you need roadside assistance, any of those events that happen to you on the road, my team and my services and what we built actually does all that work, or supports all of that. And then why we're here is, underneath all that, we use Mongo as a reliable data store for us to make sure that we can respond to our customers.

Yelena Stekel
Head of Public Cloud Data, Citi

Hi, guys. My name is Yelena Stekel. I'm the head of Public Cloud Data at Citi. I'm sure all of you know Citi, a major financial firm with 200 years of history, supporting customers across investment banking, traditional banking, you name it. In terms of our priorities, we are trying to take Citi to the public cloud at scale. So we are focusing on all the strategic partnerships in order to build scalable ways to take Citi to the public cloud. We're building guardrails, we're building self-service and automation, in order to allow teams, at the scale of Citi, to be able to leverage cloud.

Michael Gordon
CFO, MongoDB

Terrific. Well, again, thank you all for joining us. So we got a little bit of context of the breadth and variety. Maybe the best way to just dive in is just talk about how you're using MongoDB, use cases, and we can just kind of have a conversation from there.

Brian Hanks
VP Chief Engineer, Anywhere Real Estate

Sure. I guess I'll go ahead and start.

Michael Gordon
CFO, MongoDB

Yeah, we can mix it up. Sure. Yeah.

Brian Hanks
VP Chief Engineer, Anywhere Real Estate

Yeah. So we use MongoDB primarily for what I would call our strategic API build. We've been going through the process of creating enterprise capabilities, and those capabilities are sometimes associated with entities, sometimes with actual business processes. And almost all of those capabilities are using MongoDB as their primary data store. We're also using Mongo for some of our transactional systems. For example, we have a system called Listing Concierge, and that system allows agents and brokers to rapidly build marketing materials for listings. It uses Mongo as its operational data store. It uses Mongo for basically everything. And we are also beginning to branch into using Mongo for our search solutions.

So our real estate and property search is powered by MongoDB Atlas Search, and we're in the process of starting to look at vector search for some of our use cases.

Ha Hoang
Cloud Engineering Lead, UKG

Great.

Michael Gordon
CFO, MongoDB

Who wants to jump in next?

Ha Hoang
Cloud Engineering Lead, UKG

I, I-

Michael Gordon
CFO, MongoDB

Yeah, go, go for it, Ha.

Ha Hoang
Cloud Engineering Lead, UKG

Yeah. So most of UKG's database management systems use Mongo for online transaction processing, and we use it for some data analytics as well. One of the other use cases is around creating microservices, and one of the microservices that we have migrated off of the monolith is a payment service that runs payroll. We now have a number of other microservices that run behind it. And then specifically in our Pro product, we use it for our two main modules, which are recruiting and onboarding.

Yann Varendonck
VP, ZF Group

Maybe I can add on the monolith. We have been a MongoDB customer for more than 10 years, and we have been in the space of fleet management and connectivity solutions for more than 35 years. So we have seen the change in technology over the last years as well. It started with a server at the fleet, and then it moved to a centralized server somewhere. It was in Dublin, I think. And then we had MongoDB, and it was a monolith, as you had as well. And we shifted three or four years ago to a new technology, which is microservices in the cloud. It fits perfectly with our culture and how we want to work. We want to make sure that the DevOps teams can own everything end to end.

Then we shifted, together with MongoDB, to the new technology of Atlas. For us, it was the most logical choice to shift from on-prem to more of a SaaS service. So today, MongoDB is our default for databases. It's just our default. Whatever we do, every new service that we make, it is our default, and that's especially with the new features like time series, which is very important for us for sensor data, because we are reading data from the braking system, data from the steering system, everything, from every sort of sensor, and this is what we are using. So MongoDB is our default in everything that we do today.
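The sensor-data pattern Yann describes maps to MongoDB time series collections. The sketch below, in Python, only builds the collection options and a sample reading as plain dictionaries; the collection name, field names, and values are hypothetical, and the commented-out calls assume pymongo and a reachable cluster.

```python
from datetime import datetime, timezone

# Options for a time series collection, as accepted by
# db.create_collection("brake_telemetry", timeseries=ts_options).
ts_options = {
    "timeField": "ts",          # each document must carry this timestamp
    "metaField": "sensor",      # rarely-changing metadata, used for bucketing
    "granularity": "seconds",   # hint for how densely readings arrive
}

# One sensor reading, shaped for the options above (illustrative fields).
reading = {
    "ts": datetime(2024, 5, 2, 12, 0, tzinfo=timezone.utc),
    "sensor": {"vehicle_id": "truck-0042", "system": "braking"},
    "pad_wear_pct": 37.5,
    "temp_c": 212.0,
}

# With a live cluster this would be roughly:
#   client = pymongo.MongoClient(uri)
#   db = client["fleet"]
#   db.create_collection("brake_telemetry", timeseries=ts_options)
#   db["brake_telemetry"].insert_one(reading)
```

Because the metadata lives under one `metaField`, the server can bucket readings per vehicle and system, which is what makes high-volume sensor ingest cheap to store and query.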

Michael Gordon
CFO, MongoDB

Yann, for you, I know everyone's at different spots.

Yann Varendonck
VP, ZF Group

Yeah.

Michael Gordon
CFO, MongoDB

For you, that's Atlas, right? 'Cause you're-

Yann Varendonck
VP, ZF Group

Yeah, correct, correct. That's Atlas.

Michael Gordon
CFO, MongoDB

Yeah. Yeah, maybe Kevin or Yelena?

Kevin O'Dell
Director of Engineering, Toyota Connected North America

Yeah, I'll just say a lot of what he said. Very, very similar. But I will say, our system's fairly new. We actually insourced all of our safety system, our telematics service platform. And when we first built it, starting in 2016, 2017, we actually didn't use Mongo. We used a different provider. But once we got to production, we ran into a whole bunch of issues. And so we made a decision to convert to Mongo, which we look back on, and, not just 'cause I'm here, it was a very, very good decision. We needed something that was ultra-reliable, something that we really don't have to worry about. And our team is a pretty slimmed-down team.

I don't have any real data specialists, no DBAs, nothing like that. The developers are able to use Mongo and do everything they need to do. And we've got about 200 microservices that run our platform, multiple databases underneath that, and it's anything from the telematics data that's coming from the vehicle to say, "Hey, I'm in an accident," to our call center system about, "Hey, I'm talking to somebody," right? Or sending messages out wherever, right? So we've got all of that different data everywhere, and we tell our folks, everything we build has to be up 99.99% of the time, because I can't have anybody get in a car accident and we're not there.

And so we have that expectation for Mongo to always be there for our customers because it's such a critical thing that we support.

Yelena Stekel
Head of Public Cloud Data, Citi

For Citi, we've been partnering with MongoDB for many, many years, and we see growth year-over-year. At this time, I would say the vast majority of our usage of MongoDB is on-premises. And, me being the head of Public Cloud Data, I'm obviously focusing on Atlas quite a bit. And it's going to be a transition, right? We are going to see some cloud-native use cases that go directly to MongoDB Atlas. Some of them will be migration efforts from on-prem to the cloud, and some of them will inevitably have to stay on-premises, right?

Because, for example, it may not make sense for them to migrate off the Enterprise Advanced version, or there is a lot of complexity in the legacy system to actually rebuild it. And of course, the hybrid applications, because, Citi being a global firm, there are lots of country regulations we have to abide by. So you may not be able to move certain data to the cloud, for instance. And you wouldn't want to use MongoDB on-premises and a completely different tech stack in the public cloud. So I envision all flavors of that at Citi over time, but I would imagine, with the years, the percentage of our usage of Atlas will shift and increase.

Michael Gordon
CFO, MongoDB

Terrific. Thank you. And we've touched on a couple different parts, but maybe, you know, you can each share a little bit of views and sort of the strengths of MongoDB, sort of why you picked it. Obviously, some of it's come across in the conversation, but I think it'd be helpful for people to hear a little about your thought process. So maybe we'll start, you know, somewhere, Yelena, if you want to start us off-

Yelena Stekel
Head of Public Cloud Data, Citi

Yeah.

Michael Gordon
CFO, MongoDB

... or Kevin, or, yeah.

Yelena Stekel
Head of Public Cloud Data, Citi

I would love to start.

Michael Gordon
CFO, MongoDB

Yeah.

Yelena Stekel
Head of Public Cloud Data, Citi

Yeah, where do I start? So one thing that we love about MongoDB Atlas is that it's available across all major clouds. That's huge for us. The ability to have an exit strategy, without spending a lot of time building a solution to actually exit and move your data to another cloud provider, is huge. With MongoDB Atlas, you can just set up your cluster so that it spans multiple cloud providers, and you can obviously do it as part of your migration. There are certain use cases I can think of in the future where you may want to have a node that lives in another cloud provider, either for business continuity reasons, or because you want to take advantage of some unique capability of that cloud provider and just reference the data in that cloud.

So there are lots of permutations and combinations I can think of. So I think that's one of the major benefits. Also the elasticity, of course: being able to horizontally scale and vertically scale, that's huge for us. And security. Being highly regulated, the fact that MongoDB Atlas has queryable encryption and client-side encryption, those are key features that really set MongoDB apart.
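The multi-cloud topology Yelena describes is declared in the cluster definition itself. As a rough sketch, here is the kind of shape the Atlas Advanced Clusters API accepts, expressed as a Python dictionary; the field names follow my reading of that API, and the cluster name, regions, and instance sizes are purely illustrative, so treat this as a sketch rather than a definitive payload.

```python
# Sketch of a multi-cloud replica set: two electable nodes on AWS,
# one on Azure, so the cluster can survive a full cloud-provider outage
# (three electable nodes total, an odd number, so elections can succeed).
cluster_spec = {
    "name": "multi-cloud-demo",
    "clusterType": "REPLICASET",
    "replicationSpecs": [
        {
            "regionConfigs": [
                {
                    "providerName": "AWS",
                    "regionName": "US_EAST_1",
                    "priority": 7,  # preferred region for the primary
                    "electableSpecs": {"instanceSize": "M30", "nodeCount": 2},
                },
                {
                    "providerName": "AZURE",
                    "regionName": "US_EAST_2",
                    "priority": 6,
                    "electableSpecs": {"instanceSize": "M30", "nodeCount": 1},
                },
            ]
        }
    ],
}

# Which clouds this one logical cluster spans.
providers = {
    rc["providerName"]
    for spec in cluster_spec["replicationSpecs"]
    for rc in spec["regionConfigs"]
}
print(providers)
```

The point of the example is that "exit strategy" becomes a data change, not a rebuild: shifting weight from one provider to another is an edit to `regionConfigs` rather than a migration project.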

Michael Gordon
CFO, MongoDB

Terrific. And then just out of curiosity, anyone else drawn to the multi-cloud aspect? Is that important for people?

Kevin O'Dell
Director of Engineering, Toyota Connected North America

I've got a little something there, yeah.

Michael Gordon
CFO, MongoDB

Yeah.

Kevin O'Dell
Director of Engineering, Toyota Connected North America

Multi-cloud was, I'd say, almost important to us. We actually did a big migration from Azure to AWS at Toyota, and it was a year-long effort for us to do this and make sure we did it right. We had all these steps of what we had to do, and right in the middle was: we need to migrate our database. We're using Atlas. We were in Azure, but we needed to be in AWS, and I will say that was the easiest and most boring part of the four-month implementation: clicking a button and, okay, we're in AWS now. So the ability to do that was just very good.

But then also just the elasticity and being able to go across regions with our type of solution to where we have to be up all the time is very beneficial.

Michael Gordon
CFO, MongoDB

Did you have a multi-cloud?

Brian Hanks
VP Chief Engineer, Anywhere Real Estate

Yeah.

Michael Gordon
CFO, MongoDB

Brian?

Brian Hanks
VP Chief Engineer, Anywhere Real Estate

We use both Azure and AWS. Because we were a business that was somewhat siloed in the past, some of our businesses are very attached to Azure, and most of our new stuff is AWS, but we have both.

Michael Gordon
CFO, MongoDB

Mm-hmm.

Yann Varendonck
VP, ZF Group

Yeah, we are, we are not yet in mult-

Michael Gordon
CFO, MongoDB

Yeah.

Yann Varendonck
VP, ZF Group

in multi-cloud, but we are on-prem and in the cloud. So we have an older MongoDB system on-prem, and then MongoDB Atlas in the cloud as well. That's what we're using.

Michael Gordon
CFO, MongoDB

And then your thoughts on strengths?

Yann Varendonck
VP, ZF Group

On the strengths, yeah.

Michael Gordon
CFO, MongoDB

Reasons you picked MongoDB.

Yann Varendonck
VP, ZF Group

Apart from the feature richness that I mentioned already, one of the key things that I really like about MongoDB, and I'm not a technical person, so I really see it more in how my people are working, is the customer centricity that MongoDB has. And it's not customer centricity focused, for example, on me, because I have to pay the invoice. It's really on the developers. They really listen to the developers, and that's what I really like. I knew of MongoDB, but MongoDB is already a culture in my company. It's like, oh, yeah, we love them. That's how it is. So they were really pushing me to come over here, so that I really start to understand what it is.

And you see that with other tech companies: they focus so hard on, let's say, the decision makers. But we have the culture that, no, no, no, we would like our developers to have the tools that they really believe in, and for us, that's clearly MongoDB.

Michael Gordon
CFO, MongoDB

Mm-hmm.

Ha Hoang
Cloud Engineering Lead, UKG

Yeah, similar. Our developers, they love the JSON, right? The ability to store JSON directly into the database, versus having to deconstruct it into SQL tables. And then they can also use arrays. We're not in multi-cloud yet-

Michael Gordon
CFO, MongoDB

Mm.

Ha Hoang
Cloud Engineering Lead, UKG

... but it will be important for us later. But I think some other features are around being able to make schema changes without having downtime, right, unlike SQL. And then, lastly, there's just the flexibility of the function calls: querying through function calls versus through SQL, which, I think, is just another language layer that doesn't add a ton of value.
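Ha's points about JSON documents and arrays can be sketched concretely. This is plain Python with no database connection: the two documents and the tiny matcher are hypothetical and only illustrate the semantics, namely that differently-shaped documents live side by side in one collection, and that filtering a scalar value against an array field matches when any element matches, which mirrors how MongoDB evaluates such equality filters.

```python
# Two differently-shaped documents that could sit in the same collection;
# adding a field like "certifications" needs no migration or ALTER TABLE.
employee_a = {"name": "Ada", "skills": ["payroll", "scheduling"]}
employee_b = {"name": "Grace", "skills": ["recruiting"], "certifications": ["SHRM"]}

def matches(doc, field, value):
    """Minimal stand-in for MongoDB's equality semantics: a scalar filter
    matches a scalar field directly, or any element of an array field."""
    v = doc.get(field)
    if isinstance(v, list):
        return value in v
    return v == value

collection = [employee_a, employee_b]
hits = [d["name"] for d in collection if matches(d, "skills", "payroll")]
print(hits)  # -> ['Ada']
```

With a real driver, the equivalent filter is simply `{"skills": "payroll"}`: the array semantics are built in, which is a large part of why developers find document queries more direct than joins over normalized tables.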

Michael Gordon
CFO, MongoDB

Are you running in a hybrid world?

Ha Hoang
Cloud Engineering Lead, UKG

We're also hybrid.

Michael Gordon
CFO, MongoDB

Just to focus. Yeah, just to... Yeah.

Ha Hoang
Cloud Engineering Lead, UKG

Correct. So hybrid. We also have a number of monoliths as well, and so a lot of that is still hybrid and kind of in our on-prem-

Michael Gordon
CFO, MongoDB

Yep.

Ha Hoang
Cloud Engineering Lead, UKG

Data center. Yep.

Michael Gordon
CFO, MongoDB

Yep. Okay, so I'll ask all of you to put on your future-prognosticating hats here. And don't worry, I read a safe harbor, so it's all fine. Don't worry about your forecasts. So as you look ahead and you think about the ways in which you might leverage MongoDB, the ways in which you might deploy MongoDB, some of the features of MongoDB that you haven't used yet or are looking to use, maybe walk me through that. Let's have a conversation about that. Who wants to go first?

Ha Hoang
Cloud Engineering Lead, UKG

I guess I can-

Michael Gordon
CFO, MongoDB

Yeah, start us off.

Ha Hoang
Cloud Engineering Lead, UKG

Yeah, yeah. So, for us, frontline workers are a key component, and so we're certainly looking at vector search as one of the key areas. Our internal RAG team is already looking at that, and we like the fact that Mongo can help us scale, and that we already have an existing relationship that can grow into that usage for our vector search. So that's definitely one area.
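The RAG pattern Ha's team is evaluating typically has three steps: embed the user's question, retrieve the nearest documents from a vector index, and assemble a grounded prompt. The sketch below is pure Python with stubbed-out embedding and retrieval; in a real system those stubs would call an embedding model and a vector search such as Atlas Vector Search, and every function name and document here is hypothetical.

```python
# Stub embedding: a real system would call an embedding model here.
def embed(text: str) -> list[float]:
    return [float(len(w)) for w in text.split()][:4]

# Stub knowledge base: a real system would hold these in a collection
# with a precomputed embedding per document.
KNOWLEDGE = [
    {"text": "Overtime must be approved by a shift manager.", "topic": "overtime"},
    {"text": "Shift swaps are requested in the scheduling module.", "topic": "scheduling"},
]

def retrieve(query_vector, k=1):
    # Placeholder ranking; stands in for a nearest-neighbour vector search.
    return KNOWLEDGE[:k]

def build_prompt(question: str) -> str:
    docs = retrieve(embed(question))
    context = "\n".join(d["text"] for d in docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("Who approves overtime?")
print(prompt)
```

The design point is that the model only sees retrieved text, so answers for frontline workers stay grounded in the organization's own policies rather than the model's general knowledge.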

Michael Gordon
CFO, MongoDB

Mm.

Ha Hoang
Cloud Engineering Lead, UKG

Yep.

Michael Gordon
CFO, MongoDB

Anyone using vector search already?

Ha Hoang
Cloud Engineering Lead, UKG

Yes.

Michael Gordon
CFO, MongoDB

Proof of concept level?

Yann Varendonck
VP, ZF Group

Mm-hmm.

Yelena Stekel
Head of Public Cloud Data, Citi

Yeah, same here. There's just so much, you know, again, like, Citi is huge, so we're hearing so much need. Everybody's asking, "When are you going to make it available?" Because at our organization, in order to make a capability available, we have to go through certain governance process. So there is just so much interest in vector search capabilities, for sure. That's what I was going to say as well.

Michael Gordon
CFO, MongoDB

Kevin or Yann?

Yann Varendonck
VP, ZF Group

Yeah, I'm not sure if we use it, but I think we use it.

Michael Gordon
CFO, MongoDB

Yeah, yeah.

Yann Varendonck
VP, ZF Group

I'm not that close to the

Michael Gordon
CFO, MongoDB

Yeah, yeah.

Yann Varendonck
VP, ZF Group

development, but I'm pretty sure.

Michael Gordon
CFO, MongoDB

And if you think about things-

Yann Varendonck
VP, ZF Group

Yeah.

Michael Gordon
CFO, MongoDB

... when you look forward, what's your outlook?

Yann Varendonck
VP, ZF Group

We are discovering the edge as well. Today, you have the time series, you have the geo index. It's all because we are so focused on the data and the vehicle, and our goal is to do much more with the vehicle. That's our goal, with the mixed fleet. Whether it's coming from Toyota or from somewhere else, it doesn't matter for us. We're really in the commercial vehicle space. That's where we are. We believe very strongly in the edge as well, and that's something we are going to explore.

Michael Gordon
CFO, MongoDB

Mm-hmm. And, Kevin, when you look out into the future, what do you see?

Kevin O'Dell
Director of Engineering, Toyota Connected North America

We've actually got some of our developers investigating some of the time series and some of the geo capabilities as well. You know, obviously, when you start thinking out a couple of years, you're getting into the AI type of talk, right?

Michael Gordon
CFO, MongoDB

Mm-hmm.

Kevin O'Dell
Director of Engineering, Toyota Connected North America

Definitely top of mind for us, everything with, you know, automotive.

Brian Hanks
VP Chief Engineer, Anywhere Real Estate

Yeah, we're gonna continue expanding Mongo for basically everything we build new. We're using a lot of Atlas Search, and that will continue to expand. But the vector piece is the big piece for us. We have about three or four different proof-of-concept-level things that we're working on right now, and I hope that we'll have some stuff around natural language property search and enhanced property suggestions, so that it's easier to actually get a potential buyer attached to the right property.
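A natural language property search like the one Brian describes would typically run as an Atlas Vector Search aggregation stage. The snippet below only constructs the pipeline as plain Python dictionaries; the index name, field path, and embedding values are hypothetical, and on a live cluster the pipeline would be passed to `collection.aggregate(...)` via pymongo.

```python
# Embedding of the buyer's query, e.g. "sunny two-bedroom near a park".
# In practice this vector comes from an embedding model; here it is fake.
query_vector = [0.12, -0.07, 0.33, 0.91]

pipeline = [
    {
        "$vectorSearch": {
            "index": "listing_vectors",       # hypothetical index name
            "path": "description_embedding",  # field holding listing embeddings
            "queryVector": query_vector,
            "numCandidates": 200,             # breadth of the approximate search
            "limit": 10,                      # results returned to the app
        }
    },
    # Keep only what the UI needs, plus the relevance score.
    {"$project": {"address": 1, "price": 1,
                  "score": {"$meta": "vectorSearchScore"}}},
]

# With a live cluster: results = db["listings"].aggregate(pipeline)
print(pipeline[0]["$vectorSearch"]["limit"])
```

Because this is an ordinary aggregation stage, it can be combined with metadata filters (price range, location) in the same query, which is what makes "intuitive" search feel integrated rather than bolted on.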

Michael Gordon
CFO, MongoDB

Mm.

Brian Hanks
VP Chief Engineer, Anywhere Real Estate

And ultimately, I'm even looking at vector from the perspective that it doesn't matter what type of application it is, whether it's a music app or a social media app or whatever. There are apps that stand out from others, right? And if you look at those apps, there's generally something about them. There's a level of intuition that you get from using the app that you wouldn't get from a typical search. So when I think of vector search, it goes beyond search. It's: how do I start building an app that actually helps the user-

Michael Gordon
CFO, MongoDB

Mm-hmm.

Brian Hanks
VP Chief Engineer, Anywhere Real Estate

... learn something that they wouldn't have learned otherwise?

Michael Gordon
CFO, MongoDB

Terrific. Then maybe just to bring us home here, we'll touch on AI. It would seem like I would not be doing my job if I didn't at least raise the topic. Although we're going to exempt you, Yelena, given Citi's compliance requirements. But maybe we can just talk a little bit about AI strategy, how you're thinking about things, what AI-powered applications you're looking at, and how you envision things unfolding over both the near and the long term, recognizing that it's early on. So, yeah, Yann, start us off.

Yann Varendonck
VP, ZF Group

Yeah, I can start. Maybe it's good that I explain which problem we are trying to solve.

Michael Gordon
CFO, MongoDB

Yeah, great context.

Yann Varendonck
VP, ZF Group

We provide solutions to fleets, and a fleet has a number of vehicles. Each has to go from A to B, has to stop over there, and all the rest. And in the office, there are dispatchers, and they have to make decisions the whole time. A dispatcher, and I'm giving now the context of Europe, can handle, depending on how complex it is, on a heavy commercial vehicle fleet, 8-12 assets. And that's kind of the limit for a dispatcher to do it very well. And we are thinking about, hey, we want to scale this. We don't want the dispatcher to become a limiting factor in the operations of a complete fleet, right? We have fleets of 8,000 trucks, 10,000 trucks, but also of 10 and 20 trucks.

So what we do is we provide a solution to optimize that, but we also provide a solution to understand if the vehicle is still healthy. But the dispatcher is getting so much data live, he's getting signals, that he sometimes does not know what to do. And this is exactly what we're trying to discover with MongoDB: not just on the historical data, which is about the machine learning models, but more on the live data. Having the live data, all the notifications, all the alarms that are coming in, and having an AI model, so it can start to assist the dispatcher. That's what we're building, step by step, in such a way that we can scale the dispatcher and, at a certain moment, automate everything. That's what we're trying with AI.

Michael Gordon
CFO, MongoDB

Great. Kevin or-

Kevin O'Dell
Director of Engineering, Toyota Connected North America

Yeah.

Michael Gordon
CFO, MongoDB

Kevin, go for it.

Kevin O'Dell
Director of Engineering, Toyota Connected North America

I can jump in. Yeah. So if you think about it from a Toyota perspective, you can start all the way from manufacturing and the things you can do while you're manufacturing a car from an AI perspective, even if it's just fixing the machines that break when they're making the car, right? So there are things that we're doing and we're talking about, like, how do we automate that, or how do we use AI to help with that? All the way up to the dealer, and helping the dealers get leads and communicate with their customers.

But then where we really come in is after the sale or after the purchase of the vehicle, and how do we use AI in a customer-first way to make owning a Toyota or a Lexus more enjoyable for the customer. And so there are things that we talk about there. We've already got an automated assistant in the vehicle, but how can we make it smarter? How can we make it more predictive about what the consumer or the driver wants to do? In the safety space, something that we are tangibly doing right now is, we've got about 300 safety agents that are waiting for someone to call them. They've always got to be ready.

And we get calls about subscriptions, kidnappings, suicides, accidents, a wide range of calls that we have to handle. And so we're trying to use AI to figure out how we help our agents in those situations, whether it's someone just asking about a subscription or someone saying, "Hey, my mother with dementia just took my vehicle." How can we use our past experience and our knowledge bases to proactively help agents right away, or when they're talking to law enforcement? There's a lot you can do there in the call center environment, so we're tangibly looking at that. Obviously, Mongo plays a piece in that. We have a lot of our data, our call recordings, all that kind of stuff, all accessible with Mongo.

So being able to keep that stuff close is something that's essentially on our roadmap as we're talking.

Dev Ittycheria
CEO, MongoDB

Great. Before I go to Ha, Yelena, I don't know if there's anything you wanted to add personally, maybe not from a Citi strategy standpoint, but just your own views on AI, or I can jump to Ha, whatever's best.

Yelena Stekel
Head of Public Cloud Data, Citi

Yeah, I mean, I would just say that there is a huge focus at Citi on AI, right? Because we see that as something that will help our transformation, and we want to be at the forefront of technology. So there's a lot of focus. We are looking at AI as something that can help the developers, not replace the developers, but actually help them to be more productive. We are looking at developer tooling quite a bit because that's the low-hanging fruit. But of course, there are lots of use cases across the board that are being identified, and, needless to say, there's a lot of interest for sure. But you have to do it in a careful, controlled way, so that's a big area of focus.

Michael Gordon
CFO, MongoDB

Perfect. Thank you. Ha?

Ha Hoang
Cloud Engineering Lead, UKG

Yeah. So for UKG, our focus is frontline workers, and so how do we make them more productive, whether it's through scheduling or reporting? And so we're planning to embed GenAI into a lot of the end-user-facing features, to help them be more productive.

Dev Ittycheria
CEO, MongoDB

Great. Brian, you'd mentioned a POC.

Brian Hanks
VP Chief Engineer, Anywhere Real Estate

Oh, yeah. Well, actually, I have stuff in production already.

Michael Gordon
CFO, MongoDB

Okay.

Brian Hanks
VP Chief Engineer, Anywhere Real Estate

So I have three-

Michael Gordon
CFO, MongoDB

Walk us through those.

Brian Hanks
VP Chief Engineer, Anywhere Real Estate

I have three different use cases in production. I have what I'll call a summarization use case: we have a relocation business, and there are multiple types of communications that come in over phone calls and text messages and emails. Being able to take all that information, send it into a model, generate a summary report, and send it out to the person who's relocating, that was a multi-hour process. Now it's down to minutes.

We also have, within our marketing and advertising product, AI models writing listing descriptions. And we are actually just rolling out today... we have added a-

Michael Gordon
CFO, MongoDB

Today?

Brian Hanks
VP Chief Engineer, Anywhere Real Estate

Yeah. We've rolled out another feature today, and this feature is even more exciting to me, because what we're doing is we're using an Anthropic model, and we're feeding listing photos into the model, and we're generating captions and detailed attributes. The level of attribution on the photos is such that it can enable search. Instead of just being like, that's a front door, it'll tell you that's a metal front door with a grate. It's very, very detailed attribution.

Michael Gordon
CFO, MongoDB

Yeah.

Brian Hanks
VP Chief Engineer, Anywhere Real Estate

Those are already in production. We're also looking at, and/or POCing, multiple other things, including how we use AI models to answer real estate domain-specific questions for agents and brokers, or even potential home buyers. We're using AI models right now to enhance our leads, so that you get a better chance of matching a lead to an agent properly. And I can go on and on about AI, but there are numerous different use cases, including what I mentioned about vector earlier, right? Using the different vector embedding models so that you can actually enhance the result set.

Michael Gordon
CFO, MongoDB

Terrific.

Brian Hanks
VP Chief Engineer, Anywhere Real Estate

Great.

Michael Gordon
CFO, MongoDB

Well, thank you all for that. Thank you for being customers. Thank you for being here. Thank you for taking the time to share with this audience. I really appreciate it.

Brian Hanks
VP Chief Engineer, Anywhere Real Estate

Thanks.

Yelena Stekel
Head of Public Cloud Data, Citi

Thank you.

Kevin O'Dell
Director of Engineering, Toyota Connected North America

Thank you.

Dev Ittycheria
CEO, MongoDB

All right, next up, I think we have a partner discussion, and so with that, we'll give ourselves a minute to transition, and we'll be right back. Great.

Brian Hanks
VP Chief Engineer, Anywhere Real Estate

Just talk?

Dev Ittycheria
CEO, MongoDB

Yeah.

Brian Hanks
VP Chief Engineer, Anywhere Real Estate

We'll give people a few minutes to,

Dev Ittycheria
CEO, MongoDB

Reset?

Brian Hanks
VP Chief Engineer, Anywhere Real Estate

Yeah. I think I want to take a bio break.

Dev Ittycheria
CEO, MongoDB

It's a long day.

Brian Hanks
VP Chief Engineer, Anywhere Real Estate

Yeah.

Dev Ittycheria
CEO, MongoDB

You have good attendance. I have, like, 100 people connected virtually as well, so you have-

Brian Hanks
VP Chief Engineer, Anywhere Real Estate

Yeah.

Dev Ittycheria
CEO, MongoDB

Very good pool.

Michael Gordon
CFO, MongoDB

Okay. I think there's a few people trickling in, but we might as well get started. So, thanks again for being here. This section is all about partners, and it's my privilege to have Alvaro Celis with us today from Microsoft. I'll let Alvaro explain his role and scope of responsibilities, but obviously, partnering is a key part of our strategy in terms of our business. You heard us announce the MAAP program, and we're very pleased to have Microsoft be part of that program. And I would tell you that-

... as Alvaro told me, he doesn't do this for every partner, so we're very grateful that he came out here. And obviously, as you can imagine, there's a high bar to do these kinds of things at Microsoft, so we're very appreciative of his time. And he did take a red eye, so he's wired with a lot of coffee. But nevertheless, we're grateful to have you here today. Thank you very much. So just to start off, maybe just for the group to understand, could you explain your role and scope of responsibility at Microsoft?

Alvaro Celis
VP Global ISVs, Microsoft

Thank you. By the way, thank you so much for the kind invite and the kind introduction. It's a pleasure to be here, and it's a distinction. I've been with the company for a very long time. I'm gonna disclose that, if you promise not to do any math: it will be 32 years in September. So it's been a lifetime of adventures and work across the company. I started in Latin America, then spent time in Asia as the VP of the region out of Singapore, and the last eight years in global roles. In my current position for the last two and a half years, where I've been partnering with your team, we work on global ISVs.

The mission of the team is to be sure that we look across our commercial solution areas, we look at the long-range plan, we look at where the opportunity is for us to accelerate and enhance that plan by partnering with ISVs, we strategize our coverage and who we need to work with, and we have the opportunity and the distinct pleasure to lead global relationships. That's where I work with your team day in and day out.

Michael Gordon
CFO, MongoDB

Thank you. And before we get into the specifics of the Microsoft-MongoDB relationship: you just touched a little bit on the strategy. Are there any details you can provide on how you think about who to partner with, when you partner, and when you don't?

Alvaro Celis
VP Global ISVs, Microsoft

Yeah, that's a great question. At the heart, we are a platform company, so the philosophy is quite simple: You co-innovate, and you share success. We believe that when you provide partners with great platforms, tools that are world-class, you have the appropriate level of support, and you nurture them in an ecosystem that is in the same approach, you accelerate innovation, you create more customer value. You complement that with a GTM that plays to the strength of that partner, allow them to be more successful, to create more differentiation, of course, and to accelerate AI digital transformation across industries, because that's what people want. Look, at the end, it's around enabling customer choice. It's being sure that if you're a customer, you have the best-of-breed solutions in that platform.

We are very clear that when we have this vibrant ecosystem, we can work with more customers, and we can work with them in even more scenarios, in more use cases. So it's absolutely a win-win, where the partners win, the customers win, and we win.

Michael Gordon
CFO, MongoDB

Clearly, the follow-on question, I think, will be: What's the appeal of partnering with MongoDB?

Alvaro Celis
VP Global ISVs, Microsoft

That makes sense. Okay, when you look at MongoDB, think about the role of the developer, right? And how critical they are in driving all this innovation, and the strong preference that developers all over the world have for MongoDB. So for us, it's about serving our shared customers and being sure that we can make MongoDB Atlas a world-class, a best-in-class experience for the customers in Azure, right? Our shared customers. Then you go and say, "Well, there's a fast pace of acceleration of customers moving into the cloud." So how can we help them make the most out of that? How do we secure that Azure is the best platform for those MongoDB workloads? Or if a customer wants to develop a cloud-native MongoDB application, and they can do that in Azure the best way possible.

In this era of AI, accelerating that migration and having the data in Azure will help customers realize more value, so what's not to love? On that value-creation cycle, you and the team honestly have done incredible work creating value around the database, when you look at search, stream processing, Edge, and all the work you are doing to make data management more robust. So we can use that as part of the innovation that we do together to generate more value for customers that are going into the cloud in this era of AI.

Michael Gordon
CFO, MongoDB

Great. So we spend a lot of time, we get a lot of questions from people here and, and other investors about: How do you specifically work with the cloud partners? And, I know we have lots of common customers, Boots, Temenos, Toyota, et cetera, who are all very co... You know, obviously, customers of both of ours.

Alvaro Celis
VP Global ISVs, Microsoft

Mm-hmm.

Michael Gordon
CFO, MongoDB

Maybe you can be a little bit specific about how we work together on customer engagements?

Alvaro Celis
VP Global ISVs, Microsoft

Yeah.

Michael Gordon
CFO, MongoDB

From your point of view, because obviously-

Alvaro Celis
VP Global ISVs, Microsoft

Yeah.

Michael Gordon
CFO, MongoDB

... we talk about a lot from our point of view.

Alvaro Celis
VP Global ISVs, Microsoft

Yeah, that's great. Well, you know, MongoDB is a global ISV. We have a very close relationship. You are a priority partner, so it's a very deep relationship between both companies, and you know that. So maybe the best way to explain it is to think about three layers: the product, the integrations, and the go-to-market, okay? So when you look at the product, our engineering teams are working together to be sure that that MongoDB Atlas experience on Azure is best in class. Actually, the teams are committed to be sure that our shared customers have that world-class experience.

In the last few months, as you know, we have lit up more than 48 Azure regions in total, where MongoDB Atlas is now available for more Azure customers, and that has been fantastic as an example of the co-engineering work that your team and ours are doing together. Then, when you look at integrations, it's about the work on the data and search capabilities that Mongo has, and you look at Azure data services and how we are creating synergies that allow our shared customers to make the most out of that synergy across our product lines, like Fabric, Power BI, Azure data services, and so on. Then number three, our teams are really working together in the GTM across sales, marketing, and delivery, being sure that we keep being guided by customer choice and customer preference.

One thing that you and I were talking about before coming in here is how much synergy our cultures have found in the customer obsession and wanting to do what is right for customers, and learning that when we work together, when MongoDB Atlas is the right solution for the customer, we unleash incredible value for those customers. And that has become a fuel and an inspiration for the partnership. Sometimes the Mongo team calls Microsoft, sometimes Microsoft calls the Mongo team. We are always respecting customer choice. We have clear communication guidelines that allow us to be sure that we're always on the same page about the customer choice that we're enabling, and that formula is resonating greatly.

Just over the last six to seven months, to give you an idea, MongoDB has been one of our top Azure Marketplace partners, right?

Michael Gordon
CFO, MongoDB

Yeah.

Alvaro Celis
VP Global ISVs, Microsoft

So that's a true testament of customer preference.

Michael Gordon
CFO, MongoDB

Yes. That's a good segue to our next question. I would say it would not have been likely for you to be on this stage two years ago. Like, our relationship was a little bit more fragile.

Alvaro Celis
VP Global ISVs, Microsoft

It was.

Michael Gordon
CFO, MongoDB

Things have changed a lot. How would you describe the current state of our relationship?

Alvaro Celis
VP Global ISVs, Microsoft

Well, I think it's a long-standing partnership, as you mentioned, but it's a partnership that we took to the next level last year, when we signed the new agreement and reimagined the relationship, right? And we just talked about the progress. That has been fantastic. The customer experience today on MongoDB Atlas on Azure is world-class, and it's getting better and better, and we are delighted to hear that. Our teams are really being guided by that customer choice, which is allowing us to work in service of the customer in a very mature way, a completely different level of impact than the one we had before. We are actively working on that with a curiosity and passion for the customer.

Now that we have landed this, we ask: What else can we do? And that is unlocking more value, allowing us to learn where we can work together to generate more impact for customers. And as the technology evolves, we're also seeing new opportunities come along to bring synergies across the Mongo stack and the Microsoft stack in service of those customers, right? So it's a continuum, and we see the flywheel turning in a very, very nice way. We have made way more than a year of progress in the last 12 months. Thank you for that, by the way.

Michael Gordon
CFO, MongoDB

Thank you. Another question that comes up often: when we launched Atlas, there was some investor skepticism about whether we could both partner and compete with the hyperscalers, and not just with you, but with all the hyperscalers. And you obviously have a broad portfolio of first-party services. How do Microsoft and the Azure team think about partnering versus competing when it comes to working with partners?

Alvaro Celis
VP Global ISVs, Microsoft

Yeah. So that's a very good question, and it goes back to my opening. If you're a platform company, that's a core part of what you need to embrace and make happen, because it's about customer choice. You want customers to find the depth and the breadth of options in your platform that will suit the needs that they have, and those needs are very varied, very broad. Your partners extend, complement, specialize, and create incremental value on your platform. When you present both options, you are just making the customer more successful and your partner more successful. Well, this is not new, as you know.

I mean, we have that on many, many fronts, and I think that we have the maturity and the practices that allow us to have very clear rules of engagement, very constructive discussions on how you make the platform better and how we make your offering better in service of those customers. And on the execution, on the GTM side, as I mentioned, our teams have matured a ton on those rules of engagement and communication, always being guided by customer choice, which ends up being the best North Star for any relationship in the industry.

Michael Gordon
CFO, MongoDB

Yeah, and we have, as you mentioned, we have integrations on your first-party console. We have co-seller arrangements. We're in your startup program. There's a whole bunch of things that we do together.

Alvaro Celis
VP Global ISVs, Microsoft

Exactly.

Michael Gordon
CFO, MongoDB

Obviously, you know, you can't go to a tech conference without talking about AI, and so,

Alvaro Celis
VP Global ISVs, Microsoft

It took that long.

Michael Gordon
CFO, MongoDB

Microsoft has obviously, you know, been prescient about making some very strategic investments in OpenAI and leading the industry in AI. How are MongoDB and Microsoft working together on AI?

Alvaro Celis
VP Global ISVs, Microsoft

Okay, no, that, that's a very, very fair question. Can I take two minutes to give you - to give the audience-

Michael Gordon
CFO, MongoDB

Of course.

Alvaro Celis
VP Global ISVs, Microsoft

Context on what are we doing?

Michael Gordon
CFO, MongoDB

Please.

Alvaro Celis
VP Global ISVs, Microsoft

So when you think about it, we all know that AI is the most consequential technology of our times. So how do we democratize the impact that it can bring, so you can empower more people, more companies, so they can be more productive and go and tackle the biggest opportunities and challenges of the generation, right? That's kind of where the heart is, and it's tied to our mission as a company. Now, our approach, when you look at, again, being a platform company, is depth and breadth of offerings in AI. So it's a combination of our own research, partnerships, and investments that allow us to have a whole breadth across the AI stack, to offer customers solutions that might fit multiple scenarios, different needs, different locations, depending on the strategy, depending on the solution.

Now, what is very clear, Dev, and that's where we start shaping our partnership, is that data is the power that fuels AI. If you're a customer trying to make the most out of AI capabilities, you need a world-class data estate in the cloud, well-managed, well-supported, well-maintained, and that's where our opportunity to partner with companies like MongoDB, in service of the customer having that data estate in the cloud, is incredible. We partner at multiple levels when you look at our approach to partnering in AI. Sometimes we partner at scale, trying to support, with our copilots, thousands of people, to be able to go and achieve potential and productivity through those solutions. In some other cases, we create highly sophisticated, specialized, bespoke solutions that are AI-powered.

That is what the business needs, right? And in that continuum, we have multiple levels of partnership. Our aspiration is to be sure that Azure is the best place for your data as a customer, offering the broadest range of options for that data: databases relational and non-relational, open source, scripting, so whatever you have in there, and our partner offerings, as we were discussing, right? So that's where MongoDB Atlas becomes part of that offering. I think the ultimate layer where we are working and getting a lot of traction lately is how we partner to help customers accelerate solving those problems and shorten time to value.

I mean, when our customers are starting to work on generative AI, there's a lot of enthusiasm and aspiration about what you can do, but in reality, there's friction, there's complexity, and sometimes you don't even have all the talent. So what we can do together to make it simpler for them, to have integration, synergies in our stack, is gonna make a huge difference. An example is the work that we have done with the Semantic Kernel and your MongoDB Atlas Vector Search, and how those two things combined will help customers who are on MongoDB Atlas streamline data management and have semantic queries that fuel input to their specialized AI applications, right? So that's the type of partnership that we have today, and we keep exploring more opportunities moving forward.
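
[Editor's note: to make the Vector Search piece of that integration concrete, here is a minimal, hypothetical sketch of the Atlas `$vectorSearch` aggregation stage an application might issue once an embedding model, for instance one wired up through Semantic Kernel's embedding connectors, has turned the user's question into a vector. The index name, field path, and sizes are illustrative placeholders, not values from the talk.]

```python
def build_vector_search_stage(query_vector, index_name="vector_index",
                              path="embedding", num_candidates=200, limit=10):
    """Build a MongoDB Atlas `$vectorSearch` aggregation stage.

    The stage shape follows the documented Atlas Vector Search fields;
    the index name, field path, and sizes here are illustrative defaults.
    """
    return {
        "$vectorSearch": {
            "index": index_name,              # name of the Atlas vector index
            "path": path,                     # document field holding embeddings
            "queryVector": query_vector,      # embedding of the user's question
            "numCandidates": num_candidates,  # ANN candidates to consider
            "limit": limit,                   # results passed downstream
        }
    }

# The query vector would normally come from an embedding model; the stage
# would be the first element of a pipeline passed to collection.aggregate()
# on a pymongo collection backed by an Atlas cluster with a vector index.
stage = build_vector_search_stage([0.1, 0.2, 0.3])
```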

Michael Gordon
CFO, MongoDB

Got it. And the last question before we wrap: again, with all the safe harbor caveats, how do you see this relationship going forward?

Alvaro Celis
VP Global ISVs, Microsoft

Well, you know, I think that the relationship will keep growing in strength. I think that we are now in that positive spiral where the success that we're having, the customer momentum, is fueling confidence in each other, confidence in what we're doing, and giving us permission to be more curious about more options to work on. For example, on the migration path, we are now taking an approach that will really streamline and accelerate customer migrations, with a migration-factory approach that we're building together-

Michael Gordon
CFO, MongoDB

Yeah.

Alvaro Celis
VP Global ISVs, Microsoft

So we can help customers get that time to value, and time to migration, at a completely different level, with quality and certainty, right? I think on the selling side, customers keep teaching us what the joint value is, so we keep listening to them. That opportunity is gonna be there for us to find new scenarios where, when we work together, we create more value for them. And as I was mentioning, the whole synergy of our technologies and that passion to innovate together in service of the customers is one of the areas where you will see more and more progress. And I know that there's good news coming down the road that you will share with the audience when the time is right.

Michael Gordon
CFO, MongoDB

Yes, very excited. Well, Alvaro, thank you very much for being here. We're very grateful for the partnership, and we look forward to doing great things together.

Alvaro Celis
VP Global ISVs, Microsoft

Thank you. It's an honor to be here.

Michael Gordon
CFO, MongoDB

Thank you.

Alvaro Celis
VP Global ISVs, Microsoft

Thank you, everyone.

Michael Gordon
CFO, MongoDB

Thank you.

Michael Gordon
CFO, MongoDB

All righty. Thank you for that. We're gonna run through... Whoops! The last agenda item, which is really a business update, before we get to Q&A, where Sahir and Dev will join me, and we'll take any questions that you all have. So just on the business side, there are three things I'm gonna cover. I'm gonna provide a bit of a market and product-level update. We'll talk about this continuing theme of the benefits of becoming a standard, and I'll give you a little bit of a financial slice of what that looks like. And then lastly, I'll conclude with the financial summary overall.

So with that, as I mentioned at the very beginning, you know, this is a very large market, $94 billion growing to $153 billion, but we're still in the early innings in the enterprise. You heard one of the panelists earlier today reference over 500 million downloads. Clearly, plenty of developer adoption, but we're still working on penetrating enterprises. So we have about three-quarters of the Fortune 100, about half of the Fortune 500, and about 30% of the Global 2000. So we still have a lot of landing to do in that land-and-expand model. But even within these accounts, we also have a fair amount of expanding to do within the ones we've landed. So we have relatively small market shares.

This is based off the IDC data, and we had them cut it a couple of different ways. So the left-hand side shows you the 2.4% estimated market share that we have among Fortune 100 customers, and a roughly similar 0.1% among Fortune 500 customers. So you know, the customer base is broadly diversified. We talked about how we have to win workload by workload. We talked about the benefits of becoming a standard, so you can increase that market share. But what this shows is we still have a long runway for growth, not only in winning and landing new accounts, as I mentioned on the previous slide, but also in expanding within our existing accounts.

So that's really what sets up this market opportunity for us that we've talked about. So I'll spend a minute talking about products. So let's dig in. Atlas is obviously the fundamental driver of growth. And given how large the market is and how fast it's growing, we've had a lot of success, but we only have about a 2.6% share of the cloud database market. So again, just to give you a relative sense, there's still a huge amount of opportunity here for us to go after. I wanna talk about Atlas, and Atlas consumption in particular.

For those of you who pay attention and were here last year or watched, this slide was a very well-received and helpful view of the world, so we thought we'd revisit it and give you an update on how all these things stand. I can see by the cameras and iPhones all going up that that is true again. And so, what we have here is now a 12-quarter view, right? And this shows week-over-week average consumption. So when we talk about trends, this is what we're thinking about, this is what we're looking at.

What you can see is that those first five quarters were in the pre-macro view of the world, and the last seven quarters are more macro-affected and in the slower growth range. Again, what we're talking about here is still growth. And so what you can see is there is seasonal variability, but in general, these last seven quarters, when we talk about stability, this is what we mean. Now, we've told you before, we don't have a ton of data points when it comes to seasonality, but we're calling out the patterns that we've seen. We're obviously still learning about seasonality, but in general, you can see that Q1 and Q3 tend to be stronger than Q2 and Q4.

We also talked about, last year, that we were seeing more stability, and that stability was translating into less variability. So here, if you look at the bars and focus on Q3, you can see that Q3 this year wasn't as strong as Q3 last year. Conversely, Q4 this year was stronger than Q4 of last year, and so that's the reduced variability of outcomes that we've talked about. So that's a quick update on Atlas consumption; we thought it would be helpful to revisit it, given the interest, attention, and value that you all found in it previously.

Speaker 23

Can you give us the y-axis, the actual figures?

Michael Gordon
CFO, MongoDB

We could save that for the Q&A, but the answer is gonna be no. So we thought it would be helpful to spend a little bit more time on this, though, and give a deeper view to explain what stable consumption looks like at the workload level, right? Part of what we've talked about is this winning workload by workload, so let's put that in context, because ultimately, I think it's most helpful to think about consumption as the aggregate of all the underlying workloads, right? So, I'm now gonna present something that is highly illustrative, ignores seasonality, and is not to scale.

So this is not the slide to take out your protractor and ruler to assess what's going on, but hopefully, this will help you all understand what we're talking about when we're describing the business. So let's dig in. Consumption growth obviously begins with the existing workloads that we already have on the platform, and those workloads grow over time. But naturally, as they age, their growth rate slows. And so what you'd see is that if we didn't add any new workloads, we wouldn't have stable consumption; our growth rate would naturally slow over time as those workloads mature. And so if you... Sorry, just have to click ahead here. And so as we acquire new workloads, they add to the growth.

And as we've said before, those new workloads grow more quickly initially. So what you can see here is if we add in the green, you'll see the impact of new workloads that we add in the first four quarters, right? So this is just an eight-quarter view. And what you can see is that as you add in those first workloads, they are accretive to growth, right? They're adding to the growth rate. And then subsequently, the blue lines, which I'll fill in here, are the impact of new workloads added in quarters five through eight. And so in aggregate, again, ignoring seasonality, this is illustrative, but what you can see is that the stable growth is the result of a mix of workloads: existing workloads, new workloads, and everything else.
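
[Editor's note: the workload dynamic described above can be sketched in a few lines of code. This is a toy cohort model with made-up parameters (base consumption, initial growth rate, decay), purely to illustrate the shape of the argument rather than any actual MongoDB figures: a maturing workload's growth decays on its own, while layering in a new fast-growing cohort each quarter holds the aggregate growth rate up.]

```python
def workload_level(age, base=100.0, g0=0.15, decay=0.8):
    """Consumption of a single workload after `age` quarters: it grows
    quickly at first, and the growth rate decays as the workload matures.
    All parameters are invented for illustration."""
    level = base
    for a in range(age):
        level *= 1 + g0 * decay**a
    return level

def total_consumption(quarter):
    """Aggregate consumption when one new workload cohort lands each
    quarter: the sum of every cohort's level at its current age."""
    return sum(workload_level(quarter - birth) for birth in range(quarter + 1))

def growth_rates(levels):
    """Quarter-over-quarter growth rates of a consumption series."""
    return [b / a - 1 for a, b in zip(levels, levels[1:])]

old_only = [workload_level(q) for q in range(8)]    # no new workloads added
blended = [total_consumption(q) for q in range(8)]  # new cohorts layered in

# Existing workloads alone decelerate every quarter; adding new,
# faster-growing cohorts keeps the blended growth rate much higher.
```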

And so, if you take that theoretical construct and apply it practically, what that meant in fiscal 2024 was that the workloads we acquired during the year, in addition to those we acquired the year before, just to keep this eight-quarter view consistent, contributed to the stable growth that we saw in fiscal 2024 relative to fiscal 2023. So hopefully that helps paint a little bit more understanding around Atlas consumption and ties it into this workload-level view that we've been walking you through. Switching over to EA. As we've said before, EA continues to succeed and has outperformed our expectations. Part of that is the success of the run-anywhere strategy. Part of that is also the fact that it's a huge market.

And even though we've had great success and have offered EA longer, we actually have only an estimated 1.1% of the on-prem market. And so I think that helps further explain the success and durability that we've seen within EA. Many of the EA customers are also still quite early in their cloud journeys. So this takes a look at the EA ARR. About 83% of the EA ARR is from customers who are 80%-plus EA. So again, we're starting to see people adopt more cloud workloads. I expect we'll continue to see that trend and that evolution, but many customers are still quite early in their cloud journeys. Lastly, just for completeness, we'll talk about the "other" category, since we've called this out before.

It's particularly important as it relates to our guidance and some of the numbers we've previously shared with you. Q4 fiscal 2024 was an unusually strong quarter for this non-EA, non-Atlas portion, including as a result of multi-year deals. We mentioned Alibaba and others. We referred to this in our guidance, but hopefully, this visually helps people understand why that creates a headwind in fiscal 2025. So, that's a bit on market and products. I'll spend a minute talking about the benefits of becoming a standard. And it's important to understand, this is really not about the size of their current level of spend. It's really more about the size of the future opportunity.

And so typically, even a customer who is spending a lot, who might be a digital native, wouldn't necessarily be a great candidate for standardization. And so, we talked about standardization earlier; I wanna walk through the benefits of what we call a strategic account. We've talked to you about this program, where these accounts were already a standard or close to being a standard, but where there's a lot of opportunity left. So again, back to the digital-native concept: if they've picked MongoDB, if they've built their whole business on MongoDB, or their core application on MongoDB, we're gonna benefit from their growth. We don't need to think of them as a strategic account where we're trying to incrementally drive our wallet share from kinda low single digits into something much higher.

So here's a snapshot of the strategic accounts program. The distribution mix of ARR by product is on the left-hand side. A fair amount is still in EA, which isn't surprising. These tend to be the largest customers, customers who have huge IT spends, where there's still room to run. The middle bucket shows industry, slightly skewed relative to the bulk of the business. You know, we have a very diversified, broad business, but within this strategic accounts program, you have disproportionate representation from financial services, in part just because those have been customers who've been early on the platform. There's just a lot of running room within those accounts. Obviously, there's some overlap and interplay between the EA side and the financial services side, as you might expect.

And then lastly, it's more skewed to the Americas, again, just because that's where some of the older, longer relationships are. One of the reasons that the strategic accounts program has been successful is that we've been quite disciplined about it, and we've learned more over time as we've continued to iterate. So these are some examples of the conditions that need to be present in an account. We need to have adoption from multiple developer teams. We need strong business champions, not just technical champions. We need to make sure that we're demonstrating momentum in the account, and that there's a clear pipeline of opportunities for us, so that the account is ripe for the incremental investment. What does that incremental investment look like?

You take the rep, and they're solely focused on the account. They also take a higher quota, in order to help us recoup and generate a good return on the incremental investments we're making. They get additional investments from pre-sales, from sales development reps (SDRs), from customer success, field marketing, developer relations, things like that. And then there are additional resources available to them on a case-by-case basis, where, as we invest more, their quota goes up even further. So it's not really just about the incremental investment, though; it's about finding the right time, when the account is ready. The result is that these extra resources cost more money, right? Not surprising.

If you think about it as a selling unit, as a standard unit, they cost about twice as much as the average direct selling unit, because we're putting these incremental investments into them. But what we see is about 7x more new workload ARR being generated in a given year from these accounts. So some of you might say, "Well, why don't you just do more of these?" It's not just the investment. It's not that you suddenly call an account a strategic account and you get 7x out for putting 2x in. It's about the discipline, it's about the timing, it's about being ripe and ready for that investment.
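
[Editor's note: as a quick back-of-the-envelope check on that 2x-in / 7x-out framing, here is a tiny sketch. The normalized units are my own illustration, not figures from the deck.]

```python
# Normalize the average direct selling unit to 1 unit of cost and
# 1 unit of new workload ARR per year (illustrative units only).
standard_cost, standard_new_arr = 1.0, 1.0

# Per the talk: a strategic-account unit costs about 2x a standard unit
# and generates about 7x the new workload ARR in a given year.
strategic_cost = 2 * standard_cost
strategic_new_arr = 7 * standard_new_arr

# New-ARR-per-dollar-of-selling-cost, relative to a standard unit:
relative_efficiency = (strategic_new_arr / strategic_cost) / (
    standard_new_arr / standard_cost
)
# 3.5x the ARR yield per dollar of sales investment in this sketch.
```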

That's a lot of what you've heard about throughout the whole day, and really throughout the quarters as we talk about what we're doing on the product side and on the go-to-market side to get more and more accounts into a position to move to this status, 'cause we'd love to do that. So the result of all this is that even though these accounts are meaningfully larger, they're actually growing much faster than our other accounts. And so that's really been a terrific addition to the results. So that's just a little bit of a view through the financial lens of what becoming a standard means and why it's something that we continue to focus on.

Lastly, in terms of the financial summary, we've gone through significant growth as a company, capitalizing on our market opportunity. We've also made material progress in terms of our operating margins on a non-GAAP basis. The balance of our margin progress relative to our market share gains, as we've talked about with many of you, has been a little bit lopsided, and that's what explains going from the 16% last year to 10% at the midpoint of our estimates for the current fiscal year. But that still represents about 500 basis points of progress over the two-year period. I'll spend a minute dissecting and disaggregating what the margin movement looks like from the 16 to the 10, just so people understand.

There are really two key drivers. The first is the impact of the $80 million of one-time, high-margin revenue that we've talked about. That's from the excess multi-years that we had last year, as well as the unused credits and commitments. And then the second piece is from hiring. Our hiring last year was extremely back-end weighted, and so this year, we're hitting the annualization of those hires, and that's worth about another 200 basis points, which leads to the 10%. Just quickly on the hiring, to give you a little bit more of the detail there. Last year, we slowed hiring growth from kinda 30-plus percent to 9%....

But of that 9%, in the first half of the year, it was only 1%, right? So you can see that the growth was abnormally back-end loaded. And so that's what creates this 200-basis-point headwind that we talked about previously. Part of the reason that it was back-end loaded, and we talked about this a couple of calls ago, was that fiscal 2024 was more stable than we had expected, which was great, and obviously, that benefited our results. But we weren't sure, and at the beginning of the year, we contemplated a range of outcomes, some of which were bleaker than others. And so we probably waited a little bit too long to add heads.

I think if we'd known that the year was gonna be as stable as it was, we would have been more aggressive in adding heads, and so that's part of what results in the back-end nature of the headcount. We are still investing both in sales and marketing, given the limited footprint that we have, as well as in R&D, and you heard a number of announcements, and we'll continue to do that. And so I just want to reiterate our long-term targets of 70%+ on the gross margin and 20%+ on the non-GAAP operating margin.

Obviously, you know, at 16% last year, and even at the 10% projected for this year, we've made a huge amount of progress since our IPO, where we were sort of in the negative mid-30s. So we've made, you know, 45 or 50 of the 55 points of progress that we needed to make, but we still have only about 2% market share. So again, we're trying to make sure that the balance is responsible for long-term investors and capitalizes on the opportunity in the long run. I'll spend a minute just on the history of free cash flow here. So despite a reduced emphasis on upfront commitments, last year was the first year that we've generated significant free cash flow.

As we've said before, we've been focusing on reducing friction in the sales process. That obviously has cash flow implications. So historically, we've been in the range of about 100%-110% of collections as a percent of revenue. You can kinda see that over the prior four years. It's a little bit volatile and depends on the timing of payments, but generally, that's the range that we're in. In fiscal 2024, we further reduced the emphasis and the incentives for upfront commitments, and what you can see is that collections as a percent of revenue declined to 92%. We'll continue to see that in general in fiscal 2025, and that's reflected in our internal view.

The result of that is that we have less upfront cash, and correspondingly, you know, that affects the free cash flow dynamic. What this tries to do is look at free cash flow less operating income, as a percent of revenue, to try and put that cash flow dynamic, that cash cycle, into context. We found it not as helpful to look at this on an absolute basis, just given the growth of the business, so this tries to put it on a percent basis. So you can see it was sort of a source, if you will, early on. But over these last couple of years, as we've been making this multi-year journey to reduce friction and everything else, that has implications when you're thinking about cash flow and things like that.

Lastly, as executives and shareholders ourselves, we do care about dilution, and we're very sensitive to equity dilution. We primarily think about dilution on a net share basis. So as you can see, we've been focusing on reducing dilution. Obviously, there were a couple of financing events: the convert in fiscal 2020 and the secondary and primary offering in fiscal 2022. Excluding those, over the last couple of years, we've been bringing dilution down from just about 6% in fiscal 2020 to about 2.5% this past year. So this is something that we pay close attention to. Our board does as well.

From all the data that we have from our comp consultants and everything else, we feel like we're very much in line with the benchmarks, but it's something we continue to monitor closely and pay attention to. So with that, I will invite Dev and Sahir up here, and we will take any questions. Is the question mic running, Brian? Thank you. Awesome. Thank you.

Speaker 22

All right, I'll go first. Hey, Dev. Hey, everyone. Thanks for taking the question. So I guess the first question I wanted to ask you, Dev, is, obviously there's a lot of focus on GenAI. We heard from some partners and customers that this is kind of a new category of data, a new category of apps. I guess, as we think about the history of MongoDB, when the non-relational or NoSQL market kinda unfolded, Mongo wasn't actually the number one player at the get-go. You had companies like MarkLogic or Couchbase or whatever. Obviously, you've evolved and become that biggest player. But I guess, how do you kinda see the share dynamics contrasting in the GenAI world relative to the NoSQL wave?

In other words, are you winning kind of those high-value use cases and workloads today, or is this gonna be something that kinda needs the industry to mature and these apps to go more mainstream for you to win the lion's share?

Dev Ittycheria
CEO, MongoDB

Yeah, I'll start, and welcome Sahir and Michael to add any comments. What I said in the last earnings call, and I kind of implied it today, is that I think we're still in the very early innings of AI. I think the bulk of the investments are being made more in the infrastructure layer. When we talk to customers, you know, in the past year they've been doing a lot of experiments, but there's-

... not that many applications in production. So first point is that it's still early. What we've been working on is to make sure that we're well-positioned to be the platform of choice for people to build applications on, and we're doing that on a couple of dimensions. One, relying on our strengths in being able to handle multiple data structures. I talked a little bit in the keynote about how Vector Search supports different kinds of data structures, which is really critical in AI when you're trying to process voice, video, images, et cetera. Two, being very open and flexible, because obviously the space is evolving quickly. We see customers who may experiment, you know, with one use case, with one set of technologies, and then maybe the second experiment is using something else.

So we're LLM independent, cloud independent, and we work with a bunch of, you know, other application frameworks. So we're really trying to make sure customers have choice and make it very easy. And the third one, and that's part of our developer data platform strategy, is that we're trying to address the broader set of use cases, because we've heard clearly from customers that they can't really manage 17 different layers of databases or platforms. And so that's essentially our strategy. We feel pretty good about our position, just by the nature of the number of early-stage companies who are building AI apps already on top of MongoDB. Not that we take that for granted.

It's early, you know, and it's not yet clear which of those companies will start to break out and become large companies. I think that's gonna take time, like we saw in the cloud, where companies like Coinbase started early and became big companies. Getir, you know, started early and became a relatively big company. So I think the shakeout, you know, it's unclear what models, what use cases and all that will really take off, but the fact that so many are building on MongoDB makes us feel good. And I think, you know, as we talked about earlier today, we've seen vector search become very popular.

We've seen, you know, a bunch of people who want to... I mean, at the risk of sounding like I have a little hubris, there are not many companies, you know, who have AWS, Google, and Microsoft all partnering with them, right? And that speaks to how popular MongoDB is on all their clouds. And so from that point of view, I think we're well positioned, but it's definitely early. I don't know if you want to add anything.

Sahir Azam
Chief Product Officer, MongoDB

I think the only other piece I'll add is, you know, one of the things we are encouraged by, and actually, you know, we mentioned this at the AI event, I don't know, three weeks ago or whatever, in London. One of the Chief Product Officers at one of the big model companies said something that kind of stuck with me, and that we're certainly seeing: this moves AI out of the, you know, corner office of the team working in data science, to now becoming something that's mainstream for every development team to have to figure out.

And so in many ways, yes, AI, generally as a category, has been around for a long time, machine learning approaches, et cetera, but it's always been sort of, you know, very niche and focused on a part of the organization that's kind of around understanding insights, as opposed to really driving customer experience or driving true business process efficiency. And that's bringing AI now right to where we think our wheelhouse is. And that's really what's driving a lot of these innovative startups and partners to work with us, because they see us as having that developer love.

Sanjit Singh
Equity Research Analyst, Morgan Stanley

Sanjit Singh, Morgan Stanley. To pick up on Sahir's point, and your point, Dev, around this sort of era right now of experimentation, do you wonder if that's actually an inhibitor to your growth? Because we're coming out of a multi-year, or multi-quarter, downturn in sort of tech spend, and had there not been this GenAI wave or vector, we'd probably have been modernizing applications, building net new applications. And do you think there's a sense that, as we focus on these POCs and the evals, that's sort of taking energy away from developers, and that would in some way inhibit, in the near term, MongoDB's growth? Or, you know, I'm just wanting to try to get a sense of the dynamic of the cycle that we're in.

Dev Ittycheria
CEO, MongoDB

To make sure I understand your question, if I can paraphrase what you said, just to... So there's a finite amount of development capacity in any organization, and you're saying if some of that development capacity is being siphoned off to do these AI experiments, does that mean that there's less, quote unquote, "mainstream apps" being built today? I think that's a very reasonable thing to assume. Obviously, it depends on the customer, but I think this is such a long-term play. This is such a massive opportunity. We're quite excited about the long-term opportunity here. And I think all the building blocks are coming together, and I think we are seeing customers...

At one level, I see customers overwhelmed by the rate and pace of change, but I also see them fearful, because they're always paranoid about one of their competitors, you know, using AI as a competitive advantage to disrupt them. And so there's this tension: I wanna be thoughtful about what I do, but I also wanna get going. And so we're kind of seeing that with our customers.

Sanjit Singh
Equity Research Analyst, Morgan Stanley

If I could just follow up, I think the message that I got from all the content today is that Mongo is gonna go prosecute the opportunity. You're not just gonna let the opportunity come to you, you guys are gonna be actively involved. There's a point that you made on, like, pushing reference architectures. And we saw, like, in the last decade of cloud, we experimented with OpenStack, OpenShift, right, Swarm, Docker, and then we ultimately landed on microservices, Kubernetes, that stack. How influential do you think you guys can be in terms of helping customers land on a reference architecture that can create time to value? And any sense of, like, is that gonna take two years, three years, five years? Any perspective?

Dev Ittycheria
CEO, MongoDB

Yeah, I wish I could give you a clear forecast on how long it's gonna take. The one thing that has actually surprised us, but it makes sense, is that customers do view us as a credible thought partner. One, we know how to build modern apps. Two, we're not talking up our own book, because it's not like we have our own LLM or, you know, some partiality to any one cloud. And, oh, by the way, we run on premise, and that's something that we're starting to hear more and more from customers, who are saying, "I like the notion of the fact that I can run these AI apps on premise." Right? Not everything is gonna go to the cloud.

And so that, you know, gives us, I think, a reasonable voice when talking to customers about what to do, where to go. I mean, at this event that Sahir and I were at in London, the quality of customers who came was spectacular, not just from the U.K., but all over Europe, and they generally wanna learn and listen, and they also wanna learn from their peers. And there was, in some ways, palpable relief in the room, because everyone felt like they were behind, but then they realized everyone was kind of in the same boat. They're all in this experimental phase, and they're always worried that someone else has got it figured out and is just going hog wild rolling out a set of, you know, apps. And I think, I think...

So, the short answer to your question is, I think we can be quite a valuable thought partner to our customers.

Ryan Muldoon
Equity Research Analyst, Barclays

Hey, Ryan Muldoon from Barclays. Can you talk a little bit about the migration opportunity? Conceptually, I can see how you can make it a lot easier, but the question is, like, how easy can it get? If you look at GenAI now, with things like, "Oh, my God, it's gonna do it all for you," it's probably not gonna be that easy. But how far do you think you can push it, and how easy do you think you can make it for someone to migrate? Because there's obviously a clear kind of platform migration opportunity out there. Thank you.

Sahir Azam
Chief Product Officer, MongoDB

Mm-hmm. Yeah, I think it's going to be a journey, you know. And that's the mindset and philosophy we've taken on it. Like, we were clearly enamored by the idea that these tools and technologies can be used to expand from kind of the data migration layer, which is where we were focused with Relational Migrator originally, to really now apply to the hardest part, which is the application itself and all that kind of legacy code. So I think we clearly had an instinct and interest, and from all the demos we've been seeing about how, you know, this makes developers more efficient, there's a clear kind of link there.

I'll say, you know, just personally, I was more on the skeptical side at the beginning of the journey, and, you know, people like Paul, who you heard from earlier, and I kind of started a program and said, "Let's go try this out." Because if it was impactful, even a 10%, 20%, 30% improvement widens the set of applications for which the business case makes sense, and is significant even on a per-account basis. And Dev really pushed, saying, "Listen, we gotta try something really researchy. This is gonna be, you know, really impactful if it works, but let's go learn." And so that was kind of the mindset.

You know, clearly, as you heard from Paul, there's a fair amount of iteration and manual effort in each one of those pieces of the process, so it's not like autopilot, by any definition. It's absolutely an assistant and an iterative process. But I would say the results are better than, you know, than we expected. The more skeptical folks in the room, like myself, are actually now more like, "Let's go even, you know, further on this." And in the time period over the last few months that we've been working on these pilot projects around app modernization, the model quality race continues as well.

So if you project out two, three years, with not just the individual model quality, but then these agentic workflows that may be able to automate more of the process that right now is manual, we do think the mix between manual effort and automation will only continue to improve, but we do see it being a journey. You know, the idea of, like, a push-button migration of some, you know, complex, live, mission-critical application, I don't think making that completely automated is gonna happen anytime soon.

Dev Ittycheria
CEO, MongoDB

Who's got the mic?

Sahir Azam
Chief Product Officer, MongoDB

Oh, sorry.

Dev Ittycheria
CEO, MongoDB

Brad.

Brad Reback
Managing Director, Stifel

Brad Reback, Stifel. How are you guys doing?

Sahir Azam
Chief Product Officer, MongoDB

Good.

Brad Reback
Managing Director, Stifel

Dev, you had mentioned earlier today, when you were talking to the Microsoft ISV rep, that the relationship was fragile a couple of years ago. It's doing better now. I get the sense it's still lagging maybe where AWS and GCP are. So if you think about the opportunity for that to be a growth driver over the next few years, could you size it up for us?

Dev Ittycheria
CEO, MongoDB

I actually think it's better than you described. The relationship tenor has really changed. I think what I would say is, GCP was always easy because they didn't have competitive products. With Amazon, it was easy to orient around customers. They're very customer-obsessed. I think with Microsoft, they were kind of feeling their way through. What I think got them over the hump was, one, they saw how popular we were, both in their cloud and in the industry overall, and two, they saw the consumption that we were driving in their cloud, and they realized it's not a zero-sum game. It's actually, you know, a win-win for both parties.

We've had some big wins with them, and a big part of that is something we focus on a lot. A lot of people think it's all about product and product integration, but a big part of this is all about the go-to-market angle. Like, you know, Scott Guthrie and I can align, have a handshake, great, but if the salespeople are not incentivized, and they can't see how it's gonna affect their pocketbook, they're not gonna wanna really work together. So I saw Alan here, and I guess he may have left.

Sahir Azam
Chief Product Officer, MongoDB

He had to step out for a minute.

Dev Ittycheria
CEO, MongoDB

Yeah, Alan, who heads our partnerships, he and his team do make a lot of investments in making sure that the go-to-market teams are highly incentivized to work together. You'll also remember, we're a member of the first-party console. We're a member of the startup program. We're in the co-sell program. So we're trying to remove a lot of the friction, you know, around working together. If you can chip away at that friction, it just becomes that much easier to work together.

Sahir Azam
Chief Product Officer, MongoDB

... Just one additional kind of anecdote. I was talking to Alan yesterday, actually, and one of the things that's interesting is, you know, prior to this big boom around everyone's focus on AI, a lot of startup developers, you know, were focused on Google or AWS. I know Microsoft certainly has its own ecosystem, though smaller. Now, in the startup program performance, we're actually seeing a shift, where for more of the smaller, earlier-stage companies, the Azure startup program is driving higher volume on Azure than we've seen in years prior working with them.

Rishi Jaluria
Managing Director, Software Equity Research, RBC Capital Markets

Great. Thanks. Rishi Jaluria, RBC, maybe just a two-parter on GenAI. First, when we think about tools like, you know, GitHub Copilot, Tabnine, right, they're driving 40%-50% greater productivity. In fact, one of your partners said 10x to 100x. Has that resulted in, you know, any sort of increase in the rate and velocity of applications being built on Mongo? Have you seen any of that? How do you think about the longer-term opportunity? And maybe just alongside that, how do you think about the opportunity to use a lot of these GenAI technologies to create kind of a MongoDB Copilot, right? Especially given there's probably not nearly enough MongoDB developers out there for the level of demand. That'd be really helpful. Thank you.

Dev Ittycheria
CEO, MongoDB

Do you wanna take that one?

Sahir Azam
Chief Product Officer, MongoDB

Yeah, absolutely. So, I do think over the long term, the volume of software that will be developed in the GenAI world will go up. Just naturally, it's easier for developers to be more efficient and to create applications, and you're already seeing some simple app-type frameworks automatically generate kind of components of applications. That's only gonna continue. So, you know, over the long term, I definitely think that'll be a trend, but I think it's too early to be able to measure that impact in terms of volume of apps specifically. But I think it's a pretty obvious direction over the span of years. I think in terms of Copilot, this is an interesting one. We did look really at, like, do we have to go train or build our own model for MongoDB?

What we found is actually, no, there's a lot of great examples and documentation, all that, publicly available, not just from us. You know, you look at all the content that's been created: go to YouTube and search "learn Mongo," and there's a whole bunch of third-party stuff. There's folks that run Mongo courses in Brazil, and they get, you know, tens of thousands of people signing up digitally. So we already saw that with these models, you know, from Microsoft or AWS or GCP, et cetera, the baseline of quality was quite good.

But then we said, "Okay, how do we actually make sure that we can raise the bar, given they're all partners of ours?" And so we actually worked with those providers directly, gave them some of our more proprietary information and best practices, our own code samples, to make all those models more effective. And that's something we're gonna continue doing, not just with them, but with any of the other assistants. And in reverse, obviously, we've implemented some of those public LLMs into our own developer experience, so it's a nice kind of circular relationship, and we've seen really good results so far, without finding the need to create our own LLM, versus kind of using this fine-tuning approach.

Dev Ittycheria
CEO, MongoDB

By the way, on the stats on the developer productivity improvements, it's all over the map-

Sahir Azam
Chief Product Officer, MongoDB

Yeah, it's

Dev Ittycheria
CEO, MongoDB

Depending on who you talk to. I was talking to the CIO of one of the largest banks in the world, and he said that, you know, they see about a 20%-25% improvement in developer productivity, but developers are only coding for half their time, right? So that means it's about 10%-12.5% in terms of real productivity. While that's, you know, nothing to sneeze at, I think what you hear from customers is literally all over the map in terms of what these code gen tools can do.

Brent Bracelin
Head of Technology Equity Capital Markets and Managing Director, Piper Sandler

Brent Bracelin here, Piper Sandler. Good seeing you guys. I wanted to stick with this AI trend, with the topic here around timing and then the consumption opportunity. On the timing side, maybe I'll ask a question a little differently for you, Michael. Consensus is modeling a trough in your growth rate at about 11% in Q2, and then has it picking up in the second half of the year. How much of that is just tied to year-over-year compares, or is it concurrent with the assumption that you'd start to see these experiments in AI go into production and start contributing to the growth profile? That's one. And then two, Sahir, for you, as you think about a cloud workload versus an AI workload, how different is it?

Brandon at Nomic was mentioning an exponential increase in data volumes. Walk me through what you're seeing in the same experimentation area around the consumption differences for AI versus cloud.

Michael Gordon
CFO, MongoDB

Yeah. So I'm happy to start. As I said at the very beginning, you know, we're not gonna talk about the recent quarter, obviously, because it only closed two days ago. But in our March call, when we gave our guidance, I think we were really quite clear that our view was AI, hugely beneficial long-term trend, not gonna show up in the near-term numbers. So, you know, I'll let consensus speak for itself in terms of whatever consensus thinks, and I don't know what embedded assumptions they have, but at least our internal assumptions are, it's... You know, we're gonna be a long-term beneficiary, and it's a long-term tailwind, but it's not something that's gonna, you know, juice the numbers in the short term.

Sahir Azam
Chief Product Officer, MongoDB

Yeah, and I think it makes it particularly hard to kinda tease apart right now, 'cause a lot of those applications we mentioned are early. They're in POCs, they're in experimentation. So by definition, the consumption of the AI workloads, if you looked at them, is probably smaller than the production cloud workloads that have obviously had many years to mature and are serving live customers. So that kinda makes it hard to get an apples-to-apples comparison. But in a more abstract way, if you think about those workloads, you know, one of my VPs likes to say, "An AI application is still an application." So you still have operational, you know, transactions, you still have search, you still have, you know, stream processing, all these various needs. So that's kind of a baseline.

Then you now have the compute and storage needs of the vectorization and management of the AI data. There could be a world over time, as these apps mature, where an AI workload could, on a unit basis, be larger than an average application, but it's way too early to tell, because of this experimentation-versus-production dynamic between the two.

Mike Cikos
Senior Analyst, Equity Research, Needham & Company

Hey, guys, you have Mike Cikos with Needham over here. A question for you on the strategic accounts. I know that you called out the 2x upfront investment and the 7x faster growth. I'm trying to get a better sense of how you guys are measuring success on this going forward, right? The thought process being, there's the potential that you cherry-picked the strongest accounts first, that those are the ones that you went after, right? Versus, I guess, fine-tuning the learnings that you continue to take on. So is that 2x and 7x expected to hold, or should that deteriorate over time? How are you thinking about that?

Michael Gordon
CFO, MongoDB

Yeah, so a couple different thoughts. So, in general, because of the attractive setup that we just walked through, if you think about, you know, if you were a fly on the wall in a budgeting conversation, right, the first thing you should want to do is fund as many of those as you can, right? And, and that's sort of where you start. But back to my earlier comments, just because we called them strategic accounts isn't what actually makes them successful, right? It's sort of all the other things that go around it. And so, we tend to be very, you know, returns-oriented, and disciplined. And so I don't think we would suddenly just, you know, throw a whole bunch of people on the program on the hopes that it would work out.

And so what we've done is each year, as we've grown the program, we've said, "Okay, what's the next set of accounts that's ready?" with the goal of everything that we're doing being to have more accounts be ready, right, and be kinda primed for that investment, if you will. And so I wouldn't overly fixate on sort of the 2x and the 7x, as if those are immutable and we'll always do those. Theoretically, if you had confidence that you could double the pool, and it would only be 2x and, you know, 6x, you'd probably still do that, right? But, like, what we're doing is tackling, or taking advantage of, as many as we can, and trying to get more ready.

Dev Ittycheria
CEO, MongoDB

If I can just add one other comment: one of the takeaways for us is that, unlike other businesses, you know, who use sales and marketing tools to do a lot of pipeline generation, one of the things that we learned through this effort is that a big part of pipeline generation for us is developer education. Because a lot of people... It's amazing to us, even in some of the accounts that Michael showed, where we're doing meaningful revenue, there are still developers in those accounts who just don't know MongoDB that well. And consequently, you know, when you don't know MongoDB that well, you may not choose MongoDB to address the next new workload you're being asked to focus on.

And so developer education is, in some ways, pipeline generation for us, because once developers realize how easy it is to use MongoDB to solve a particular, say, you know, content generation or e-commerce or payments application, it just unlocks so much opportunity. So, we are gonna be, over time, feathering in more technical resources relative to sales resources in these larger accounts, maybe not at the ratios of these strategic accounts, to really help with that developer education and awareness.

Much like we have this conference here, where we have a lot of technical people and customers come and learn about products and all that, and the reason we're doing this in 23 locations around the world, and that doesn't include all the things we do inside accounts, where we have hacker days and developer days, it's all about that developer education and enablement, because that's where the unlock comes in for us.

Michael Gordon
CFO, MongoDB

Okay, it looks like we have time for at least one more. Jake, do we have time for one more, or is this the last one? 'Cause I know we're at time, but go ahead, Jason, please.

Jason Ader
Research Analyst, William Blair

Yeah. Thanks, Jason Ader with William Blair. Thanks for a very informative session, guys. I have one clarification and one question. The clarification is: is there only one database standard in a strategic account, or does there tend to be two, three, four, five?

Dev Ittycheria
CEO, MongoDB

It really depends on the account, because depending on the large account, they may have a modern standard and a kind of legacy standard, just because there are some apps that they know they're not necessarily gonna move, or if they make any additional incremental investments in legacy platforms, they'll stay on one standard. So typically, depending on the account, we kind of see a modern standard and a legacy standard, and then an exception process for everything else.

Jason Ader
Research Analyst, William Blair

Okay. And then the question is, just on the AI subject, it feels like over the next three to five years, there's three main opportunities for you with AI, and, you know, tell me if I'm wrong: vector search consumption, leveraging GenAI for Relational Migrator, and then just this idea you referenced that more code being generated is gonna generate more apps. How would you rank order those three opportunities, let's call it on a three-year timeframe, in terms of the impact on your business?

Dev Ittycheria
CEO, MongoDB

You want to?

Michael Gordon
CFO, MongoDB

That's a tough question. I would say, certainly, vector database adoption is one of the soonest things we're seeing. So, you know, the big question mark there is just how long it'll take for those applications to become real production applications that are successful, and that's on our customers, actually, not us. But, you know, as Dev alluded to earlier, our strategy there is to cast as wide a net as possible. So become the standard, adopted vector database if it's a large enterprise, or work through all our startup programs and events like this one to reach the broad-based, bottoms-up kind of startup community. I'd probably say that's the most near-ish term thing, but it's dependent on the apps.

App modernization, obviously, those are bigger, more established applications that carry higher dollars on a per-workload basis, but this is very much the early stages of us applying AI to that problem set. So it's encouraging, but it's not just having some great pilots; it's how does that turn into product? How does that turn into a repeatable process? How do we enable the skills of our services organization to go beyond a couple of pilots and actually get this to a repeatable motion? That naturally takes some time. If I had to guess, the compounding of more software being generated on a global scale is probably the furthest out, but perhaps in time may have the biggest long-term impact. But, you know, this is just speculation.

I think it's really hard to put a precise order on them.

Dev Ittycheria
CEO, MongoDB

Yeah, long term, I'm pretty bullish about the throughput of development teams. The point I would make is that if you look at what happened in the industrial era, how people made cars, how people created clothing, it was very much a bespoke process, right? And then the Industrial Revolution standardized that and created the whole manufacturing process to do things at scale. If you look at software development, it is still very much a bespoke, developer-by-developer kind of process. I think what AI will do long term is really allow people to produce code at a massive scale. Again, I'm talking longer term, not necessarily the short term. And that, I think, is going to fundamentally change the amount of software that's produced in our industry.

Michael Gordon
CFO, MongoDB

Maybe just simplistically, I would say they're all tailwinds, they're all significant, and even though we're, you know, 6.5 years past our IPO, they would all be excellent parts of a slide titled "Multiple vectors for future growth," if you were putting your IPO roadshow deck together today. And so, with that, thank you for joining us. Thanks for spending the afternoon with us, and I hope it was valuable.

Dev Ittycheria
CEO, MongoDB

Thank you.

Michael Gordon
CFO, MongoDB

Thank you.
