MongoDB, Inc. (MDB)

Investor Update

Jun 22, 2023

Michael Gordon
CFO, MongoDB

Thank you for joining us. For those of you in the room, thank you for being here; hope you enjoyed the keynotes. For those of you on the live stream, welcome; hope you enjoyed the keynotes as well. It's great to see a packed room here, and I get to serve in one of my favorite roles, MC. Our agenda today, we've got a lot that we're cramming into three hours: the new announcements, but also, really importantly, how our customers are using our products.

We know a lot of you cover lots of companies, and we're just one of the many companies, but what we do is, to us, really interesting, but also sometimes inaccessible from afar. We wanna make sure that we really help you understand what it is, how our approach is different, and how we're helping customers in the market. Here's the agenda. We're gonna run through a product overview, a number of the announcements that we covered this morning in the various keynotes. Sahir and Andrew will talk us through that. They will also then, because AI runs through so many different things, we've got a separate section on AI, just to sort of help put that all into full context for you.

We're gonna do a customer panel, which I think is one of the great aspects of this event, is hearing directly from customers, how they're using MongoDB, what challenges they're solving, how it's impacting their businesses. Dave's gonna host a chat with AWS, one of our many partners, but an important one, and then I'll provide a broader update on the business. What we'll do is we'll give a little bit of visibility into some of the Atlas dynamics that we've been talking about, and also some of our customer breakdowns to try and shed a little bit more light in the spirit of the theme of helping understand our customers, and how they're using MongoDB.

Pro tip, this is not some sort of pivot where I'm gonna announce some model shift, or, you know, unveil some new long-term target model. For those of you who've been around for a while and followed us, we've been pretty consistent in our messaging and our execution, and nothing's changing there, so, don't wanna steal any of my own thunder, but, probably worth teeing that up for all of you. Look forward to the conversation and the session that we'll have over the course of the day today. The safe harbor statement, I will now pause for several minutes and let you read all this.

Here we have the standard language that we have and really look forward to being able to get into the conversation overall. We'll start off on the product side. If you think about the market, we are pursuing one of the largest markets and one of the fastest-growing markets in all of software. The reason for this is companies are increasingly needing to rely on their technology, specifically their internally built technology, to drive competitive advantage, right? That's where competitive advantage comes from. You can't get competitive advantage by buying off-the-shelf software. You need to build it.

Therefore, when you hear phrases like, "Software is eating the world," or every company becoming a technology company, or you hear about big banks talking about having more developers than large West Coast technology firms, that's really at the core of this. It's about driving competitive advantage. Every application that people are building has a database at its core. That's why this market is so incredibly large, with $81 billion spent in 2023, estimated per IDC, growing to $136 billion over the next four years. Databases, certainly, have been around for a very long time, and I tend to think of large, mature markets as growing in line with GDP.

Part of the reason why this market is growing in double digits is because of that strategic nature: every application at its core has a database, and we're offering a developer data platform. Our market is different. It's not typical. Many of you are experts at pattern recognition, and I think it's important to call out here that we don't quite fit the pattern that you're used to seeing. Most markets are fairly monolithic, and by monolithic, I mean that when you think about the application stack, the basis of competition, the unit of competition, is the customer. You win all of the customer or none of the customer.

If you're providing HR software, if you're providing the ERP system, if you're providing the CRM, you tend to have the whole company running on your platform. If you're a challenger to the incumbent, you're locked out of that account, right? If you're running an HR system at a big telco or retail or whatever it is, you know, one team is not on a different HR system than another team. They all run on the same thing. If you're, you know, Workday and they're running PeopleSoft, you're sort of locked out until eventually you hopefully break in and win the whole account. That is not the market that we operate in. Our competition is not binary like that. We're competing for workloads.

As I mentioned, every application has its own database, and the result of that is that our unit of competition itself is the workload. You've heard us talk about this sort of land and expand model that we have. What this means is winning an initial workload, yes, hopefully, that workload is successful, but the way that we really grow within the account is winning additional workloads. Yes, winning that next subsequent workload is faster, more successful, ultimately economically more efficient, but you still need a new effort. You don't just sit back, you know, kick your feet up and have the workloads flow in. Obviously, you can win them, in bigger and bigger chunks, as you become the standard within the account, et cetera, et cetera.

I think it's just critical to underscore the fact that the basis of our competition is the workload, and that's how we gain share within an account. When you think about an illustrative customer journey, here you can see a workload, that first workload. Every new customer relationship begins with an initial workload, and that initial workload grows. The first couple of years tend to be when there's the fastest growth, but the workload still continues to grow for many, many years. Really, the opportunity is that once it's onboarded, that workload will tend to have its own growth behaviors, right? Those tend to be driven by a number of application-specific factors, as we've talked about, could be affected by macroeconomic conditions, and ultimately, it's the underlying read, write, query activity in the application.

Over time, the way you grow within the account is adding new workloads, right, and making them successful and continue to penetrate the account further. I think that's important setup and context for a lot of the things that we're talking about here today. Separate from the market, being incredibly large already, it will continue to grow because there will be an explosion in the number of new applications that are developed over time. As I mentioned, this sort of critical aspect of competitive advantage is people are looking to innovate more quickly. You heard Dave mention that in the keynote. That's one of the key drivers. More and more applications will be built, and that will continue to be a tailwind for us and our business.

Here, you can see the estimate from Microsoft that more applications will be built in the next five years than in the last 40 years combined, just to give you one frame of reference. Secondly, the developer really is at the center of this. The developer is the one who's driving this innovation, developer productivity is critical, and developers are the real decision-makers in technology. Lastly, in addition to developers and the need for all these applications to happen, there will be other technological developments whereby more and more applications will be built. There'll be a bit of a democratization that will further accelerate the building of applications, all of which should benefit us. Our whole company is oriented around winning more workloads.

This is not just a go-to-market orientation, although importantly, it is a go-to-market orientation, and that's a change or that's a shift from winning a big multi-year contract on an annual, you know, subscription license, or maybe you do a three-year deal, and you kinda sit back. This is much more focused on winning more workloads within the account. Again, it's not just a go-to-market motion, this is also a focus across the whole company. When you think about the announcements today and what we, you know, communicated, it's really geared around winning more workloads. We don't have time to go into all the announcements in great depth, 'cause there were so many of them, but we're gonna focus on a few.

We're gonna focus on the ones that are highlighted here, app modernization, search, Queryable Encryption, Vector Search, and then streams, as some of the announcements that we're most interested in digging into with all of you. As I mentioned, we'll have Q&A at the end. Now I'm gonna turn the stage over to Sahir, our Chief Product Officer, to start with app modernization.

Sahir Azam
Chief Product Officer, MongoDB

Thank you, Michael. Hello, everyone. Good to see some familiar faces. Hopefully, you got to catch some of the keynote and get a glimpse of some of this, but we want to put a bit more context behind the investments we've made in the last year and the expansion of the platform. To start, we wanted to focus on modernizing the existing legacy estate that sits in and captures billions of dollars of revenue in the database industry. As you've heard on stage from Dave, our foundational differentiation and technical advantage starts with the document model, the data model and architecture that we built the company and technology on from the beginning. This is so powerful for three key reasons. One, document models are very natural for developers to be able to build and, importantly, iterate on and improve applications over time.

A well-written app using MongoDB is also much more performant. If you think about relational databases, they were optimized for a time in which hardware, with its disks, was really expensive and people were relatively cheap. The equation's completely flipped today. Scalable hardware is available to anyone on a utility basis at their fingertips, but hiring developers and making them productive is the challenge that most organizations have. There's actually a performance benefit to the data model and how it persists everything on disk. That makes things much more scalable. We have a distributed architecture that can scale applications horizontally in an efficient way. It actually goes further than this. You know, another piece of context around this is that the idea of rows and tables in a relational database doesn't naturally map to the way application developers think. They think about objects.

They think about managing a purchase or managing products in a catalog, or customers, or people as part of their application. These are all objects. That became the driver behind new programming languages: object-oriented programming arose over the last 20 years. One of the most powerful things about MongoDB is that it's a natural way to map object-oriented programming directly into the data model. It's a perfect fit without a whole lot of translation or ORM technology to convert complex rows and tables into something that can work with the application. It's that intuitive experience that really drives this. Our principal competition in the market is still legacy relational technology, right? That shows up in a couple of forms.
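To make that mapping concrete, here is a toy comparison in plain Python (not MongoDB's API; the order and customer fields are invented for illustration) of the same application object held as a single document versus spread across normalized tables:

```python
# Toy illustration: one application "order" as a document vs. normalized rows.

order_document = {                      # how a developer thinks: one object
    "order_id": 1001,
    "customer": {"name": "Ada", "email": "ada@example.com"},
    "items": [
        {"sku": "A-1", "qty": 2, "price": 9.99},
        {"sku": "B-7", "qty": 1, "price": 24.50},
    ],
}

# The relational equivalent spreads the same object across three tables,
# stitched back together at query time with joins (or an ORM layer).
orders_table = [{"order_id": 1001, "customer_id": 1}]
customers_table = [{"customer_id": 1, "name": "Ada", "email": "ada@example.com"}]
order_items_table = [
    {"order_id": 1001, "sku": "A-1", "qty": 2, "price": 9.99},
    {"order_id": 1001, "sku": "B-7", "qty": 1, "price": 24.50},
]

def order_total(doc):
    """The total is a local computation on the document -- no joins needed."""
    return sum(item["qty"] * item["price"] for item in doc["items"])

print(order_total(order_document))
```

With the document form, reading or updating the order touches one object; the relational form requires joining three tables back together, which is the translation work an ORM layer normally hides.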

There are certainly the proprietary large vendors, the, you know, the Oracles or the Db2s or the SQL Servers of the world, but then there are also the open-source alternatives. Fundamentally, the newer open-source alternatives, or maybe not so new in the case of things like Postgres, have the same fundamental limitations. It's still a 40-year-old architecture and data model that was built for a time of accounting applications on mainframes. Now, to be more specific, we see our customers struggling to innovate on relational databases for a few key reasons. One, as I mentioned, they're not optimized for the modern way developers work. They impose constraints on how fast you can iterate on an application.

The idea of having rigid rows and tables with complex relationships: if you ever walk around some of your IT teams' floors, where the developers work, you'll see giant entity-relationship diagrams, and it's complicated even to understand them, let alone change them. This overall rigidity is what makes it hard for people to move fast, and in an era when everyone wants to innovate, this becomes a real hindrance. So what do customers do? Modern application requirements are constantly growing, so they have to do something. They end up surrounding their relational database with a bunch of band-aids, meaning specialized niche solutions for various use cases. That could be, for example, key-value stores, caching systems, it could be a search engine, as I mentioned on stage, but each bolts on more complexity.

It could be analytic systems and mobile and edge synchronization services. These are all components that get added to a stack. When I meet with customers, it's not like there's just one complex diagram like this. Every individual application team has their own different, cumbersome version of this. There's no reusability, no skills development, and no repeatability inside of that organization. This complexity kills. Why are people applying these band-aid solutions? They're doing so for two reasons. On one hand, relational databases are hard to get off of. You know, none of these band-aid solutions are able to fundamentally replace the transactional guarantees and capabilities of a relational database, so they end up being an adjunct, they end up being an add-on. Conversely, a relational database can't scale or add the different data models and manipulation types that all these different specialized use cases can offer.

On both hands, you can't replace one with the other. It just creates more sprawl. MongoDB is very unique because seven, eight years ago, we made a very intentional decision to focus on bringing forward a lot of the things people expect from traditional relational databases, strong consistency guarantees, schema enforcement, enterprise security, all the mission-critical enterprise features, to a modern distributed document-oriented database. That places us in a bit of an interesting spot in the market, because we're so associated with the non-relational or NoSQL segment, but in fact, we're quite different from all of those because we fundamentally can replace a relational database system. Getting off of these legacy systems is clearly hard, and even this is a very simplified description of what it takes, but let's just get through it very quickly.

The first is you need to update the schema. Modeling information in a way that's optimized for rows and tables in a relational database is very different than modeling data in something like MongoDB, which is a document-oriented database. The way the relationships between objects work is much simpler, but it's not the same. You need to, of course, rewrite your code. The application business logic, the custom stuff that drives and innovates and differentiates the software you're building, needs to be refactored at the very least, but oftentimes needs to be completely rewritten for a modern programming language or a new architecture. Of course, you actually have to migrate the data, and there are different ways to do this that we'll get into, but this, at a very high level, is the process that happens for a single app migration.
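As a hedged sketch of the schema step only, here is what folding normalized rows into embedded documents can look like in plain Python (table and field names are hypothetical; a real migration involves many more modeling decisions):

```python
# Toy schema migration: fold normalized rows (customers + addresses)
# into one document per customer with an embedded address array.

customers = [
    {"id": 1, "name": "Ada"},
    {"id": 2, "name": "Grace"},
]
addresses = [
    {"customer_id": 1, "city": "London", "kind": "home"},
    {"customer_id": 1, "city": "Paris", "kind": "work"},
    {"customer_id": 2, "city": "Arlington", "kind": "home"},
]

def to_documents(customers, addresses):
    """One document per customer, with that customer's addresses embedded."""
    by_customer = {}
    for addr in addresses:
        by_customer.setdefault(addr["customer_id"], []).append(
            {"city": addr["city"], "kind": addr["kind"]}
        )
    return [
        {"_id": c["id"], "name": c["name"],
         "addresses": by_customer.get(c["id"], [])}
        for c in customers
    ]

docs = to_documents(customers, addresses)
print(docs[0])  # Ada's document, with both addresses embedded
```

The foreign-key relationship becomes containment, which is why the resulting relationships are simpler but the mapping itself still has to be designed.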

It does require a holistic approach. It's not just technology. We'll certainly discuss that, but it also requires people. It requires expertise in doing this safely, doing this repeatedly, and it requires partnerships to make this whole process work end to end. Starting with technology, today, we announced the general availability of a technology we call Relational Migrator. We announced the preview of this last year. We've been working with dozens of companies, with our pre-sales and field teams on testing it and actually now running live migrations of real production applications, so we're really excited to see what we've been able to accomplish, and now we're making this downloadable for everyone. It focuses on three key areas. First, we've built a UI that's beautiful and allows you to create a canvas for designing MongoDB schema.

It inspects the relational schema and allows you to map it and create a document-oriented schema in MongoDB, all with drag and drop. Next, it handles the migration itself, so it actually moves data from the traditional legacy relational system into MongoDB. Finally, it helps generate sample code. Now that we understand the schema, we can generate sample MongoDB queries that accelerate the development process. We support a broad array of different database services as a source: Oracle, SQL Server, MySQL, even end-of-life databases like Sybase, or cloud databases from our cloud partners. The destination can be MongoDB Atlas, or it can be a self-managed MongoDB Enterprise deployment on-premises in a customer data center, where there's still quite a bit of modernization happening.

On the migration itself, we support two models, a one-time snapshot, think of that as a bulk move of all the data at once, or a continuous change capture system, where you have a running application, the changes that are in the source database are automatically streamed to MongoDB, and then the customer can flip over the application to connect to the new database when they're ready to avoid downtime as much as possible. We've gotten a lot of great feedback around this. I'm not gonna read the quotes here, but the themes that are coming out are the stability and performance of moving the data is definitely an area customers are really impressed by.
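The two migration modes just described can be sketched roughly like this in plain Python (the change-event shape is invented for illustration; Relational Migrator's actual mechanics differ):

```python
# Hedged sketch of the two modes: a one-time snapshot, then a
# change-capture loop that replays source changes into the target
# until the application is cut over.

def snapshot(source_rows):
    """Bulk move: target starts as an exact copy of the source."""
    return {row["id"]: dict(row) for row in source_rows}

def apply_change(target, event):
    """Replay one captured change event onto the target store."""
    if event["op"] in ("insert", "update"):
        target[event["row"]["id"]] = dict(event["row"])
    elif event["op"] == "delete":
        target.pop(event["row"]["id"], None)

source = [{"id": 1, "status": "new"}, {"id": 2, "status": "new"}]
target = snapshot(source)

# Changes keep arriving while the application still runs on the source.
for event in [
    {"op": "update", "row": {"id": 1, "status": "shipped"}},
    {"op": "insert", "row": {"id": 3, "status": "new"}},
    {"op": "delete", "row": {"id": 2}},
]:
    apply_change(target, event)

print(sorted(target))  # target now tracks the source, ready for cutover
```

Because the target continuously tracks the source, the application can flip its connection string at a quiet moment, which is how downtime is minimized.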

The other is the usability, so that design canvas of being able to visualize your existing relational schema, turn that into documents, see what that new model looks like, helps developers rationalize how to think about application code changes going forward. Let's move on to people. We're also investing heavily in skills across the company. One, and most importantly, perhaps, our professional services team. We have centers of excellence and skills inside of our services teams that are built on analyzing the right applications or components of applications to modernize to MongoDB, and the process to create a center of excellence at the customer site to do this repeatedly, not just for one application, but for many applications. We're expanding our MongoDB University education programs.

Today, they've been really focused on how to build with MongoDB, how to use the technology, the query language, or new products, but we're adding new courses that are aimed at existing SQL developers. We're starting with a specific course, but then we'll have an entire learning path that we can walk people through to get sharper and sharper in their usage of MongoDB. Last, and certainly not least, one of the things we've really scaled up in the last year is our developer relations days. This is where we take a dev advocate, we fly them around the world to one of our large customers. They gather together hundreds of developers, for example.

We do a training on MongoDB and the differences between relational and Mongo and the benefits. Then we break off with candidate applications and get hands-on and actually help them prototype so they can understand how that app can translate to MongoDB. It really just flips the switch in terms of the buy-in of how we actually can do this for applications that people would never have thought could be moved off of a relational system. Last, but certainly not least, our partnerships. Many organizations, especially in the enterprise, have deep existing relationships with the global SIs, whether it's Accenture, Infosys, et cetera. We've been working with them for many years on skilling up hundreds of consultants on MongoDB.

They have large app modernization programs, so we plug right into their overall go-to-market there, and certainly, this has helped us with some of the largest, most regulated companies that we work with. We also work with thousands and thousands of smaller companies or even teams inside of large organizations that wanna move a bit faster. We saw a need to create an ecosystem of more boutique SI partners. These are smaller organizations that can drop in on an application, get hands-on, and accelerate that process. We call this Jumpstart. We actually have an ownership stake in many of these companies, and you'll continue to see this expand as we go forward. We've seen great results in just a couple of years. Now, we'll continue to invest in this across all facets, technology, people, partnerships.

This is a long journey. We will tell you more about some specifics we're doing on the roadmap in the AI section later that Andrew and I are covering. It's important to note that we don't think everything will ever be fully automated. We're just gonna constantly move the bar forward and make it easier, which is really important for us because it expands the aperture of applications that customers will consider modernizing, because the cost and effort now make sense. With that being said, I'll hand it over to Andrew to cover Atlas Search and all the new enhancements there.

Andrew Davidson
Senior Vice President of Product, MongoDB

Thank you, Sahir. Folks, great to be here today. I'm Andrew Davidson, SVP of product at MongoDB, and been with the company over 10 years. I wanted to do a little bit of a level set on our search strategy. You hear us talk a lot about search, and why is search so important? Well, I want you to think back to those applications you might have used back in the nineties, early two thousands. If you think about those software applications, they kinda made it feel like you were just using a database directly. You would write data into some kind of form, and you would write it to the database, and it'd come back, and it was kludgy and slow, and frankly, you hated using that software.

When you think about the software that you love using today, it's interacting with you throughout your life, all day long. You're doing it, many of you, right now. You love this software because it's totally been revolutionized by user experience design, and one of the key ways we've been able to revolutionize user experiences is powered by search. Search enables these natural language experiences. Search allows end users to describe what they're looking for and get a recommendation, or get what they think they're looking for before they even realize what they were looking for, or to do autocomplete, and so many other things, let alone the classic case of looking for an item in an e-commerce catalog, or many other use cases.

The challenge has been that most application teams haven't had the sophistication to layer in an entirely separate search engine alongside their operational data store. Let me explain what I mean by that. Every application at its heart, as Michael was saying, has an operational data store like MongoDB. This is powering the core system of record for that application. If you're gonna try and introduce some of these search capabilities into your software, you're gonna have to introduce a synchronization process, which needs to be governed. You're gonna be moving data out of a database like MongoDB. Traditionally, you would have to separately move that data into a search engine, which means you have to stand up, operationalize, manage, and maintain an entirely new technology, all of which requires significant governance and an ongoing burden.

Your software application, you're now going to be managing multiple APIs, multiple connection dynamics to these different engines. All of this leads to complexity and essentially represents an ongoing tax. This is kind of an example of those band-aids that folks can layer in, as Sahir was mentioning. We saw there's just such a huge opportunity here to ask ourselves, since this is something that is so powerful, it shouldn't be reserved for those software applications being built by the most sophisticated teams in the world. If we can make this something that every software development team can build with, then we massively unlock a lot of value for them.

We make them more agile, able to build software faster, and we make it so that far more teams than ever before will be able to build the power of search into applications, which means all of us will benefit. That's been our philosophy with MongoDB Atlas and Atlas Search. We've fully managed on the back end all of the synchronization, the heavy lifting, all the complexity of building these rich search indexes. These are specialized data structures that make those capabilities possible. We make it so that a developer building software with MongoDB doesn't have to have all this expert specialty knowledge of how to really think about search. They can be a MongoDB-oriented developer and simply create a new type of index, and through the same elegantly integrated query experience, run those search queries adjacent to the rest of their application's queries.
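To make the "specialized data structures" concrete, here is a toy inverted index in plain Python. It is nothing like a production Lucene-grade index, but it shows why keyword lookups are cheap once the index exists, and why that index has to be kept in sync with the operational data:

```python
# Toy inverted index: the core data structure behind keyword search.
from collections import defaultdict

docs = {
    1: "red running shoes",
    2: "blue running jacket",
    3: "red rain jacket",
}

index = defaultdict(set)            # term -> set of matching document ids
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def search(*terms):
    """Documents containing every term: set intersection over the index."""
    results = None
    for term in terms:
        ids = index.get(term, set())
        results = ids if results is None else results & ids
    return sorted(results or [])

print(search("red"))            # documents 1 and 3
print(search("red", "jacket"))  # only document 3
```

Any edit to `docs` makes the index stale until it is rebuilt or incrementally updated, which is exactly the synchronization burden that an integrated, managed search index removes from the application team.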

This is an extremely powerful concept, and we've seen great resonance with our customer base on this. Tons of builders organically just adopting and layering in this capability. To quote a few software developers on the platform today, not having to deal with the replication and re-indexing just makes their lives so much easier, not having to manage additional infrastructure. All of it becomes so much more efficient and easy to use. It's kind of obvious, frankly. What we've seen is such great validation and adoption, so many people building with this capability for the first time every month, that more and more large-scale, mission-critical search use cases are coming onto the platform. It's just inevitable. They're growing up with us.

We thought to ourselves, "What do we need to do to think for the future, to be ready for this next level of scale, next level of sophistication?" We're very excited to announce today the private preview of our dedicated Search Nodes. This is an entirely new foundation that gives us the ability to independently scale the search portion of a workload with optimized hardware, better availability, in other words, uptime characteristics, and will set the foundation for much larger scale use cases of the future. The key point here to keep in mind is that as a software developer, nothing changes. You don't have to think differently about your queries or your index strategy.

This is all a back-end detail that Atlas brings to bear to make it so that you can go much bigger, but your software, your code, all of it is completely consistent, in line with our vision of continuing to be elegantly integrated into our developer data platform. If I jump into another topic, you know, if search is about enriching every kind of software in every kind of industry, in every kind of vertical, all over the world, because that's how ubiquitous search is, well, we also need to think about some of the most elite, mission-critical, special characteristic workloads and industries, and think about their special requirements. We have to make our investments there as well. That brings us to a very different topic, which is Queryable Encryption.

The topic of data encryption is nuanced, it's complex, there's a huge amount of academic research here, but it's only a partially solved problem. If we talk about data in transit or encryption over the wire, there's standards here that are well established. TLS network encryption, which powers HTTPS, which powers that little lock icon in your web browser that tells you, "I have an encrypted connection to my website." Great. Separately, for data at rest, we also have a pretty well-solved problem. If someone were to, you know, steal the hard drive out of the back end of a server, without the encryption key, all they're gonna be able to see is ciphertext from that drive, and that's extremely important. It turns out that encryption in use, which...

Sorry, data in use, which is basically the data in memory on a server, for example, where a database is actually actively querying that data, is not a solved problem when it comes to encryption. This has, frankly, some threat vectors associated with it. There are challenges of the insider threat from the service provider, the kind of who's-watching-the-watcher type of scenario, which requires trust of the service provider, and there are always levels of trust, but it can never be perfect. Then there's separately this concept of CPU side-channel attacks, which you might have seen in the news a couple of years ago. These occasionally rear their heads and essentially allow one virtual machine to potentially access memory from another on the same physical machine, which is a huge problem.

There have been a lot of people looking at this for a long time, but it's tough. Up to this point, essentially, you had to decide: you're either gonna have your data encrypted in use or have it be queryable. You couldn't have both. This is essentially one of the hardest problems in computer science. It's at the intersection of advanced cryptography and database engineering research. There are many people in academia and research labs who focus on this problem full-time. You hear about a lot of different ways of tackling this problem, and traditionally, no one had seen a path until recently.

If we think about a simple two-by-two, where on one dimension, we ask ourselves, "Do we have expressive queries?" On the other, we ask, "Is encryption in use?" Most databases offer at least some type of expressive queries without encryption in use. Some databases, including MongoDB, offer encryption in use, but without expressive query ability. MongoDB offers this in the form of our Client-Side Field Level Encryption, which was launched in, I think, 2019. We realized that research in academia was getting to the point where it was ready to enter the industry.

We actually identified a small team and acquired them out of Brown University, researchers there, about 2.5 years ago, and we spent the subsequent time bringing that capability into our platform, which has allowed us to become the first leader in industry, showing what can be done with Queryable Encryption, which brings expressive queries to encryption in use. We're very excited to announce the general availability of this Queryable Encryption capability in our upcoming 7.0 release. We made this announcement today. This is a really big step forward in the industry. Very specifically, this is targeting equality match with randomized encryption. This is a specific subset of the ultimate long-term roadmap for us to do more and more expressive queries, all while preserving this encrypted attribute over time. Why is this important?
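To illustrate the equality-match idea only, here is a deliberately simplified toy in Python. It is emphatically not MongoDB's construction: Queryable Encryption uses randomized encryption with structured indexes, whereas this sketch uses a deterministic keyed hash, which leaks equality patterns the real scheme is designed to avoid. All names and values are invented:

```python
# Toy equality-match-over-encrypted-data sketch: the server stores
# randomized ciphertext plus a keyed tag, and matches tags without
# ever seeing plaintext. NOT a secure or faithful model of MongoDB's
# Queryable Encryption -- purely to show the shape of the idea.
import hmac, hashlib, os

INDEX_KEY = b"client-side secret key"   # never leaves the client

def eq_tag(value: str) -> str:
    """Deterministic keyed tag that enables server-side equality match."""
    return hmac.new(INDEX_KEY, value.encode(), hashlib.sha256).hexdigest()

def encrypt_record(ssn: str) -> dict:
    # Ciphertext is randomized (fresh nonce), so the stored blob reveals
    # nothing on its own; only the tag is matchable.
    nonce = os.urandom(12)
    return {"ssn_ct": nonce.hex() + ":<ciphertext>", "ssn_tag": eq_tag(ssn)}

server_store = [encrypt_record(s) for s in ["123-45-6789", "987-65-4321"]]

# The client computes the tag; the server matches without plaintext access.
query = eq_tag("123-45-6789")
matches = [r for r in server_store if r["ssn_tag"] == query]
print(len(matches))  # exactly one record matches
```

The hard research problem is getting this kind of query capability while leaking even less than a deterministic tag does, which is what the structured-encryption work behind Queryable Encryption addresses.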

When we think about these industries that build critical software for all of us, that we all rely on, healthcare, financial services, government, and more, we want to ensure that the next generation of software applications built in these crucial regulated industries are built privacy optimized from the ground up, from day one. All of us benefit from this. We at MongoDB are focused on ensuring that these types of industries can get to that next level, in line with our broader ambitions to enable builders in all of the crucial industries in this world. With that, back to Sahir.

Sahir Azam
Chief Product Officer, MongoDB

Nice job, Andrew, for sure.

All right, let's talk about stream processing. I think this was one of the announcements that surprised people today, based on a couple of comments I heard in the hallways. I'm really excited to get into it. Streaming data, in some sense, is everywhere. A lot of the experiences that we are used to day-to-day in the software we interact with are powered by data in motion. You think about a hyper-personalized experience in an e-commerce site, or that perfect Instagram ad that tries to sell you something you don't actually want to buy, but end up buying anyway. All of that is fundamentally powered by streaming data.

That expands to the enterprise as well, whether it's manufacturing and IoT, where you're collecting massive amounts of information from a factory floor, or driving applications, like self-driving, where you need to reroute automatically based on current conditions and traffic. Of course, financial use cases, fraud detection, intrusion events for security, etc., are all powered by real-time data. In order to make this real-time data usable for an application, it's quite complex today. There are a few components that are necessary. One, you need a streaming transport technology. Here, Apache Kafka is the most prevalent, but there are also proprietary products on the major cloud providers and alternatives that are open source as well.

You need a stream processing layer, either something that's homegrown and built from scratch or an off-the-shelf solution that can actually query the data as it's flowing through this plumbing in this transport system. Of course, for the important data that you need to persist and maintain over time, that lands in an operational database, the heart of the application. Obviously, MongoDB is quite prevalent and popular in this, and we've really changed the database industry because of that intuitive document-based approach, our scalable architecture, the reach of Atlas, all those characteristics. We see an opportunity to do the same thing in the stream processing space. This is how we're doing it. First and foremost, existing solutions have some limitations. One, they're all fundamentally based on the same rigid schema model and SQL interfaces that are unnatural for developers to work with.

Same problem as the database space, happening again in the stream processing space. Introducing a dedicated stream processing layer adds, again, another component to that sprawling architecture that you might have seen Dave present earlier. It's more complexity, it's more cost, it's more operational maintenance, it's more to secure. It just adds to the mess. Of course, the developer experience is fragmented. A developer is not just working with one interface and API now; they're dealing with multiple drivers, which bloats their application, and they have to authenticate against multiple databases. It becomes more rigid, more brittle. Atlas Stream Processing focused on those three challenges in particular. First and foremost, of course, by centering it around the flexibility and intuitiveness of the document model. We brought forward all of those characteristics, our idiomatic drivers, our natural way of working with data, to stream processing.

We focused on continuous processing. Our query engine no longer just focuses on data persisted in MongoDB or static systems. It actually can query data in motion as it's flowing through something like Kafka. For the first time, it integrates, in an intuitive way, data in databases, data at rest, alongside data in motion, into one experience that can power an end-to-end application. We are very excited about this. The customers we've shown this to in our demos and preview were blown away at the sophistication and ease of use. It has that MongoDB experience that they expect. Net net, we're bringing that entire experience from the database market forward to the entire streaming ecosystem. You'll see out-of-the-box integrations that we're releasing in preview, and we'll continue to expand to more platform players over time.
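As a rough illustration of what a continuous query over data in motion does, here is a hedged, self-contained Python sketch of a tumbling-window aggregation, the kind of computation a stream processor runs before the important results land in the operational database. The event shape and window size are invented for the example; Atlas Stream Processing itself expresses this with aggregation pipeline stages, not this code.

```python
from collections import defaultdict

# Hypothetical sensor events flowing through a transport layer like Kafka.
events = [
    {"sensor": "a", "ts": 1,  "temp": 20.0},
    {"sensor": "a", "ts": 4,  "temp": 22.0},
    {"sensor": "a", "ts": 11, "temp": 30.0},
    {"sensor": "b", "ts": 12, "temp": 18.0},
]

def tumbling_avg(stream, window=10):
    """Average temp per (sensor, window) over an event stream.

    Tumbling windows partition time into fixed, non-overlapping buckets:
    window 0 covers ts 0-9, window 1 covers ts 10-19, and so on.
    """
    sums = defaultdict(lambda: [0.0, 0])
    for e in stream:
        key = (e["sensor"], e["ts"] // window)
        sums[key][0] += e["temp"]
        sums[key][1] += 1
    return {k: s / n for k, (s, n) in sums.items()}

print(tumbling_avg(events))
```

In a real deployment this runs continuously as events arrive, emitting a result each time a window closes, rather than over a finished list.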

We talked about a few different areas of expansion of the Atlas platform. You wanna take a break? You wanna keep going? We're gonna keep going. All right. Now we're gonna talk about everyone's favorite topic in the industry, AI, and we're gonna get into some specifics about how it affects MongoDB and what we're building and doing around it. There are really four key things that we've observed and thought about over the last year or two as this trend was really starting to take off. One, we fundamentally believe that AI will increase the volume and sophistication of applications being built. There's going to be more software developed, and that's gonna mean that there's gonna need to be more operational data and more databases backing them....

We believe some of the inherent benefits of MongoDB that we just talked about previously apply even more so to applications powered by AI and automation. There are a few key components we think are necessary for these modern applications, vector search being a foundational element. Last, we believe the app modernization journey from legacy relational systems to a modern platform like Mongo can be revolutionized with automation and AI. Let's dig into this. First, let's talk about the volume and sophistication of applications. Now, every time there's a platform shift in IT, the volume of applications and the sophistication of those applications increases by an order of magnitude. This goes all the way back to the 1970s.

You know, when you think about early mainframes in the 1960s and 1970s, there were probably tens of software suites, used by a subset of enterprises. Over the 1980s and the 1990s, client-server architectures and packaged software became something that all knowledge workers could use. We could use spreadsheets for the first time, word processing, etc., but it was still a lot of packaged software. Now we're talking about hundreds of software suites, maybe in the thousands. Then we started to see the simultaneous growth of mobility and smartphones and cloud powering the back end, and that really led to this massive explosion of all the interfaces and applications and software eating the world, all the stuff that we've become accustomed to over the last decade or two.

We believe that AI is that next platform shift, and therefore, as software becomes easier to create, more sophisticated, and more automated, there are going to need to be tools to power that which don't exist today. It starts with the development of the app itself. You've all seen things like GitHub Copilot that make it easy for a developer to be more efficient. I demoed the ability to convert SQL code to MongoDB code easily using generative AI. All of this just means that building software will be easier, and it means that the definition of a developer is actually changing and expanding over time. AI-driven applications are also more data-intensive. They process higher volumes of data.

The experience needs to be increasingly real time, and since users are spread across the world interacting with these applications now with natural language or audio and voice, the data needs to be distributed and low latency, not only in the cloud but also at the edge. Applications can get more powerful, you know. Now we have ways of interacting smartly with video, audio, geospatial data, poems, unstructured text, in the same way we've traditionally dealt with structured data. It democratizes the experience of software even further across the world. You no longer need to be in front of a laptop or an iPad or even have a smartphone. It'll be embedded in our daily lives through things like AR/VR or the ability to interact in natural language. All of this is driving that growth and expansion of software.

As you all know, as Michael mentioned, we're fortunate to sit in one of the most dynamic, fastest-growing, and largest markets in all of software. The numbers here that analysts project over the next four or five years, as I'm sure you all know, don't even account for the explosion of applications that are coming because of generative AI. We think this is a market-expanding opportunity as this platform shift occurs over the coming years. At a basic level, we see AI as a driver for these applications, and even if we did nothing at MongoDB, if we just continued to execute our core operational developer data platform vision, we would benefit from this trend. In fact, we're already starting to see that.

What does an operational data layer need to be able to support these sophisticated AI apps? There's a few key characteristics that are worth digging into. One, AI-powered apps need to deal with a versatile set of heterogeneous data. It's not just structured information, rows, and tables. It's not even just documents anymore. Applications need to represent the real world, be able to understand and process images and video and voice and text all together in a simple way. They need to be able to efficiently handle rapid iteration. One of our product VPs likes to say, "We're in the AOL era of this new AI-driven application movement," which means that there's going to be a lot of change, a lot of experimentation. You can't do that in a system that's rigid and with a fixed schema that doesn't allow teams to work quickly.

The scale of applications that will be developed over the coming decades will be much bigger than anything we've ever seen because it's no longer humans necessarily sitting in front of a laptop screen, clicking with a mouse. It's actually software driving software, machine-to-machine communications. Let me give an example. If I have to take a trip, you know, I go to Expedia, I buy my flight, I book a hotel on marriott.com, I, you know, figure out what tour I want to go to, restaurants I want to go to. You know, maybe it takes me an hour. That's a certain level of scale and processing required to make that happen in time. Now, AI agents can automate that whole thing in a matter of seconds.

That level of pressure that's gonna be put on infrastructure will only increase as automation and speed of processing increase in an automated way. Of course, data needs to be very distributed. You know, even at Apple's announcements recently, you can see how they're doing language models on the device for privacy. You can't just rely on data to be sitting in the cloud. You need this to be something that sits on a factory floor, in a data center, across multiple public clouds, and even on edge and mobile or embedded devices, as Dave mentioned earlier. What has MongoDB always been known for? We're known for, first and foremost, the flexibility of dealing with heterogeneous data. That's one of the main reasons people use MongoDB: you can easily model all these different shapes and types of data....

We support a broad array of workload types, so you can iterate fast without having to bring in new technologies, integrate them into your stack, learn them, skill up your teams. It's just right there at your fingertips. We're best in class at efficient performance at scale. You can build massive applications. Some of the largest applications in the world already run on MongoDB. We're a leader in global reach and multi-cloud, whether that's the 110 regions across the three cloud providers, the ability to go on-premises, or the ability to move data seamlessly between cloud providers. You all know AWS, Azure, and GCP are great partners. They're investing millions in new AI services up the stack. For an organization to leverage that, they need to be able to use their domain-specific or proprietary data with those services.

We're the only platform in the world that can seamlessly move data around the globe and across clouds in a matter of minutes by just clicking a button. We think we're well set up for this trend. All of these competitive advantages I mentioned become even more relevant, so we're excited. We're starting to see this already in our customer base. On the earnings call, Dave mentioned we're aware of at least 200 AI customers that came on in the last quarter. We have thousands with .ai domains or doing something with AI features on the platform today. We're really excited to see this coming in and changing the dynamics of what our funnel looks like every week.

One of the things Andrew does is scrub the list of new companies and investigate what these companies actually do, and some of the more interesting examples are worth talking about, maybe over coffee later. The first two points were really about the core platform, how our foundational technology is well suited for this wave. I'm gonna hand it over to Andrew. He's gonna get into more specifics around how we're expanding the platform with new capabilities to make it even stronger.

Andrew Davidson
Senior Vice President of Product, MongoDB

Thank you.

Sahir Azam
Chief Product Officer, MongoDB

Yeah.

Andrew Davidson
Senior Vice President of Product, MongoDB

Let's talk vector search and vectors. This is a really big topic, and there's a huge amount of buzz around this in the industry right now. I'd like to demystify it for you. I think it's a little ambiguous at first. What's going on is there's been a democratization of a technology that's actually been around for a long time. If you think about applications you may use, like Shazam, to find out what that song playing in the background is, or Google's reverse image search, maybe you took a picture of a flower and you wanna know what that flower is. Those applications are powered by the same technology, so it's been around for a long time. What's changed more recently?

In the last couple of years, really the last two years, I would say, there's been this proliferation of off-the-shelf machine learning models that you can use almost like a library as a developer, without having to have the sophistication of a machine learning or data science engineer. You don't have to go build the model, you can just use the model. What do you use these models for? What these models do is they allow you to summarize any kind of source data. This could be images, video, text, a sound bite. They allow you to summarize that source data with a numerical representation of its meaning. The numerical representation is called a vector, and this concept is called an embedding. What the heck do we really mean by this?

Well, let me give you an example. We're gonna go back to math class for a second here. I know you're all quantitative. This is gonna be great. Let's talk about a simplified two-dimensional vector space, in other words, a plane. Vectors are points on the plane. Here we have four vectors: pen, pencil, notepad, and book. If you think about it, the computer doesn't know what a pen or a pencil is. We know, but it doesn't know. It doesn't know what a notepad or a book is.

If we have a machine learning model that can understand the meaning of what pen and pencil are and map that into a numeric vector space that puts them closer together, then the computer can understand that pen and pencil are more related to each other, just like notepad and book are more related to each other. That's a powerful concept because it means I can now start doing these nearest neighbor style results.

I could say, "I think I have a pen here, but tell me what other things might be like a pen that are available to me?" If we generalize this concept and imagine that we could summarize any kind of source data with a variety of different kinds of machine learning models that can create these vector embeddings, then I can do all kinds of cool use cases that I'll talk about in a minute. Now, it's worth just reiterating that these vectors are really optimized for how computers do calculations. Computers are good at doing these massively parallel vector distance calculations, and new indexing technologies actually make it so you can very efficiently do some of this in a database context. Let's talk about some of the use cases.
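The pen/pencil/notepad/book example above can be sketched directly. The coordinates below are invented for illustration (a real embedding model would assign them); the nearest-neighbor lookup is the point.

```python
import math

# The two-dimensional toy space from the talk. An embedding model maps
# related concepts to nearby points; these coordinates are made up.
vectors = {
    "pen":     (1.0, 1.0),
    "pencil":  (1.1, 0.9),
    "notepad": (5.0, 5.2),
    "book":    (5.1, 4.8),
}

def nearest(word):
    """Nearest-neighbor lookup: the closest other vector in the space."""
    others = {w: v for w, v in vectors.items() if w != word}
    return min(others, key=lambda w: math.dist(vectors[word], others[w]))

print(nearest("pen"))      # pencil
print(nearest("notepad"))  # book
```

At production scale, approximate nearest-neighbor indexes replace this brute-force scan, which is the "new indexing technologies" point above.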

There's the classic semantic search use case, which is essentially an expansion of traditional search, where you're encapsulating meaning. For example, if you're searching, I don't know, a corpus of essays, and you want to describe the meaning of what you're looking for, it's not necessarily gonna be a keyword-based hit. You want it to be an intelligent enough engine to find what you're looking for. You can generalize this beyond text into images like we were talking about, into video and sound bites and more. Now, I mentioned that this whole thing has become really interesting in the last two years because you can now take advantage of off-the-shelf machine learning models.

Well, in the last six months, this whole thing has gone into overdrive, and that's because vector search is extremely powerful in connection with generative AI applications built on top of large language models and other generative models. This is a different kind of model than the type of model I was talking about before, if that makes sense. In the last six months, with the rise of ChatGPT, a bunch of new standards for building these generative applications, which I'll walk you through in a moment, take advantage of vector search, and you can use them to do expert systems, question-and-answer experiences. You can do conversational support, you can do personalization, et cetera, like never before. Let me walk you through a sort of simplified diagram of how some of the most popular frameworks for building generative AI-enriched applications look today.

On the right side, we have the application interface. This is where our users will interact with our offering. Maybe this is a simple expert system style, almost like a chatbot that's gonna bring to bear our unique knowledge base. While on the left side, we have the domain-specific knowledge or data that we have available. Maybe it's the data from our knowledge base that has information that we uniquely have and can make useful to the person who's asking for it. The way you build one of these applications is you pull data from your knowledge base, in this case, which could very likely be running in your operational data store, MongoDB, for example, and you pull it into this embedding creation preprocessing step.

Essentially, what you do is you break it up into chunks, and you send the raw data over to your operational data store, so it can be used in the application. I'll explain why in a moment. You separately send the vector embedding to a vector database or vector search engine off to the side. The way this works is a user will come in from the application with a request, maybe asking: "Hey, how do I do this thing that I'm trying to do?" There'll be a little bit of a processing step where that request will be parsed, and it'll be sent through the vector search engine.

What that vector search engine will allow us to do is find information in our corpus, in our knowledge base, that's related to what the user was asking for, and that'll allow us to point to the actual raw data that's in the operational data store. Then we can use that raw data to do a prompt engineering step, where we can start feeding this data, in a piecemeal way, chunked up appropriately, through a large language model, and in turn return a cogent, intelligent-sounding response to the end user. We're taking advantage of what the large language model can do to make it sound intelligent, but baselining it to the unique knowledge that we had in our application, in our unique business. There are countless applications for this. This is how you can avoid hallucinations, because you're anchoring it in context that's relevant to your business.

This is an application like any other, meaning it has plenty of other use cases for its operational data store. Maybe the user profile is stored in the operational data store. Maybe information about the history of what questions were asked and what answers were provided is stored in the operational data store. No doubt, you'll be asking your user: "Was this helpful for you? Was it not? What's next?" All of that's gonna be tracked in the operational data store. There's kind of a loop here that inevitably happens. When we look at this diagram, it's somewhat complex, and like Sahir said, this is the very beginning. These frameworks and standards will no doubt evolve very rapidly, and this is kind of a simplified view.
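The retrieval loop just described can be sketched end to end. Everything here is a toy stand-in: the bag-of-words `embed` function replaces a real embedding model, and `build_prompt` is a hypothetical helper, but the flow (embed the question, find the nearest knowledge-base chunk, ground the prompt in it) matches the diagram.

```python
# Hypothetical knowledge base, already chunked, as it might sit in an
# operational data store.
knowledge_base = [
    {"_id": 1, "text": "To reset your password, open Settings and choose Reset."},
    {"_id": 2, "text": "Invoices are emailed on the first day of each month."},
]

def embed(text):
    """Crude bag-of-words 'embedding', standing in for a real model."""
    return set(text.lower().replace(".", "").replace(",", "").replace("?", "").split())

def retrieve(question):
    """Return the knowledge-base chunk most similar to the question."""
    q = embed(question)
    def score(doc):
        d = embed(doc["text"])
        return len(q & d) / len(q | d)  # Jaccard similarity
    return max(knowledge_base, key=score)

def build_prompt(question):
    # The prompt-engineering step: anchor the LLM in retrieved context
    # so the answer is grounded in your own data, not hallucinated.
    context = retrieve(question)["text"]
    return f"Answer using only this context: {context}\nQuestion: {question}"

print(build_prompt("How do I reset my password?"))
```

The prompt string would then go to a large language model; the model's reply, plus the user's feedback, would be written back to the operational data store, closing the loop described above.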

It's somewhat complex. We can't help but notice that this is totally ripe for us to think about: "Hey, it looks like there's another Band-Aid here, another niche database or vector search engine layered in off to the side. Clearly, there's something more we could do." That's why we were so excited to announce today what I'm personally most excited about: the public preview of Atlas Vector Search. We were able to make this a public preview today because we've been in private preview with this capability since the fall. I'll point out, that was before the rise of the generative AI ChatGPT wave. The reason we were early movers in this is we saw the need for it in connection with the democratization of those models that I was mentioning before.

All of this has gone into overdrive in the last six months, so we just feel really good about the fact that we've been able to bring this public so quickly. What we do is bring vector search elegantly integrated into the exact same context as your operational data store. In other words, a vector that summarizes information sits alongside the information it summarizes, so it can effectively be used as an index to pull that information back. I would point out that a vector without the information it summarizes is somewhat useless, frankly. It's a really natural fit for MongoDB, where you have this document model to be able to just embed the vector right there. The vector embedding step involves partners. You know, Sahir mentioned many... This is a prolific ecosystem.
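A minimal sketch of that co-location idea: each document carries its raw fields plus a hypothetical `embedding` field, so a vector match hands back the full document directly. The coordinates are invented, and in Atlas this would be an aggregation query against a vector index, not the brute-force loop below.

```python
import math

# Documents where the vector sits alongside the data it summarizes, so a
# similarity match returns the raw fields in the same lookup.
products = [
    {"_id": "p1", "name": "ballpoint pen",  "embedding": [1.0, 1.0]},
    {"_id": "p2", "name": "hardcover book", "embedding": [5.0, 5.0]},
]

def vector_search(query_vec, k=1):
    """Return the k documents whose embeddings are closest to the query."""
    ranked = sorted(products, key=lambda d: math.dist(query_vec, d["embedding"]))
    return ranked[:k]

print(vector_search([1.2, 0.9])[0]["name"])  # ballpoint pen
```

Contrast this with a separate vector engine: there, the match returns only an ID, and a second round trip to the operational store is needed to fetch the actual data.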

You generate those vector embeddings through our partners, Google Cloud Vertex AI, OpenAI, and Amazon Bedrock, or you can run your own models to do all of that. We're operating with a very open and pluggable approach. Let's review the diagram again. This is where we are today, as of today's public preview. We simplify the heck out of this loop. Your software application interacts with one operational data store that also includes vector search. It's kind of obvious, I'll grant you that, but the large language model is a crucial part of the loop. You know, Sahir was mentioning that I've certainly been tracking many interesting customers that are building on this kind of a framework.

One thing I'd mention is it doesn't have to be a language model. This is, you could think of this as being a simple example, but you could also be doing image synthesis, for example, with a Stable Diffusion style model. As an example, we have a customer that creates a platform for fashion designers to basically describe what they would like to see in textiles, and it will synthesize a bunch of fashion suggestions for them. Guess what? They go through and look at them, and they point out which ones they think are good and which ones are not good, and all of that is used in a training loop. There's lots of different scenarios for this.

Another good example is a customer that's doing predictive maintenance by listening for sounds in a car motor. If they hear certain sounds, they believe, using vector search, that it's time to bring that car into the shop. Vector search is such a critical, logical expansion of our developer data platform vision, elegantly integrated right at the fingertips of a huge number of software developers to build these incredible next-generation semantic search and increasingly generative AI-enriched applications. Thank you. Back to Sahir.

Sahir Azam
Chief Product Officer, MongoDB

Thanks, Andrew. To wrap up the product section, we're gonna come back around to application modernization. As I mentioned, modernization is very hard. Getting off of any database is sticky, it's complicated, it's live operational systems. I walked through a simple process from updating the schema, to rewriting the code, to ultimately migrating the data itself. Today, we took a pretty big step forward with the general availability of Relational Migrator, as I mentioned earlier, and we saw parts of that overall process. What we're working on next is taking some of the technologies that Andrew has mentioned and applying it to the change of code that's actually necessary, the queries and the application code. That's one of the most challenging parts of a modernization exercise 'cause it requires development time. It's very error-prone. How do you make sure something's consistent? It's heavily iterative.

We're working on two key initiatives right now with the future of Relational Migrator. The first is SQL query conversion. We're building capabilities, and you saw some of this on stage if you watched the keynote, to be able to inspect an existing application, understand the SQL queries that interact with the legacy database, and automatically translate that to MQL, so MongoDB's query language and our Aggregation Framework. We can do this for simple queries and even stored procedures, so the business logic that often sits in an Oracle system or whatnot can be unwound and turned into logic with MongoDB. We're using fundamentally an LLM that we're training behind the scenes, as well as a foundational public model to make this happen.
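As a hedged illustration of the kind of translation being described, here is one SQL query and a plausible MQL/Aggregation Framework equivalent, along with a toy in-memory evaluator (not MongoDB) just to show the two agree. The field names and data are invented.

```python
# A SQL query of the sort Relational Migrator would inspect...
sql = "SELECT customer, SUM(total) FROM orders WHERE status = 'shipped' GROUP BY customer"

# ...and a plausible Aggregation Framework translation of it.
mql_pipeline = [
    {"$match": {"status": "shipped"}},
    {"$group": {"_id": "$customer", "sum_total": {"$sum": "$total"}}},
]

orders = [
    {"customer": "acme",   "status": "shipped", "total": 10},
    {"customer": "acme",   "status": "pending", "total": 99},
    {"customer": "globex", "status": "shipped", "total": 7},
]

def run_pipeline(docs, pipeline):
    """Toy evaluator for just the $match/$group subset used above."""
    for stage in pipeline:
        if "$match" in stage:
            docs = [d for d in docs
                    if all(d.get(k) == v for k, v in stage["$match"].items())]
        elif "$group" in stage:
            key_field = stage["$group"]["_id"].lstrip("$")
            sum_field = stage["$group"]["sum_total"]["$sum"].lstrip("$")
            acc = {}
            for d in docs:
                acc[d[key_field]] = acc.get(d[key_field], 0) + d[sum_field]
            docs = [{"_id": k, "sum_total": v} for k, v in acc.items()]
    return docs

print(run_pipeline(orders, mql_pipeline))
```

The hard part, which is where the LLM-assisted tooling comes in, is doing this reliably for complex queries and stored procedures, not for a simple filter-and-group like this one.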

The next phase is to move up the stack and actually help with the customer's business logic across different parts of the life cycle, everything from assessment, so understanding the interdependencies and the business logic in the code, so you can start classifying which apps you can save the most money by modernizing, which are gonna be the most complicated to modernize, et cetera, as well as the actual code conversion: being able to move from a traditional language like COBOL, for example, on the mainframe, to something more modern. Of course, the last piece is testing.

We're actually working with some partners as well that are working on automated testing, generating test suites to compare like-for-like software and make sure that the functional requirements as well as the performance are equal to, if not better than, the original application. There's gonna be a lot of innovation happening in this space across our ecosystem. As I mentioned earlier, we don't believe it will ever be easy or push-button, but we believe that AI can make a step change in the amount of effort at these different levels of the process. Every time we make it easier to move from a legacy database onto MongoDB, customers move more applications over to us, given the benefits around performance, scale, and flexibility that we've been talking about all day.

We're very excited about this. This is an area where we're doing some real R&D work and experimentation. You know, the ecosystem is changing almost on a weekly basis right now, and so we're super excited. Hopefully, we walked through comprehensively, first and foremost, how this trend in AI is beneficial to MongoDB no matter what we do. We're just well suited for it on the basis of all the work we've been doing for 15 years. More importantly, we're also investing to be a leader in this space. I think the next piece here is our customer panel. Definitely, if you have questions, you know, the team will be around. We're happy to dig deeper on any of these. Hopefully, that gave you a good broad brush and some background context behind what we announced today. Thank you.

Michael Gordon
CFO, MongoDB

Come on up, team. Thank you very much. Here, I'll jump down here. All right, thank you. Please come onto the stage. All righty. Hopefully that was a helpful overview from Sahir and Andrew on a whole bunch of the product announcements that we've had, and we'll look forward to continuing to talk about those over time. I'm super excited to host our customer panel. This is always one of my favorite things of our investor session. Welcome to all of you. Thank you for coming. Why don't I just first start it off by letting you each just sort of introduce yourself?

... let the audience know, you know, who you are, your organization, and kind of your critical IT priorities. We'll just dive right into it.

Michael Scholz
VP of Product and Customer Marketing, commercetools

Sure. My name is Michael Scholz. I'm the VP of Product and Customer Marketing for commercetools. For those of you who don't know, commercetools provides the leading composable commerce platform, giving our customers all the independent and required components to run outstanding shopping experiences across all different touchpoints. We are lucky to be able to call some of the most iconic brands our customers. We're talking lululemon, Sephora, Ulta Beauty, Audi, BMW, Volkswagen, and I could go on and on. These customers are choosing to run commercetools because they require that sophistication, that scalability, that flexibility from their digital experience and digital commerce platform, at lower cost, while also increasing efficiencies. We've been on an amazing trajectory.

Last year, 2022, we had a growth rate of over 80% in terms of our recurring revenue. We have similar aspirations to continue that trend. MongoDB is just such a tremendous partner for us because for us, every customer is, by association, a MongoDB customer. That's kinda like a little bit about me and about the company. In terms of the priorities that some of our customers are facing, I think it's the typical thing about change is the only constant.

Whether it's geopolitical tension, whether it's pandemics, whether it's changes in demand or customer expectations, the reason why companies come to us, mostly from our competition by the way, which is the likes of SAP, Salesforce, and Oracle, is because they're limited in their ability to execute, and they need that flexibility, and they need to come to us for an answer on that.

Michael Gordon
CFO, MongoDB

Perfect. Thank you. Paul?

Paul Blake
Senior Director of Engagement, GEP Worldwide

Thank you very much. Hello, everyone. My name is Paul Blake. I am from GEP, another company you've never heard of. We've been in business for about 25 years, and we provide software, strategic consulting, and managed services to some of the world's largest and most interesting companies. We do that in a specific area, which is around procurement and supply chain. Over the last two or three years, as you know, supply chain has been very much in focus and in the news and on everybody's minds. We've been working extremely hard with these large organizations to get everything right and to get supply chains and their procurement processes working properly. That has to be done in software today. We're a software company. I said we've been in business for about 25 years.

I've been with the company for 11 years, and the changes that we have seen over that time have been remarkable. Today, the challenges that our customers are coming to us with are all about how they manage, as Michael said, a changing environment, a rapidly changing environment, one which is no longer predictable. Yet, and we're talking here about the oil majors, about pharmaceutical companies, global banks, the largest, most complex companies in the world, they have all of their processes set in stone from about 20 years ago. The world doesn't work that way anymore. They are now coming to companies like GEP and saying, "We need agility, we need flexibility.

We need to be able to work in a completely different way, but we need to be able to change that again in the future when the geopolitical circumstances change, when the environmental circumstances change, when regulations change. How do we do that? The heart of that is how we manage data. That's why I'm here today. That's why we're working with MongoDB. It's in order to abstract all that stuff that's going on in the data into a format that the customers can use to manage their businesses more effectively.

Michael Gordon
CFO, MongoDB

Great. Thanks, Paul. Dave?

Dave Conway
Managing Director, Morgan Stanley

Oh, hey, I'm Dave Conway. I lead a team of data engineers, analytics and reporting developers, and data management infrastructure engineers at Morgan Stanley on the institutional securities side. That's a mouthful, because we've got a lot of data, and data is front and center. It drives everything that we do, and it drives data science. Without good data, you don't have any science, right? My mom reminds me of that every day. She calls me up, she goes, "Are you doing AI?" I go, "No, but my data drives AI." She's very happy. That's my goal in life. The challenges in front of us, primarily, are AI and the cloud. I'm gonna talk a little more about the cloud today because it's on my mind, and we've heard AI about 1,000 times.

It's cloud, cloud. We're early in our journey, but we're quickly accelerating. A platform like MongoDB Atlas has been an incredible enabler for us to get to the cloud very, very quickly.

Michael Gordon
CFO, MongoDB

That's great. Thanks. Joe?

Joe Croney
Vice President of Technology and Product Development, Arc XP

Good day, all. I'm Joe Croney, Vice President of Technology and Product Development at Arc XP, which is a division of The Washington Post. As you all know, The Washington Post is all about content, as is Arc XP. We're a SaaS platform that combines an agile CMS with a digital experience platform and a monetization engine that powers thousands of sites around the globe today, full of content. That's one of the reasons we rely heavily on Mongo, both The Washington Post for its own purposes, but also the customers of Arc XP. We've got about 4.5 billion documents in our Mongo databases today that appear in many different forms, whether it's video assets, audio assets, imagery, or content articles. One of the reasons we really love working with Mongo is the partnership that enables us to be agile and move fast.

...The story of Arc XP is one of hypergrowth that has challenged our teams to move as quickly as the market wants us to innovate, but also to scale up the services we offer to our customers. The Atlas platform has been fantastic in enabling our business transformation.

Michael Gordon
CFO, MongoDB

Terrific. All right, well, hopefully that overview was helpful. One of the things that I think is really special, and that we try and do during this session, is give people some more detailed insight into how people actually use our products, right? It's easy for us to say it; it's much better for them to hear it directly from you. Maybe you can walk us through the use cases where you're using MongoDB today. You can take it in any order.

Joe Croney
Vice President of Technology and Product Development, Arc XP

Oh, I'll dive in.

Michael Gordon
CFO, MongoDB

Sure enough.

Joe Croney
Vice President of Technology and Product Development, Arc XP

As I mentioned, Arc XP is about content. When we went about selecting the right technology for storing our data, we knew we needed a document database. There are many options out there. Prior to joining Arc, I'd actually been part of a firm that did its own custom implementation, so I knew the mistakes you can make if you select the wrong technology. One of the reasons MongoDB really fits us well is that data store. Also, because we're a SaaS offering, we're cloud native. We really required a partner that knew how to operate at scale, knew how to take care of security, knew how to take care of encryption, and could really scale with our business.

When you think about it, when you access any site that runs on Arc today, that content originates and is stored in a Mongo database. We also leverage some of the search services, as well as some of the upcoming ones that are pretty exciting that were announced today.

Paul Blake
Senior Director of Engagement, GEP Worldwide

Yeah, I'll go next. Maybe we'll alternate along the line.

Michael Gordon
CFO, MongoDB

There you go. All right.

Paul Blake
Senior Director of Engagement, GEP Worldwide

Procurement and supply chain processes are about transactions, as you would imagine. It's everything from purchase orders and invoice-type transactions, to informational transactions between different players in the supply chain. Of course, you can conceive of managing that through a standard relational database architecture. We had done that, you know, 25 years ago. That's exactly where we were coming from. Now the requirement is for real performance. We're talking about hundreds of thousands of users at any one time on our system, millions of transactions. We process trillions of dollars of transactions through our system every year, and that has to happen very, very quickly, at very high levels of security and very high levels of integrity.

To be able to do that, you have to deal with changing requirements, with the fact that tax regimes change and political regimes change, and with the fact that the processes of purchasing and supply chain and the transit of goods from one place to another are affected and obey different rules, sometimes on a monthly basis. You have to be very, very flexible and adapt to the new requirements. The old model, as we saw in one of the slides earlier, of an ERP, an Enterprise Resource Planning system that is heavily specified and set in concrete, just doesn't fly anymore. We need something that is much more adaptable, and as we'll go on to talk about, AI is gonna be a huge part of that.

Michael Gordon
CFO, MongoDB

Great. Yeah.

Dave Conway
Managing Director, Morgan Stanley

We have numerous projects running on MongoDB right now, but I wanna cover a few and highlight some real success stories we've had. The first, and one of the earliest projects we went live with, is our risk calculation environment. Every night, a financial institution like Morgan Stanley has to calculate its risk. This is a tremendous calculation requirement, huge, massive calculations, and MongoDB provides the database behind all of that. I think pretty much everyone at MongoDB knows this project because it's been challenging, but it's been a great success. Lots of capabilities were brought to the infrastructure as part of it, and this led to another project we needed, which was a modernization of our equity swap lifecycle management platform.

This was a business need for us to keep up in the marketplace with new capabilities, and we needed a flexible, powerful database platform like MongoDB to be the backstop and the database behind such a huge renovation. One that comes to mind next is an AI one. In fixed income, we are running our data science with MongoDB in the background. Jumping over to wealth management, because we do institutional securities as well as wealth management: they are capturing data audit events, they run analytics against them, and they're using MongoDB to store that data. Finally, I'll touch upon the document model, which we've heard about a lot today. A lot of the products that we trade are complex products.

Just think of terms, think of agreements, rates, reference data, all around these complex products. This fits so well with MongoDB's document model. This is where we store these complex products across numerous business lines in MongoDB.

Michael Gordon
CFO, MongoDB

Awesome. Michael, yeah.

Michael Scholz
VP of Product and Customer Marketing, commercetools

Yeah. For us, it's actually pretty straightforward. We realized around 2010 that the commerce platforms out in the market were really limiting folks' flexibility and scalability. We are the first composable commerce platform that takes advantage of microservices and an API-first approach. We are truly cloud native, and we also have a headless approach because we touch so many different touchpoints, whether it's a point-of-sale system or the typical desktop, mobile, and social experiences. We wanted to build a very sophisticated platform, and MongoDB is the market leader that we see and wanna collaborate with to achieve that flexibility and scalability for our platform.

When we are pushing multiple billions of GMV through our platform, we need, to your point about being the backstop, the data to be accurate, clean, persistent, all of those things. We have a lot of companies that are retailers, but we're also across multiple different industries. If you think about a typical retailer having Black Friday and Cyber Monday scalability issues, we know that Black Friday and Cyber Monday are not what they used to be. You cannot plan for spikes. You cannot plan for peaks, because we have one of the largest telco companies in the U.S., and for them, peak is when the new iPhone comes out, or when the new Apple virtual reality headset is coming out.

Whether it's an influencer on Instagram or peaks from different companies, we wanna be in a position to cater to that demand, and we wanna respond to it. For us, it's an integral part of our platform. It's the single source of truth for our platform, and, on the cloud-native aspect, it's super important that we partner with someone that is available across all of these different public cloud providers. Also, about one month ago, we announced our offering going live in China. For us, it's super important that in mainland China we can offer our platform based on AWS and based on MongoDB, and that's super critical to our success as well.

Michael Gordon
CFO, MongoDB

Terrific. Thank you for that. One of the things I think is helpful, and you've touched on it, but given that we've got some time, the ability to go into what you see as the strengths of MongoDB, and I think the specific examples, you know, really help people. We'd love to hear your takes on that. Who wants to go first?

Dave Conway
Managing Director, Morgan Stanley

I'll go.

Michael Gordon
CFO, MongoDB

Dave?

Dave Conway
Managing Director, Morgan Stanley

All right.

Michael Gordon
CFO, MongoDB

Go for it.

Dave Conway
Managing Director, Morgan Stanley

First and foremost is the horizontal scalability. Before MongoDB, the traditional setup might have been one server. Once you only have one server, the only way you can scale is vertically: put more CPU in it, put more memory in it, but there are limits. To be able to scale horizontally is an incredible feature, and a use case like the risk calculation environment would not have been possible without that capability. Hand in hand with that comes what I'll call resilience, right? Once you're in a multi-server environment, once you have a multi-server platform, if you have a server problem within MongoDB, it's literally a non-event, unlike in the past, where this would have been a massive outage. Part and parcel with that, performance, right?

You've heard a lot of the performance comments here today. We would not have been able to accomplish the projects I mentioned without the performance capabilities that MongoDB provides. I just wanna close with encryption. I have a bigger list, but I don't wanna go on. Encryption is so key; you can't imagine how important it is. Our stringent security requirements mean that we must have encryption capability, especially in the cloud. We would not be using Mongo Atlas if it didn't have the encryption capabilities that it has today.

Joe Croney
Vice President of Technology and Product Development, Arc XP

To extend some of Dave's points, we've hit the document database, performance, and security. The other angle that we enjoy from MongoDB is actually developer productivity, which is near and dear to my heart. At this conference, you see a lot of talk about enabling developers and helping them innovate, and that's truly what Mongo delivers for my team. Five years ago, we found ourselves growing so fast that our developers were saying, "How are we gonna keep up? How are we gonna actually maintain all this infrastructure?" Atlas answered that problem; today, they don't see any problems in sight with being able to scale with our customers. Beyond that, it's enabled us to innovate for our customers through the services that Atlas provides, whether it's search services or document storage.

Really, having the APIs and the command line tools available to my development team enables them to build more value for my customers, which ultimately is what we're all here to do: deliver value to the market. That's one of the reasons Mongo really has been effective for us as a partner.

Michael Gordon
CFO, MongoDB

Mm-hmm.

Paul Blake
Senior Director of Engagement, GEP Worldwide

Scalability is key for us. You know, we can take on one customer and suddenly add 15% more usage of our system just overnight. Security, of course. We are dealing with around 25% of the Forbes Global 2000, and we have all of their purchasing transactional information. It has to be secure, it has to be safe and accessible at all times. That's where the multi-cloud thing becomes extremely important to us. Cloud is starting to become something invisible from the customer's perspective. They want to be able to come to a company like GEP and say, "Give me a global procurement solution," and to not have to worry about whether a cloud is supported for a particular territory or for a particular use case.

That ability to extend into wherever you need to operate is an amazing, liberating factor for us, and that speaks to what you were saying, Joe, about developer productivity. We're always seeking technologies and tools that allow us to concentrate more on delivering value to the customer and less on plumbing and engineering, because, you know, we grew up in a time when the servers were on the floor of the office. Now we get to a point where we don't even know where the servers are; we don't know where anything is. We just have to worry about the fine nuances. That ability to just focus on the end layer-...

where we interact with the customer is what allows us to then take the next step and give the customer control of what the application does going forward. We saw a quote on one of the slides earlier, about Gartner saying that by 2024, a certain percentage of applications will be built by people with no programming knowledge at all. I don't necessarily believe everything that Gartner says, but they're right about that.

Michael Gordon
CFO, MongoDB

Some people think that's happening today.

Paul Blake
Senior Director of Engagement, GEP Worldwide

Yeah, yeah. No, it is true, there's plenty of applications being built without any skill, for sure. Where it makes a difference, just to slightly contradict something you said earlier, Michael.

Michael Gordon
CFO, MongoDB

Sure.

Paul Blake
Senior Director of Engagement, GEP Worldwide

We are an out-of-the-box software.

Michael Gordon
CFO, MongoDB

Yeah, go for it.

Paul Blake
Senior Director of Engagement, GEP Worldwide

... provider. Where you are right is that we don't do a hundred percent of what any customer wants. There's always a gap.

Michael Gordon
CFO, MongoDB

Yeah.

Paul Blake
Senior Director of Engagement, GEP Worldwide

There's always a gap between what any solution, in inverted commas, does and what the customer actually needs. The great thing that we've been able to innovate in our system is the means to then stretch that final distance, either with the customer doing it themselves or by engaging a third party to do it for them. The underlying data structures are what allow us to do that.

Michael Gordon
CFO, MongoDB

Yeah, very well.

Michael Scholz
VP of Product and Customer Marketing, commercetools

I think what you guys said.

Paul Blake
Senior Director of Engagement, GEP Worldwide

That was easy, wasn't it?

Michael Scholz
VP of Product and Customer Marketing, commercetools

It was super easy. For us, it's more of the same. It's the scalability, it's the flexibility, and I think it's the robustness and the security. Those are all tenets that our two companies share, and that's, I think, why this partnership and this relationship is working so well. We started out with MongoDB, and then we moved over to MongoDB Atlas because we really believe in this best-of-breed approach. We wanna focus on commerce, and we wanna work with partners that know and do what they do best. We wanna focus on commerce and let MongoDB handle not only the database part, but also the managed services part.

Because we wanna be innovative, the same way that MongoDB is innovative, so that we can really change the status quo of some of these digital experiences. To my earlier point, the only constant is change, and we wanna make sure that our customers not only have all the required components to be successful and build these outstanding shopping experiences right now, but we also wanna either drive or predict the troubles they might get into and solve for those. It's really, really important that we can be focused on innovating on commerce and let MongoDB do the rest of the magic.

Michael Gordon
CFO, MongoDB

You know, I think it's really helpful. We talk about concepts like developer productivity and trying to educate investors, but it's so much more powerful to hear it from all of you. On these concepts that each of you were hitting on, the way we try and summarize it is not having you all bothered with the undifferentiated heavy lifting, right? You all can focus and spend precious, scarce developer time on increasing functionality, improving user experience, and driving those end results rather than a lot of the back end, like I said, undifferentiated heavy lifting. You've just put such great, rich color on it.

Paul Blake
Senior Director of Engagement, GEP Worldwide

May I just share an anecdote with you?

Michael Gordon
CFO, MongoDB

Yeah, please, yeah.

Paul Blake
Senior Director of Engagement, GEP Worldwide

... on that score? One of our customers is an oil giant, one of three you can name, right? In fact, we have all three, but that's not the point. The customer in question came to us with a particular requirement. They have a lot of people out in the field, out on rigs, literally out in the field, exploring for oil and extracting oil and so on. They have a lot of requirements for maintenance, repair, and operations parts, you know, widgets and screws and bolts and things. Until last year, that was all pretty much done on paper-

Michael Gordon
CFO, MongoDB

Mm

Paul Blake
Senior Director of Engagement, GEP Worldwide

... because these people are out in the field, they fill in requests, they send them off, they get them back, and it's very, very difficult to control that kind of operation on paper. They came to us, and they said: How easy would it be, or how many millions of dollars would you charge us, to build an app that works on a mobile device and integrates all of that into your procurement system? In fact, it didn't cost millions of dollars, and it didn't take very much time at all. It took around three months to go from initial discussion to actually deploying it, because the speed of development, the speed of prototyping, deployment, and testing is so much greater now that we don't have to worry about anything other than-

Michael Gordon
CFO, MongoDB

Yeah

Paul Blake
Senior Director of Engagement, GEP Worldwide

We do that through a process called low-code development, which again, takes a lot of the writing of the code out of it. It's kind of drag and drop development. It's really a model for how all of these systems are being accelerated, and the delivery of value to the end customer is where it's at. We can't do that if we're dealing with a static-

Michael Gordon
CFO, MongoDB

Yeah

Paul Blake
Senior Director of Engagement, GEP Worldwide

... set of rules that are underneath. It's a real benefit in the real world and in real time.

Michael Gordon
CFO, MongoDB

Yep. Great, thank you for sharing that anecdote. Okay, for each of you, when you look ahead, where do you think you might use MongoDB in the future? Kind of map out what that sort of, you know, next horizon looks like. Michael, you want to start first?

Michael Scholz
VP of Product and Customer Marketing, commercetools

Sure. I think we're gonna continue to double down on MongoDB and MongoDB Atlas as we are expanding. We have more to achieve in APAC. As I said, we just went into China, so there's plenty of opportunity there. I think search is so fundamental to commerce that we need to crack that nut even better. If you look across all of our competitors, we're all sort of using some form of Elasticsearch, and we don't really differentiate ourselves from them there because we're so focused on commerce and other bits and pieces. I think search could be that differentiator, so using a tool like MongoDB Search would be amazing, to really embed that and make it part of our stack.

We've talked about multi-cloud and being cloud native, and I think a lot of people think about horizontal and vertical scaling, which is obviously one way to think about it. We also think of it as unlocking an entire ecosystem, GCP or AWS or even the MongoDB ecosystem, and about how we can scale out and build innovative applications that way. I think there's a lot to be gained across industries, across business models. Search looks very, very different when you're talking to a B2C company like Sephora or Ulta Beauty versus a B2B company like Atlas Copco, a manufacturer or distributor or wholesaler, because search, in one case, is about discovery and exploring.

On the other hand, it's about finding things, and B2B behaves more and more like B2C, but there are still fundamental points there. I think I'll close off with this notion of omni-channel, which in and of itself is an overused term. If we're looking at us being the backbone from a commerce transaction engine, and we're serving all these different touchpoints, so whether it's, again, the desktop, the mobile, the social piece, whether it's the kiosk in a stadium or in your McDonald's, or we're talking about a point-of-sale system, like whatever that touchpoint is and whatever touchpoints are emerging in the future, we can cater to that.

Having MongoDB as that accessibility layer for all data purposes, not in a black box but accessible to everyone, is hugely transformational for us, and we want to provide that opportunity and that capability to our customers.

Dave Conway
Managing Director, Morgan Stanley

All right.

Joe Croney
Vice President of Technology and Product Development, Arc XP

I've done a lot of talking.

Dave Conway
Managing Director, Morgan Stanley

Yeah, yeah. So you already got two of my top three, you know: cloud. It's all about the cloud. Second, text search. A lot of potential there. But I would add one that we haven't heard yet, which is the high-speed cache. There are a lot of high-speed cache products out there, and we've recently started to use Mongo instead. It's been a great experience, and you don't really hear about that a lot. Maybe you do and you don't, but we certainly were very pleased with the performance we could get, and we didn't have to have yet another product in the bank to provide that capability.

Michael Gordon
CFO, MongoDB

Yeah, that's certainly a recurring thing we hear about, just the ability to consolidate and sort of not have point solutions.

Dave Conway
Managing Director, Morgan Stanley

Yeah.

Joe Croney
Vice President of Technology and Product Development, Arc XP

I would echo what Michael shared: we have both B2B search scenarios and B2C search scenarios. You can think about a digital storyteller or a journalist trying to do research about the stories they wish to share; they really need a different type of search than a reader or viewer on a site or on a mobile device trying to find that content. We have a wide variety of search technologies across Arc today for different purposes, inclusive of Atlas's full-text search. I think we see moving more to Atlas in terms of satisfying some of those B2C scenarios. The other thing we haven't touched on is that MongoDB provides options for app services, for us to move workloads onto Mongo that might sit elsewhere.

One of the benefits of being a cloud-native platform is that it's all serverless code, which opens up opportunities to run that code in different places, whether that's at the edge in a CDN, in AWS, or in Mongo. That's another conversation we've been having a lot with our partners on the Mongo team: what services and workloads we could move to Mongo Atlas, to have them closer to the data those workloads are running with. I think that's another area where we see us working together.

Michael Gordon
CFO, MongoDB

Paul?

Paul Blake
Senior Director of Engagement, GEP Worldwide

For us, there are three really interesting areas of innovation right now. The first is in terms of inclusion of data that was never available to our audience before. Traditionally, people who are dealing with procurement and supply chain issues, they're dealing with their own data. You know, what they have and what they know, their own history of what they've done in the past. Increasingly, the kind of decisions that the chief procurement officer or chief financial officer need to make have to be informed by what's going on in the rest of the world, particularly when it comes to managing complex supply chains, with everything extending into global networks. The second thing, of course, as my colleagues here have already said, is search.

Within the sort of end-to-end procurement process, everything pretty much is a search function. You say, "What should we do? Where is our greatest opportunity to make savings over the next five years? What strategy should we use to enact that? Which suppliers should we invite to be part of this process? When we get the bids in from the suppliers, which one most closely meets our targets? Which one delivers the best value? What terms and conditions should we use to engage with that supplier in terms of building the contract?" When it comes down to the sharp end of people actually using those contracts, "Where can I find a new laptop?

Where can I get a new office chair?" All of these things are search functions, but at the heart of the whole process there is this nasty little thing called a contract, which suddenly becomes unstructured because lawyers get involved. Everybody in procurement likes to have everything set out in fields and items that they can search, and then a lawyer will get involved and present an entire blob of effectively unstructured text, which then needs to be restructured in order to make it searchable. The real power of things like vector search is that it will allow us to ask questions not like: How many contracts do I have with commercetools, and where are they? That's easy search. But: Which contracts do I have across the world that expose me to a sudden change in United States import laws?

Where are the risks in my contracts that leave me exposed? That's very, very hard to do; even with, you know, a modern Elasticsearch, it's very hard to get that meaning out. So being able to take a contract document, which is the heart of this entire process, and turn it into something that becomes intelligent is a kind of holy grail, and it's something that we're working very closely towards. The final thing, of course, which is the big thing right now, is AI, generative AI in particular. All of those steps in the process are query-response. It's perfect ChatGPT fodder.

Michael Gordon
CFO, MongoDB

Yeah.

Paul Blake
Senior Director of Engagement, GEP Worldwide

Right? It has to be correct. What should I do? The CPO sits down, "Okay, what shall I do today?" You want the machine to be able to say, "You need to focus on cardboard packaging, because that market is changing rapidly." Who should I employ? Who should I engage with to do that? What are the best terms and conditions? Where is my next opportunity for the next saving? That is a huge area of development for us. My developers are telling me that what they get from MongoDB is a lot of connections to AI algorithms and engines out of the box, but critically, the ability to build new ones much more easily than ever before.

Michael Gordon
CFO, MongoDB

That's great. I know we have to wrap, so very quick: what are you most excited about, in 10 seconds or less, from today's announcements?

Michael Scholz
VP of Product and Customer Marketing, commercetools

I'll go first. Vector search, and the whole idea of data streaming, super important for us given the multiple touchpoints. I think that approach to really go to market from an industry perspective is powerful. Same with the developer community. We're propeller heads, we're a tech company. Those four things.

Paul Blake
Senior Director of Engagement, GEP Worldwide

Well, what he said.

Michael Gordon
CFO, MongoDB

Okay.

Dave Conway
Managing Director, Morgan Stanley

No surprise, Queryable Encryption.

Michael Gordon
CFO, MongoDB

Yes, I was.

Dave Conway
Managing Director, Morgan Stanley

Gotta have it.

Michael Gordon
CFO, MongoDB

I had my money on that.

Joe Croney
Vice President of Technology and Product Development, Arc XP

I'll round it out with vector search as well. Using AI for journalism and storytelling is nothing new. A decade ago, we were using it for simple things like finance reporting or sports scores. Now, we can use it for much more complex scenarios with what Atlas Vector Search will do.

Michael Gordon
CFO, MongoDB

Awesome. Thank you all for joining us. Thank you for being customers. Thank you for spending time here. Enjoy the rest of the day.

Michael Scholz
VP of Product and Customer Marketing, commercetools

Thank you.

Michael Gordon
CFO, MongoDB

Appreciate it. Thanks, everyone. With that, I'm gonna turn it over next to Dave, who's gonna host our partner spotlight. Dave, take it away.

Dave Kellogg
Event Host and Executive, MongoDB

[crosstalk]

Okay, there we go. It's my pleasure to talk a little bit about how we partner with some of the largest companies in the world. Obviously, one of our biggest partners in our business is AWS. I'm really pleased to have here Chris Grusz with us. Maybe, Chris, we can just start by describing your role at AWS.

Chris Grusz
Technology Partnership Organization Lead, Amazon Web Services

Yeah. Cool. Thanks again for the opportunity, by the way. My name is Chris Grusz, and I run the technology partnership organization for AWS. What that means is my team works with basically anybody in the AWS Partner Network that's not a system integrator or reseller. That could be a software company like MongoDB, it could be a data provider, it could even be a chip manufacturer. Our team's charter is really to work with our partners in kind of a build, market, and sell relationship. We'll work with MongoDB to help figure out what the right integrations are into the AWS environment and figure out how we wanna go to market.

Where are we gonna go to market from a geographic or maybe a vertical perspective, and then really work with our partners in a co-sell motion. It's, you know, it's been a nice relationship we've had with MongoDB. Marketplace is part of that equation. You know, in terms of how we're going to market, it's really becoming how we're automating the partnership. Super excited to talk to the audience here today and thanks for the opportunity.

Dave Kellogg
Event Host and Executive, MongoDB

Thank you. Before we get into MongoDB specifics, maybe you can talk a little bit about, like, Amazon's philosophy on partnering, maybe in particular with ISVs.

Chris Grusz
Technology Partnership Organization Lead, Amazon Web Services

It's interesting. You'll hear a lot of times when people talk about Amazon, is that we're customer obsessed. What that means is that we're always working backwards from our customer needs, and we're always looking for new ways to delight our customers. When we look at partnerships, it's very much a version of customer obsession. Because ultimately, what our customers are looking for, whether it's a MongoDB customer, or AWS customer, or a joint customer, is they're looking for a solution, right? They're not looking for point products. By working with partners like MongoDB, we're providing a solution to our customers. We very much look at that in line with our whole charter to be customer obsessed.

The other thing that's important to point out from a partnership perspective is we look at partners as the way that we provide what we refer to as the selection experience. When you think about Amazon, one of the things that people really like about Amazon is that it's the everything store, and it provides anything that you wanna buy. That concept of selection translates over to AWS in the form of the AWS Partner Network, and specifically how we deliver that with Marketplace. We're always trying to deliver, you know, solutions to our customers and provide that selection experience. That's really critical for how we go look at partners. Oftentimes that might even be for solutions that have some kind of overlapping functionality with AWS.

You know, it's especially important as we look at MongoDB. Even though there might be some overlapping functionality, that's fine from an AWS perspective, because, again, we're delivering on that commitment to be customer obsessed and provide that selection experience. Having a rich partner community to support that is very critical. Then, of course, you know, continuing to always improve. We're always getting feedback from partners like Mongo on what we can do better, and how we automate the partnership and how we scale it. So we look at this as something that's evergreen in nature, that we're always gonna work on.

Dave Kellogg
Event Host and Executive, MongoDB

Obviously, the follow-on question would be: we've been working together since 2016, since we launched Atlas. Could you comment on, you know, what the appeal is to partner with MongoDB?

Chris Grusz
Technology Partnership Organization Lead, Amazon Web Services

Yeah.

Dave Kellogg
Event Host and Executive, MongoDB

Be brutally honest.

Chris Grusz
Technology Partnership Organization Lead, Amazon Web Services

No, well, first of all, it's not a new relationship. To your point, Dave, it's been since 2016, so we've built up a level of trust over the years. MongoDB, first of all, is built on top of AWS. It's not just utilizing our compute resources, they're using a whole variety of AWS services. It's not just a vendor relationship, it's a partnership. I think that's really important to point out as well. The other thing that MongoDB has done that's very important for AWS, is they've gone through our competency programs. It's not good enough from our perspective, just to have a solution that runs on top of AWS.

We wanna have solutions that are well architected, taking advantage of features that allow our customers to really scale their products up or down as needed. Again, you know, it's an additional level of work that MongoDB had to go through to get that, but that's very important from our perspective because it provides a better customer experience. The other thing that's very attractive to us is the common user base. When you take a look at MongoDB, they've got such a rich history and a large relationship with the developer community, which is a persona that AWS also has a very close relationship with as well.

When we look at how we go to market, we've got nice synergies because we have that core customer, which is the developer, as common ground that we can go work on. Then, of course, there's being in the Marketplace. MongoDB has really embraced Marketplace in a material way, and that's the preferred route to market for AWS in terms of our partner community. You know, you add up all those things, and it's just a really good relationship that we can have when we go do co-selling between AWS and MongoDB.

Dave Kellogg
Event Host and Executive, MongoDB

On the co-selling point, you've mentioned that now a couple of times. People in the audience may not completely appreciate what that really means, and I think they will have questions about: How does our relationship show up to, you know, a common customer?

Chris Grusz
Technology Partnership Organization Lead, Amazon Web Services

Yeah.

Dave Kellogg
Event Host and Executive, MongoDB

Can you elaborate on the co-sell kind of orientation of the relationship?

Chris Grusz
Technology Partnership Organization Lead, Amazon Web Services

Yeah. I mean, because we have a common customer, we have a joint value proposition, right? We're not just kind of showing up independently at customer accounts. A lot of times, we're going in together and having that joint value proposition, which is really centered around helping our customers innovate, right? When they're moving to AWS, they're trying to innovate, and MongoDB is very much in line with helping those customers innovate and get more access and more information from their data, right? I think both companies, we both refer to ourselves as being data-driven, and we help our customers get data-driven as a result. I think that's really important. The other thing is that it's not just a North American relationship. We're here in New York today, but we have a global relationship with MongoDB.

You can take a look at, you know, even some of the awards that we've given MongoDB over the last year that are really a testament to that. Last year at re:Invent, as an example, we awarded MongoDB the EMEA Partner of the Year Award for Marketplace. That was because we were seeing over 100% year-over-year growth in that particular area alone. You fast-forward to this year, earlier this year, MongoDB was awarded the ASEAN Partner of the Year Award. For Singapore and that part of the world, again, we're seeing really good co-sell between AWS and MongoDB. Most recently, they won the Chile Partner of the Year Award down in LATAM.

You know, we're seeing really good alignment, not only in North America here, but in Europe and Asia Pacific, as well as other regions like LATAM. It's been really nice.

Dave Kellogg
Event Host and Executive, MongoDB

In the spirit of "incentives drive outcomes," an old Charlie Munger quote, someone this group...

Chris Grusz
Technology Partnership Organization Lead, Amazon Web Services

Yeah

Dave Kellogg
Event Host and Executive, MongoDB

... has high esteem for. What are the incentives for AWS to work well with its partners? I mean...

Chris Grusz
Technology Partnership Organization Lead, Amazon Web Services

Yeah

Dave Kellogg
Event Host and Executive, MongoDB

... you know, how are sellers compensated?

Chris Grusz
Technology Partnership Organization Lead, Amazon Web Services

Yep

Dave Kellogg
Event Host and Executive, MongoDB

you know, how is the organization incentivized to do this in a way that's natural and not some sort of forced or artificial relationship?

Chris Grusz
Technology Partnership Organization Lead, Amazon Web Services

I mean, again, it kind of really feeds into our concept of customer obsession. You know, providing those solutions, that's ultimately, at the end of the day, what we're really trying to do from an AWS perspective, is really delight our customers. Now, you go beyond that, you know, what are the other specific things that are kind of in it for AWS? Well, again, it's built on top of AWS. When a customer buys MongoDB, you know, they're indirectly buying AWS. There is a strong financial alignment from that perspective as well. That really helps kind of support that relationship as we go forward.

The other thing that you can also take a look at that's in it for us is, you know, the alignment from our field perspective. So you asked, like, what's in it for the seller, as an example. You know, they get happy customers, and that's always a good thing, but our field is actually compensated when the customer buys MongoDB. There's good financial alignment. MongoDB is in a program that AWS calls ISV Accelerate, which is our primary co-sell program. That actually provides a financial incentive for our field, the AWS field sellers, to sell side by side with MongoDB, and they actually get compensated for it. On top of that, they're also goaled on that.

One of the things that we really try to drive as a behavioral thing across AWS is to work with partners. You know, our field are goaled on things like private offers, and it's not a dollar value that they're goaled on, it's actually a volume perspective. We want to incent our field to work with partners because we think it just provides a great customer experience.

Dave Kellogg
Event Host and Executive, MongoDB

You know, obviously, acquiring a customer takes a lot of work and effort. Who is involved in actually finding these customers? You know, maybe you can talk about, like, the mix of, like, how much business you bring to the table versus MongoDB.

Chris Grusz
Technology Partnership Organization Lead, Amazon Web Services

Yeah. Yeah, for our most successful partnerships, we really kind of have two co-sell motions. The first one is a self-service motion. You know, and what we're trying to do on the self-service side is go well beyond what has traditionally happened in marketing, where you have an event and you have some leads that you follow up on. What we're trying to do is automate that entire experience, and so we look at that as a key part of why we work with MongoDB with Marketplace. You know, and, in that regard, we'll actually do self-service campaigns with our partners, and the design point there is not to generate a lead, but it's actually to generate a customer. We've got a really healthy self-service experience with MongoDB.

If I just look at the last number of months, we've done a number of campaigns with MongoDB where we'll have over 10,000 customers that actually hit the landing page on Marketplace. Those have actually translated into 1,000 new customers. You know, that's a nice experience 'cause we're generating opportunities and customers for our partners. On the flip side of that, we then will execute in the field from a co-sell perspective, and that's where Marketplace private offers become kind of the transaction vehicle. That's where it might be a large opportunity that MongoDB is working with a customer, and they wanna buy through AWS Marketplace.

They can submit a bespoke subscription through Marketplace with specific pricing, you know, a negotiated EULA, and the customer can accept that all through Marketplace, and it just goes right on the AWS bill. Oftentimes on the private offer experience, because your sales teams are involved, they're kind of leading the effort there. We look at the self-service side as a way that we're generating business for those partners. Again, for our most successful relationships, we have both sides of that equation, and that's one of the things that we really have with MongoDB that's nice. We don't have that with all of our partners.

Dave Kellogg
Event Host and Executive, MongoDB

I would be remiss if I only threw layups in terms of questions for you.

Chris Grusz
Technology Partnership Organization Lead, Amazon Web Services

Yeah.

Dave Kellogg
Event Host and Executive, MongoDB

I'd be remiss in not asking, you know, clearly there's some areas of overlap between AWS and MongoDB.

Chris Grusz
Technology Partnership Organization Lead, Amazon Web Services

Mm-hmm.

Dave Kellogg
Event Host and Executive, MongoDB

Can you speak to how you deal with that in general and in particular with MongoDB?

Chris Grusz
Technology Partnership Organization Lead, Amazon Web Services

Yeah. Yeah, you know, it's actually a big reason why we do have Marketplace. Even though we might have overlapping functionality in areas between a native first-party service and a product like MongoDB, that's okay, because when a customer decides that they want to go with a third-party product, we want to support that from an AWS perspective. That's why we have specifically put in compensation programs so that, you know, when a customer says, "We're going with MongoDB," we want our field teams to also get in line with that. We're gonna pay them on that subscription and make sure that they're really incented to go do that. 'Cause again, at the end of the day, that workload is landing on top of AWS. That's beneficial for us.

They're using MongoDB, they're building with AWS, that's beneficial for us as well. Even though there might be some overlap in terms of the functionality, once a customer makes that decision, we wanna support that. We, you know, we don't look at that as a bad thing because it's still driving consumption for the AWS platform.

Dave Kellogg
Event Host and Executive, MongoDB

I don't know if you were here for the keynotes, but we talked about some of the announcements, and one of those was, you know, being awarded the financial services competency. You touched on competencies before.

Chris Grusz
Technology Partnership Organization Lead, Amazon Web Services

Yeah.

Dave Kellogg
Event Host and Executive, MongoDB

maybe you can just double-click a little bit on what that...

Chris Grusz
Technology Partnership Organization Lead, Amazon Web Services

Yeah.

Dave Kellogg
Event Host and Executive, MongoDB

... what that competency really signifies and what that means for the relationship.

Chris Grusz
Technology Partnership Organization Lead, Amazon Web Services

Yeah. Yeah. Well, the first competency I mentioned was the data and analytics competency. That's where we work with partners like Mongo to kind of go through a verification of how that solution runs on top of AWS, making sure it takes advantage of, you know, key underlying functionality so that the product will work as efficiently as possible on AWS. Then on top of that, we are now adding competencies around specific industry verticals like financial services, which is what was announced today. That's an additional level of competency, where we'll actually sit down with Mongo and go through, you know, a number of use cases that they might have for financial services customers.

We actually go through some case study work, and we really come up with a number of use cases that are very specific to those industries like financial services. Again, it's all in line to help provide a really good customer experience to our joint customers in financial services. That's just one of many. You know, you take a look at what MongoDB is doing, and they're also working towards our automotive competency for those very same reasons. Even though MongoDB provides a great solution that can have a horizontal storyline, there are also vertical nuances, and that's what's really interesting for AWS in working with a partner like MongoDB: making sure that we've got those use cases identified for verticals like financial services.

You know, another good one to point out would be public sector, right? The recent FedRAMP Moderate approval, that's very important for AWS customers as well, to have that FedRAMP status. Again, it's providing something that's very specific to a subset of our customers, but having that use case identified that we can work together on.

Dave Kellogg
Event Host and Executive, MongoDB

Kind of the last question would be: what do you think we can do more of in the future?

Chris Grusz
Technology Partnership Organization Lead, Amazon Web Services

Yeah. We've had a long relationship. One of the key things that we really point to that we did about a year ago was the Strategic Collaboration Agreement between MongoDB and AWS. These are agreements that we put together that are multiple years in length. This one's actually six years; it's one of the longer ones that we have. It's designed to really kind of structure what we want to do from a build, market, sell approach over a multi-year journey. We're just one year in. We've been, you know, super happy with the results so far. That also gives us a good, you know, starting point for where we want to go.

The other place that it'll be interesting to work on moving forward is, again, that vertical approach, right? The kind of announcements today that you had around Atlas for Industries really maps well to what AWS is doing from a vertical approach as well. We're really interested in not only working with MongoDB on that core developer persona as we have in the past, but also in how we then go work with financial services customers, you know, public sector customers, and some of the other verticals that you're focused on as well.

Dave Kellogg
Event Host and Executive, MongoDB

I know that's a top priority for Adam, right? He's...

Chris Grusz
Technology Partnership Organization Lead, Amazon Web Services

Oh, absolutely.

Dave Kellogg
Event Host and Executive, MongoDB

... industry.

Chris Grusz
Technology Partnership Organization Lead, Amazon Web Services

We're very much going in that direction.

Dave Kellogg
Event Host and Executive, MongoDB

Terrific.

Chris Grusz
Technology Partnership Organization Lead, Amazon Web Services

Yeah.

Dave Kellogg
Event Host and Executive, MongoDB

Well, thank you so much for your time. We really appreciate the partnership.

Chris Grusz
Technology Partnership Organization Lead, Amazon Web Services

Yeah.

Dave Kellogg
Event Host and Executive, MongoDB

I appreciate you being here, so.

Chris Grusz
Technology Partnership Organization Lead, Amazon Web Services

All right. Thank you for the opportunity.

Dave Kellogg
Event Host and Executive, MongoDB

Please join me in thanking Chris.

... minute break before the final hour, just to let everyone take a bio break or whatever they need, and we'll be back in 10 minutes. Thank you.

Michael Gordon
CFO, MongoDB

I was warned about giving you all a break. All righty. Well, hopefully that was helpful on the product and customer side. Before we get to the Q&A, which, I know just even from the break, there are a bunch of questions out there, we'll look forward to covering those in detail. Dave, Sahir, and I will all be up here. I did want to spend a few minutes giving a little bit of a business update. Again, we normally want to make these sessions focused on product and on our customers, but thought it'd be helpful to take advantage of the fact that we had you all here together to give a little bit more insight into what we're seeing. Thank you for doing that with us.

There are three things I really want to cover as we spend time today. I want to talk about the drivers of the Atlas business and make sure people understand those. We talk about those a lot, but really want to take the time to help highlight some of the key trends that we're seeing. I want to make sure to give a little more visibility and insight into the customer base, so we'll do some fun slicing and dicing. That'll give a little bit more visibility into what we're seeing there. And then third, touch on the financial side briefly and talk about how we've been demonstrating profitable growth and how that's an important part of the story as well. Without any further ado, I will dive straight into the Atlas drivers.

As I said at the beginning, remember, we grow in accounts by acquiring workloads, right? Everything is this workload orientation. We'll talk about new workloads, and we'll talk about growth of existing workloads, 'cause that's sort of the dynamic that we have. I showed you this sort of illustrative journey that we had, where we have to win that first initial workload. That workload will grow based on a number of application-specific factors, but also based on the macroeconomic environment. What we'll do, though, is we'll continue to win workloads in the account. Organizations can have hundreds or thousands or more applications within there. This workload dynamic sort of builds upon itself, and that's kind of the dynamic that we see over time.

As we've said, Atlas growth in the short term is really dictated by the growth of those existing workloads. New workloads start small, and we'll walk a little bit through that. That growth of existing applications, and existing workloads, is driven by the underlying usage activity. Think of this as sort of the reads and writes, the query activity in those underlying applications, as a, you know, second-order effect of the actual end-user interaction with the application. We've talked about how we've seen slower growth of existing applications as a result of the macro environment, starting in Q2 of last year.

We thought it'd be helpful to take advantage of the time that we have today to give you a visual demonstration, or give you some insight into what does that actually mean, or what does that actually look like? Here you can see there's variability kind of quarter to quarter, and that speaks to the seasonality that we've talked about. What I'm showing here is average week-over-week Atlas growth rates, right? This is meant to sort of normalize for the fact that Q1 has fewer days, and some of those things that we talked about that you understand. But what this shows is the week-over-week growth, and this is one of the dynamics, one of the numbers, that we pay attention to from an operational standpoint.

What you can see is it's within a relatively, you know, tight range. As we mentioned, Q3 tends to be seasonally stronger, and you can kind of see that in the history. We've drawn this dotted line on the left-hand side that you can see; that's kind of the average, you know, before Q2. Again, some quarters are a little stronger, some quarters a little slower growth, but within some reasonable band. You can see pretty dramatically the step down that we saw, and we talked about, and we disclosed at the beginning of Q2 in that Q2 number. You can see how, since then, it has not rebounded to the pre-macro level.

When we talk about the growth and how the growth rates are trending, when we talk about the average that we've seen since the slowdown, this is what we mean and what we talk about. Here you can see that stronger Q3 seasonality that we talked about. Here you can see the more pronounced Q4 holiday slowdown that we talked about, and then Q1 being back more in line with what we've seen since the slowdown. Hopefully that helps put some sort of visualization and understanding around the words and context that we've been talking about, to really understand some of the Atlas drivers that we've been seeing. In the long term, though, Atlas growth is driven by our ability to acquire new workloads. That's really the key thing, because new workloads, in the long run, will drive most of the growth.

In the short term, new workloads represent a relatively small portion of the business. New workloads tend to start small, even though they grow quickly. What we've tried to do here is to illustrate what that means. Now, every workload is different, so there are a bunch of simplifying assumptions here and everything else. What we've tried to say is sort of, in a given quarter, a relatively small amount of the incremental Atlas revenue is coming from new workloads, right? It sort of makes sense. You've got a large installed base, it's growing healthily, new workloads start small. But what happens over time, if you play this out over five years, right, if you look over 20 quarters?

The reality is, the new workloads that you add over that subsequent period wind up being responsible for the vast majority of the incremental growth. It's somewhat logical and intuitive and sort of, you know, math friendly, but we thought that maybe visualizing it would help complement the words and the dynamics that we've been describing. That's a little bit about the Atlas dynamics and what we've seen. I want to switch and start talking about the customer base, do a little bit more peeling back of the onion, and share what I think are some interesting views of our customer base. As you know, we've now got over 43,000 customers.

Here I'm going to use the fiscal 2023 numbers, just so we're kind of talking year-over-year; we only do this investor session once a year, so it just seems easier to talk about everything in full-year terms. Obviously, we reported the Q1 numbers and had continued strong growth, more than 2,000 customers added in the most recent quarter. We continue to see very healthy new business activity despite the macroeconomic environment. When we talk about total customers, they tend to be broken down into two different buckets.

There's the self-serve customers and the direct sales customers. Within direct sales customers, as we'll talk about a little bit, that actually breaks down into enterprise customers and the mid-market. Those who have followed the story for a long time have heard us talk about that breakdown. Customers also continue to grow with MongoDB. On the left-hand side, you can see very strong and healthy growth of our customers spending over $100,000 with us; we report that every quarter, and continue to see strong and healthy growth there. On the right-hand side, you can see customers now spending more than $1 million with us, and they continue to grow healthily as well.

These numbers for the last year have been about 25% to 30% year-over-year growth for the two different buckets, broadly in line with what we're seeing right now on a revenue basis. We have a broad range of customers, a very diversified customer base in general. You can see everything from very large, demanding telcos to established and cutting-edge technology companies; you see core industrials, large pharmaceuticals, innovative healthcare companies, highly regulated banks and financial services institutions, and this is just a partial list. There's consumer and energy and retailers and utilities, and a whole wide range of customers who wind up using MongoDB.

You heard, whether it's the panel today or in some of the keynotes, just the breadth of what MongoDB does; the general-purpose nature of what we do makes it valuable for all these different use cases. It's not sort of just a single trick, it's not the hammer that's looking for the nail or anything like that. There's also a wide range of use cases. I'll highlight a couple. Again, we try to talk about this in the session, with the customer panel and other things, 'cause I think it helps provide a little bit of context for people. Let's talk about healthcare. Patient data in healthcare is incredibly sensitive, highly regulated, and deeply siloed. One of the largest North American insurance companies is using MongoDB for its patient data interoperability.

What they're doing is using Atlas to collate and aggregate all the healthcare data across their multiple and disparate systems to achieve critical compliance needs, but also improve the patient experience with real-time data integration. That's just one example of how people are using MongoDB. Similarly, within the telco industry, like any industry, but particularly the telco industry, fraud detection is an incredibly important part of what they do. One of the largest multinational telcos built its real-time fraud detection and fraud prevention platform on Atlas. They're running more than 50 AI models, and they're processing over 10 million events per day. That has helped reduce fraudulent activity by roughly 80%. Again, you can kind of see the impact, the scale of MongoDB.

Lastly, I'll just highlight a retailer, a European retailer. Like many retailers, competing for customers is incredibly demanding. You heard the commercetools folks here. Different situation, different use case. Personalization is incredibly important within retail. One of the large European retailers is using MongoDB to power its personalized shopping experience for the more than 2 million daily customers on its website. It's a one-stop shop for more than 5,000 brands, and its existing relational solution couldn't scale. Think about this from a retail experience. It used to take 12 hours for them to update product pricing and availability. They switched over to MongoDB, and now it's done within minutes. That's just a little bit of some of the use cases that we're seeing.

Hopefully, that gives a little bit more flavor and insight. Moving away from the individual examples, we're gonna now talk about things in the aggregate. We'll talk about cohorts and what we're seeing on a cohort basis. As I mentioned, customers do start small. This is looking at customer cohorts. They start small, they grow, you know, very healthily, and by the end of the first year, we've indexed them to 100, and you can see over the subsequent two years, they grow about 2x, 2.1x to put an exact number on it, from a subsequent growth standpoint. These are cohorts that have reached three years.

Some of these cohorts are not affected by the macro, so I think you need to keep that in mind. We talk about the impact of the macro, but that's really the growth that we're seeing from our cohorts. Interestingly, we also looked at this analysis for our larger customers. We said, all right, let's look back over the three years of data, let's look at the customers at the end of fiscal 2020 who were at more than $100,000 in revenue. That's what the left-hand side represents. Even those customers, over the three-year period, have grown about 2x.

On the right-hand side, we took the customers who were already at $1 million or more at the end of fiscal 2020 and followed that cohort over the next three years, and that cohort similarly also doubled over the course of the next three years. You can see this is sort of, you know, the logical conclusion when you think about everything that we're talking about: a huge TAM, a market that's decided workload by workload, so the opportunity not just to land within the account, but then continue expanding within the account. I think that's what these numbers help bear out. This is one of the earlier slides, just talking about the size of the market that I walked through.

I think it's important to remember and underscore this: we're really quite early on in capturing this market share. In aggregate, if you add all this up, you'll conclude that we're closing in on 2% market share, right, when you do the math. I think what's really extraordinary is we're still in the early phases of winning even within enterprise accounts. Here are some stats that I thought were interesting in terms of slicing and dicing the universe. If you look at the Fortune 100, we have just under two-thirds of the Fortune 100 now as customers. If you expand that to the Fortune 500, just under 40%, and about a quarter of the Global 2000.

Even though we're winning significant numbers of new customers and chipping away at the larger ones, there's still an enormous amount of opportunity in front of us. In fact, if you break the market down into the Fortune 100 and the Fortune 500 and what they spend on databases, you can see that the same closing-in-on-2% narrative holds. We've got a very broad representation of the customer base, from the most demanding customers in the world to the newest startups, and we're about equally penetrated across all of them. The key takeaway is that we're still pretty early in pursuing this market opportunity. When we think about our customers, it's a broadly diversified customer base.

We'll come at this in three different slices, in the hopes of painting some visibility. I mentioned the different channels. Self-service is about 12% of ARR. That means the rest, the other 88%, is in the direct sales bucket: 75 percentage points of that 88 come from enterprise customers, and about 13 from the mid-market. Enterprise is the largest chunk. It's quite geographically distributed: a little over half in the U.S., just under a third in Europe, and the balance in APAC.

When you think about relative customer concentration, our top 100 customers account for just over a third of the business, with no customer accounting for 2% or more of ARR. It's quite diversified. Even within these largest customers, while we've had success penetrating them, we still have relatively low wallet share. If I spend another minute double-clicking on the top 100 customers, we thought it might be interesting to take a product slice. You know, we've talked about how we're agnostic in terms of the product that we sell, and we at MongoDB aren't really in a position to tell any of the panelists we had up here, "No, you should put your application in the cloud," or, "You should run it on-prem." That's really their choice.

What this shows is that customers pretty neatly cleave into either primarily Atlas or primarily EA. About 47%, just under half, are mostly Atlas, where we picked above 80% of spend as the threshold just to help people understand. About 38% are mostly EA. These are the dollars of ARR of the top 100 customers, and there's a small chunk, about 15%, that's in a more hybrid environment. If you look at them on a cohorted basis and ask, "Of the ARR from your top 100 customers, where is that coming from?", not surprisingly, the bulk of it, 59%, is coming from customers who've been on the platform five years or more.

When you think about the market that I described and the fact that it's won workload by workload, that shouldn't be surprising, right? That shouldn't be a revolutionary takeaway. Similarly, while new workloads start off small, you wouldn't expect customers that you just won this year to be heavily represented in your top 100. The newer customers are a much smaller slice. Hopefully, that helps paint a picture of the customer base. Lastly, I wanted to talk about the financials, the P&L, and how we've been demonstrating profitable growth over time. If we take a step back, the revenue growth since the IPO has been fairly significant.

Revenue has increased about 8x over that time frame, and obviously the public company guidance is around $1.5 billion for the current fiscal year. At the same time that we've been growing, we've also been scaling operationally. Here you can see the history of our operating leverage and the operating margin improvement. At the time of going public, we were at a negative 38% operating margin, and last quarter we hit a 5% margin. We still have a ways to go toward our long-term target margins, but we've done a fair amount of work over the last few years, not just in penetrating the market and driving revenue, but also on the operating model and operating leverage.

I wanted to make sure we underscore that we've been able to do all of that while Atlas scales. Atlas, as you know, includes the underlying storage and compute, so it's lower gross margin, and at the time we went public it was in the single digits as a percentage of our revenue. Atlas is now just under two-thirds of our revenue. We've been able to very successfully execute on our gross margin plan; you can see on the right-hand side that we've done that despite the fact that Atlas has grown quite significantly. Now let's look at operating leverage line item by line item: sales and marketing, G&A, and R&D. Over the last several years, you can see significant progress.

G&A has gone from 17% of revenue at the time of the IPO to about 8%. We've seen significant scaling in R&D as well, going from 34% down to 19%. Finally, there's also been significant scaling within sales and marketing, coming down from about 62% of revenue to closer to 43%. I think it's helpful to put that in context in terms of how we've been trying to, as I joked on the last call, walk and chew gum: growing the business fairly significantly, really making sure that we're investing for the long term to pursue our market opportunity, but also taking care of the operating leverage side of the equation.

We'll keep investing in sales and marketing. As I mentioned, we have a limited footprint compared to the opportunity. Our win rates are exceptionally high when we're in competitive situations; we're just not in enough situations. To put some dimensions around our footprint coverage: of the 20 countries in the G20, we only have sales presence in 13. If you focus domestically within the U.S., people often use NFL cities as a proxy for the largest cities; even though there are 32 teams, there are 30 cities, and we have more than two reps in only 60% of those.

Again, that just tries to give you a sense for how big the market is and how early we are in penetrating it. We'll certainly continue to invest in R&D as well. We have an ambitious product roadmap; you heard a number of announcements, which we'll talk about in the Q&A session. That comes as a result of investing in our core database offering and building out our developer data platform. Both are incredibly important to us, and we will continue to invest in them. All that said, our unit economics are incredibly strong, and that shows through in the numbers. We will continue to see improving profitability as we scale. Don't take that as a literal every-quarter commitment.

As we've tried to convey, there will be times when there are seasonal or other variances, but the general trajectory and the general trend are clearly toward improving profitability as we scale. I think it's valuable to go back to the long-term target model that we provided at the time of the IPO and underscore its key aspects. The numbers shouldn't be new to anyone in the crowd; we've talked about them before: gross margin of 70+% and non-GAAP operating margin of 20+%.

There are two key things I would mention here. First, if I think about where we sit today, my confidence, our confidence, in our ability to hit these targets is significantly higher than it was at the time of the IPO. Again, at the time of the IPO, Atlas was single digits as a percentage of revenue and meaningfully margin dilutive, and we've accomplished an enormous amount there. Second, I would say we're more focused on the upside of these numbers and trying to drive against it. The plus sign is more important than it was in the past. I just want to underscore that piece of the puzzle as well.

With that, I think we will go to Q&A. If I could ask Dave, who's here, to come on up and join me. I think we are right on time. Perfect. There, come on up. Serge O'Brien, we're doing this handheld mic, so we'll pass them around. Amazing, we have some questions.

Kash Rangan
Managing Director, Goldman Sachs

Kash Rangan at Goldman Sachs, sitting right here. Fantastic session. Anybody that wants to can take it; Dave, maybe you. The market is so massive, $80+ billion, and you have a small share, so it looks like there's a lot of replacement opportunity. Today's announcements: the general availability of Relational Migrator, plus a lot of the vector search and stream processing capabilities. If you were to rank order them, what are the biggest unlocks in the market? We all know that these markets don't move linearly; they go through a big step function, changes in competitive dynamics, replacement, and then they kind of stabilize. In your view, over the next four to five years in the world of generative AI, what could be the big unlocks for MongoDB?

Sahir Azam
Chief Product Officer, MongoDB

What I would say is, we're really excited about Relational Migrator because of the TAM opportunity in getting people to consider replatforming off relational. From Michael's talk, you saw that we have less than 2% share, so there's a lot of relational out there. The easier we make it, and the lower the switching costs of moving off relational to MongoDB, the more that could potentially be a massive opportunity. That said, we're fairly balanced.

No one just wakes up in the morning and says, "Hey, you know, I wanna re-platform." It's typically some catalyst: the performance of my application is so bad that my end users are complaining, it's very hard to add new features because the data model is so brittle, or the value-to-cost proposition is completely out of whack and we just need to refactor. There's got to be some compelling event, but since there's so much latent relational out there, you know, even a couple points of share would obviously be very meaningful.

Frederick Havemeyer
Senior Enterprise Software Analyst, Macquarie

Hey, thank you very much. Frederick Havemeyer with Macquarie. Vector search is really the thing I want to ask about today. Firstly, with the announcement, thank you; it will change how I'm hacking on the weekend a bit, and I'll no longer be donating some of my money to some of the startups out there doing vector search. I'm curious here, a two-part question. Firstly, is vector search something you think will expand the volume of data that you're storing within MongoDB? If so, do you have any initial thoughts on how it attaches to the documents you store presently? Secondly, I'm running a generative AI project at my company.

We're doing quite a bit, and it's quite clear that there's a lot of both read and write opportunity with the amount of two-way information and querying that's going on. Do you have any additional thoughts also on how this may impact, longer term, the volumes of queries that are also run on MongoDB and through Atlas?

Sahir Azam
Chief Product Officer, MongoDB

I do think it affects both, to your point. In terms of data volume itself, today, if somebody's building an application that requires a specialized vector database, they might be storing the metadata or the source data in MongoDB, or, as Andrew mentioned, writing some characteristics about the usage of those embeddings back into an operational database, but the vectors themselves are not stored in Mongo. With this new index type, we'll definitely capture some of that. Now, there's also a benefit in that we don't completely duplicate the data either, so there's a cost advantage for the customer in using one consolidated solution versus both. It's not gonna be like-for-like compared to two databases.

Conversely, it's a new index type, and every time there's a new index type and a new set of query operators, to your point, that's gonna drive throughput, which drives the memory and compute usage of the actual database clusters or environments that we're running. There's a lot of prototyping happening right now, a lot of toy applications and experiments, but as more of these companies take these capabilities to scale in an existing app or a brand-new app, to tie back to Michael's description of the life cycle of an app, that would certainly drive more throughput on the engine.

Brent Bracelin
Co-Head of Technology Research and Managing Director, Piper Sandler

Brent Bracelin with Piper Sandler. Thank you guys so much. One clarification on vector search. You announced previews for Relational Migrator and Queryable Encryption last year, and a year later, the GA. Are you thinking about a different timeline to GA for vector search, given that the pace of interest is changing? That's the first clarification. The second question is really around vendor consolidation. We were a little surprised to hear Morgan Stanley buy into the argument; they wanna consolidate these specialty things. That strategy seems to be working. Is caching becoming an area where you're seeing more folks looking to consolidate into Mongo?

Sahir Azam
Chief Product Officer, MongoDB

You want to start? Yeah, the first piece of it, on vector search. Sorry, I lost the first part of it. How long before GA? Yeah, this is customer driven. I mean, these are hard data products, so getting to the metrics we need to hit in terms of stability and throughput does take time, versus a stateless application. That's why you typically see a roughly one-year timeline for any data service out in the market. We don't have it pegged as definitely, you know, MongoDB .local next year.

It has a lot to do with the scale of applications that we now see in the public preview, which will be an order of magnitude more than what we've had, or perhaps multiple orders of magnitude, compared to the private preview. When we hit the quality and performance gates that we've set, and the customer satisfaction and feedback around it is solid, then we'll look to take it to GA. That's typically the goal we have anyway. Caching? Yep, on caching, I think there's an opportunity for us to potentially get into that space for certain use cases. In fact, the raw memory performance of MongoDB's in-memory engine is already similar to a lot of caching solutions today, but there are some capabilities we would want to expand there.

I think it's an and, not an or. There's also an opportunity to create better integrations to some of the common technologies, because there are scenarios where people want to cache multiple source data sets from multiple database solutions together, and we certainly need to be open and extensible to be able to support that well and wouldn't want to be shut out.

Brent Bracelin
Co-Head of Technology Research and Managing Director, Piper Sandler

Yep.

Tyler Radke
Director and lead Equity Research Analyst, Citi

Tyler Radke from Citi. It's a smaller conference, but I feel like there were two or three times the volume of product announcements, you know, very impressive. I wanted to ask two questions, on vector and on stream processing, which were the key highlights. First, on vector, could you talk a little more about the technology stack? I think it's built off Apache Lucene, but what's your view on how vector under MongoDB is positioned against, call it, Pinecone or Weaviate, some of the other vector databases out there? Secondly, stream processing. I did not expect to hear that announcement today.

I think a lot of folks were caught off guard, but could you just talk about, like, why now, and is the approach leveraging Flink, which is one of the popular stream processing open source technologies, or just kind of your view on your differentiation from a technology perspective?

Sahir Azam
Chief Product Officer, MongoDB

On the streaming side, we are not built on top of Apache Flink. We've extended our own query processing layer and engine over to streaming, to be able to plug in and tap into a Kafka topic, for example. We do always look out at the market for technologies we can either incorporate and integrate ourselves or even acquire. The big challenge we see is that the typical players out there, Flink-based or even alternatives to Flink, are not really good at handling flexible document models. That, in many ways, pushed us to organically expand our engine to deal with continuous processing of streaming data, as opposed to just wrapping something. It was something we definitely looked at quite carefully.

On the vector side, today our vector search engine does rely on Lucene, though I think that's playing out to be quite a positive, 'cause there are a lot of other vendors also contributing code back, not just us. We've seen a lot of push happening around vector size limits and dimensionality, so we're watching that space very closely. We've also made the decision to architect it in a way that, over time, we could support multiple engines for different use cases, but still present a query abstraction that integrates with the same driver and the same API up to the application. It's meant to be extensible over the long term, and I certainly wouldn't say that Lucene is the only engine we would support in the long run.

We might need multiple.

Sanjit Singh
Executive Director and Senior Equity Analyst, Morgan Stanley

Sanjit Singh, Morgan Stanley. Sahir, you're a busy man on stage, which is great to see because of all the product innovation. Maybe I'll toggle to Dave. If I step back and look at everything you've announced, plus Michael's point about being under-penetrated in terms of sales coverage, how should we think about the shape and durability of growth going forward versus what we've seen in the past, which has been great? You know, you guys have 8x'd revenue. Is there anything to think about in terms of the law of large numbers, or a structural change in buying behavior, or customers getting more cautious, given what seems like a really open-ended opportunity? Just your views on how we should think about the shape of growth going forward and the durability of that growth.

I have some questions for Sahir as well.

Dave Kellogg
Event Host and Executive, MongoDB

I would tell you that I feel we're better positioned today than we were, almost mirroring Michael's comment on the profitability targets and the long-term margin targets. I feel like we're better positioned to pursue the opportunity today than we were six, almost seven years ago when we went public. I say that because if you look at the stages of MongoDB, the first stage was: Is MongoDB a toy? Can it really be trusted for mission-critical? The second stage was: They're building a cloud service; can they really compete, or are they basically roadkill for the hyperscalers? The third stage is: Can they really be a platform play?

I think you've seen us chip away. In every stage, there were skeptics. I think you're the one who said the bear case on MongoDB has been written so many times. One of the things that drives us is just customer feedback. All these decisions we've made, the product announcements you heard, are driven predominantly by customer feedback. Customers said, "I don't want to use a separate search engine. MongoDB, why don't you just put it all in MongoDB?" That drove us to build full-text search, and so forth. Michael will shoot me if I give you any specifics about, like, long-term growth models, but I feel-

Sahir Azam
Chief Product Officer, MongoDB

I'll remind you the law of large numbers is real.

Dave Kellogg
Event Host and Executive, MongoDB

I mean, we're also proud, and I want to reinforce a point that Michael made: we're really proud of the operating leverage we've shown, because we are trying to build a durable business, and because getting the next generation of leaders at MongoDB to understand that capital is not cheap matters. You can't throw money and people at problems; you have to be judicious about where you invest and sometimes where you divest because something's not working. That, I think, is a very healthy muscle for us to build. We're not a growth-at-all-costs business, but we feel like we can grow for a long time.

Sanjit Singh
Executive Director and Senior Equity Analyst, Morgan Stanley

Great. Sahir, two quick questions on vector databases. Should we think about the vectorization opportunity as specifically for data that's already in MongoDB, or, if customers want to vectorize some of their unstructured knowledge repository, can they send that to MongoDB even when it lives outside the MongoDB engine? And what indexing algorithms, in terms of ANN, are you supporting out of the gate?

Sahir Azam
Chief Product Officer, MongoDB

Yeah. On the data side of things, you can index any external embedding and just persist it into our vector engine. It doesn't have to be pulled from MongoDB. In fact, we're not actually creating the embeddings; we're integrating at that layer with all the different models that create them. Today, where Mongo is used alongside a separate vector database, what comes into the core operational indexes is typically metadata and information about the embeddings. Now we'll actually have the vectors themselves as net new data, either embedded in docs or side by side. It supports both models, and that's the source data versus metadata store distinction that Andrew was talking about earlier.

Again, on the model side, we built basically on open source, and we made it extensible. Today, we have the operators baked into the search capability. We'll be building an extensibility layer to the operators, so we can swap in multiple nearest-neighbor algorithms within our query language over time. Right now, most of it is basic nearest neighbor, HNSW, stuff that we're using, but I think that'll evolve. Nag, who leads engineering for us on this initiative, has been very strong on keeping it composable and extensible, so we have the flexibility. What will stay the same is the developer experience. That has to stay unified and integrated northbound into the application.
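The consolidation Sahir describes, vectors living in the same documents as operational fields and queried by similarity, can be illustrated with a toy sketch. This is plain Python doing exact brute-force cosine search over invented documents; MongoDB's actual engine builds on Lucene with approximate indexes such as HNSW, and every name below is hypothetical.

```python
import math

# Toy documents: each carries an (invented) embedding side by side
# with its operational fields, the pattern described above.
docs = [
    {"_id": 1, "title": "refund policy", "embedding": [0.9, 0.1, 0.0]},
    {"_id": 2, "title": "shipping times", "embedding": [0.1, 0.9, 0.2]},
    {"_id": 3, "title": "return window", "embedding": [0.8, 0.2, 0.1]},
]

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def vector_search(query_vec, k=2):
    # Exact (brute-force) k-nearest-neighbor scan. A real engine would
    # use an approximate index such as HNSW instead of scanning every
    # document, trading a little recall for large speedups.
    scored = sorted(docs, key=lambda d: cosine(query_vec, d["embedding"]),
                    reverse=True)
    return [d["title"] for d in scored[:k]]

# A query embedding close to the "refund"-like documents.
print(vector_search([1.0, 0.0, 0.0]))  # -> ['refund policy', 'return window']
```

The point of the sketch is the data-model consequence Sahir raises: once vectors sit inside the operational documents, one query path serves both lookups and similarity search, and the vectors themselves become net-new stored data.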

Mike Cikos
VP and Senior Equity Research Analyst, Needham & Company

Hey, guys. Yeah.

Sahir Azam
Chief Product Officer, MongoDB

Mike's next.

Mike Cikos
VP and Senior Equity Research Analyst, Needham & Company

Mike Cikos from Needham, here in the front. I'll echo Tyler's comments that the stream processing announcement kind of caught us off guard, so kudos on all the announcements today. Two things from my side. First, my understanding is that you guys are actually a technology partner with Confluent.

Sahir Azam
Chief Product Officer, MongoDB

Yeah.

Mike Cikos
VP and Senior Equity Research Analyst, Needham & Company

With this stream processing announcement, can you help paint the picture for how this partnership plays out? Is there more competitive overlap, and is it relatively confined to Flink, or is it broader in scope when I think about what you guys are bringing to market versus what Confluent has out there today?

Sahir Azam
Chief Product Officer, MongoDB

Yeah. We absolutely partner with Confluent. They're obviously quite pervasive with Kafka in a lot of our enterprise accounts. We've co-developed the connector with them, and that will stay; the connector doesn't go away. There are plenty of use cases for basic data integration and CDC, so that's still very relevant and will continue. We also don't provide the transport layer at all in MongoDB. We're the processing layer, because we think that's the layer that's really strategic in terms of integrating into an application experience. That does overlap with Flink. Now.

Michael Gordon
CFO, MongoDB

Which is an acquisition that Confluent just made in the last six months. It's this new space that they've also entered into.

Sahir Azam
Chief Product Officer, MongoDB

Exactly. We certainly overlap with the capabilities of Flink and some of the other relational-oriented data processing and query engines at that layer, so there's certainly some competitive overlap. That said, even where we have overlapping technologies, we see Kafka, Kinesis, Google Pub/Sub, Redpanda, a whole slew of different transport layers, and we plan to remain open so we can tap into any of those, and perhaps even make that experience easier over time, out of the box, on our platform.

Mike Cikos
VP and Senior Equity Research Analyst, Needham & Company

Thank you.

Jason Ader
Equity Research Analyst, William Blair

Jason Ader from William Blair. Two questions. First, are you planning on monetizing any of the new features you announced today beyond just Atlas consumption? Second, as you think about AI making it easier to switch from relational databases to MongoDB, does that also reduce the long-term stickiness of the database? It makes it easier for others to switch out of MongoDB to other databases.

Sahir Azam
Chief Product Officer, MongoDB

Yeah. I think on the latter point, we have to compete on the merits of being the better database: developer experience, scalability, price-performance. We often say internally that if switching costs were zero, more data and applications would naturally flow toward MongoDB, not away from it, because we have a much more modern architecture, and that developer experience and ease of use is a real benefit. Yes, technologies like AI that can help with data modeling certainly cut the other way, but in the aggregate, we feel strongly that it's a tailwind, with more coming to Mongo, because we have a more modern product than a relational database system.

Michael Gordon
CFO, MongoDB

I would just maybe add quickly to that. If you go back to some of the market size slides, we have a lot more to gain-

Sahir Azam
Chief Product Officer, MongoDB

Yes.

Michael Gordon
CFO, MongoDB

-than we have to lose. Marry that with making switching easier, and our competitive advantage, like, I'd take that trade any day.

Sahir Azam
Chief Product Officer, MongoDB

Yeah. In terms of monetization, the new features are priced in: as people use them, the consumption will be charged for the underlying compute and usage. It shows up through Atlas consumption, and that's very purposeful, because we want everyone, from a free tier user getting started, kind of those 40,000 developers signing up every week, on up, to experience the entire platform, build from the start with any of the capabilities, and not be gated behind certain contracts to get access to certain features.

If a workload was just using the database, illustratively, maybe we get $1 for that particular workload, but if it uses stream processing or search or other capabilities in the platform, we're not just getting that $1 per workload, we're getting $1.20, $1.30, $1.40, and so on, the deeper they use the capabilities.

Michael Gordon
CFO, MongoDB

Illustrative numbers not to-

Sahir Azam
Chief Product Officer, MongoDB

Yes, illustrative numbers. Yes. No.

Michael Gordon
CFO, MongoDB

Brad.

Brad Reback
Managing Director covering Enterprise Software, Stifel

Brad Reback, Stifel. Historically, you guys have talked about it taking the better part of two to three years for net new apps to reach a steady state of usage, because they're net new. If you think about a relational migration, where an existing app is being replatformed, should we think about that getting to higher usage levels much faster, based on your experience so far?

Michael Gordon
CFO, MongoDB

Yeah, I mean, the way I would describe it is, if you have an existing application that you're moving over, there's a known workload and a known set of usage. Compared to the average new workload, yes, certainly, that will consume more out of the gate, in month 1, year 1, whatever time period you want to look at. The corresponding factor for that specific workload is that the normal pattern is pretty healthy growth for the first few years, and then it slows; it's still growing, right, but the growth moderates over time.

I think you'd see a different dynamic, where it would start larger and probably show more moderate growth, because it's already an existing installed application. There are trade-offs both ways. If you think about the market, while we'll be better positioned to win many existing relational workloads, and each year we've won some more dollars of relational displacement, there's still so much new being built, right? The market size stats have $12 billion or $13 billion of new spend every year. If you take the $80 billion that exists today and, just for simple math amongst ourselves, assume a 10-year average application life, that's about $8 billion a year of replatforming.

The relative market is still more in the new than in the replatforming, although clearly we're getting better and have more opportunity to win on the replatforming side.
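Michael's back-of-envelope math can be written out as a quick sketch. The 10-year average application life is his illustrative assumption from the talk, not a measured figure.

```python
# Figures from the talk: ~$80B installed database spend, $12-13B of
# net-new spend per year (low end used here), and an assumed 10-year
# average application life for simple math.
installed_base = 80e9
new_spend_per_year = 12e9
avg_app_life_years = 10

# If apps live ~10 years, roughly a tenth of the installed base comes
# up for potential replatforming in any given year.
replatform_per_year = installed_base / avg_app_life_years
print(replatform_per_year / 1e9)                 # -> 8.0, i.e. ~$8B/year
print(new_spend_per_year > replatform_per_year)  # -> True
```

Hence the conclusion: even with replatforming getting easier, the net-new pool ($12-13B/year) is still larger than the annual replatforming pool (~$8B/year).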

Sahir Azam
Chief Product Officer, MongoDB

I think there are also a couple of different patterns that happen. You know, sometimes an entire kind of monolithic app gets shifted over, and that's a mature app, and, you know.

Michael Gordon
CFO, MongoDB

Yes

Sahir Azam
Chief Product Officer, MongoDB

The spend can start larger. With many of these complicated environments, they're peeling off functionality kind of piece by piece. They're kind of taking a monolith and breaking it down into microservices, and that ends up being more incremental.

Michael Gordon
CFO, MongoDB

Yeah

Sahir Azam
Chief Product Officer, MongoDB

of growth over time. It depends on the pattern. I would say the same thing applies to MongoDB migrations. When we see Community or Enterprise Advanced migrations, those tend to come in larger than a new application because the throughput is already mature.

Howard Ma
Director and Equity Research Analyst, Guggenheim Securities

Hi, Howard Ma with Guggenheim Securities. Kudos on jam-packing a lot of substantive content in a clear and concise manner. Can you talk more specifically about the monetization timeline for Atlas Stream Processing, to the extent you can within this context? To what extent are your customers building applications on top of processed real-time data today? Is it the minority or the majority? And what is the process of getting off of, I'll call it, a competing solution? It could be custom-built, right? It could be Apache Flink or Spark or Storm or even Hadoop.

My understanding is that the migration from batch to real time can sometimes take a while. How should we think about the monetization playing out? Thank you.

Dave Kellogg
Event Host and Executive, MongoDB

Want to take that?

Sahir Azam
Chief Product Officer, MongoDB

Sure. I think the idea of event-driven applications or real-time applications is definitely emerging. Even if you look at the revenue of the streaming, you know, transport and platform players, a lot of their use cases are focused on generalized CDC and plumbing of data around the infrastructure. That's not really that interesting to us. What's much more interesting to us is how that data empowers the application. That, today, is a minority percentage of the overall streaming revenue pie, but it's growing because applications are, to your point, getting more real-time, more event-driven, and that's why we want to skate to where the puck is going rather than trying to be just a commodity plumbing player. I think the product's early. You know, we just released a demo today.

We'll go into private preview and go through that typical arc of making sure the quality performance is there, before we really hit the gas pedal on, you know, lighting up our enterprise sales team and running heavy enablement and all that to push that out.

Dave Kellogg
Event Host and Executive, MongoDB

Right. I do want to add, our conviction is very high here.

Sahir Azam
Chief Product Officer, MongoDB

Yes

Dave Kellogg
Event Host and Executive, MongoDB

Because, what I talked about at the keynote is that we believe stream processing is a natural fit for a document model. The flexibility, the scalability, the data is mainly JSON, it's all developer-centric. It's kind of our sweet spot. There's plenty of work to do, but our conviction is high that we have a real opportunity here.

Michael Gordon
CFO, MongoDB

I'd say the other thing, away from stream processing specifically, just macro: when you think about monetization, when you think about the developer data platform, I think it's tempting, and in some conversations you all ask, "Okay, what's the uplift from X, or what's the value attribution to Y?" In the context of this much larger market, with so many workloads to win, some of the time it's actually about winning the workload in the first place, right? It's the breadth of the offering, the integrated experience, and the simplification that all of that provides. There are a lot of different ways to think about monetization that might not just be, oh, what SKU is showing up on one invoice and what revenue-

Sahir Azam
Chief Product Officer, MongoDB

Yeah

Michael Gordon
CFO, MongoDB

Can you directly attribute? I would just encourage you all to keep that in mind as well.

Howard Ma
Director and Equity Research Analyst, Guggenheim Securities

Thank you.

Rishi Jaluria
Managing Director and Senior Equity Research Analyst, RBC Capital Markets

Thanks. Rishi Jaluria, RBC. Really appreciate all the detail and announcements today. Maybe continuing on that trend: if we think about a lot of these newer offerings, like search or streaming or vector, how do you think about the ability to actually land net new customers with those products, rather than the reverse? I think a lot of us view this as: you land on the database, and then you expand into these newer use cases. What would you need to do from a product or go-to-market perspective to maybe, you know, see that happen more often? Thanks.

Sahir Azam
Chief Product Officer, MongoDB

Our core focus is, to Michael's point, accelerating the pace at which we can acquire more workloads because we're more differentiated by having all this capability collapsed into a single system. I mean, it's a crude analogy, but just as the iPhone collapsed three or four different devices into a single phone, we want it to feel like three or four separate components, or more over time, collapsed into a single database.

The way we measure success is the velocity of new workloads that are landing on platform, and then secondarily, the dollar share that we drive per workload, but through the depth of usage and how broad of a platform they use, as opposed to saying, "Okay, we have to go after XYZ market share of some other segment of the data market." We're in a fortunate position where we have almost endless runway in the core operational data market. It's about differentiation and unit price per workload, I would say, more so than it is trying to pivot the go-to-market to a completely different model. These technologies have different competitors, different value propositions, and so there's certainly an amount of sales enablement and training that goes into it.

It's not automatic, but it's not like a completely different business unit with a different sales team that has to go after a different buyer. It's all very much streamlined and aligned to the go-to-market engine we've built.

Dave Kellogg
Event Host and Executive, MongoDB

Yeah, to that point, I mean, if you look at Search, we are winning workloads that we would not have normally won, not just the search component but the entire workload, because people see that as one big workload. You know, the value of putting it all together in one place has enabled us to win workloads we would not have won if we just had our OLTP engine.

Sahir Azam
Chief Product Officer, MongoDB

All right.

Ivan Feinseth
Senior Partner and Chief Investment Officer, Tigress Financial Partners

Hi, Ivan Feinseth, Tigress Financial Partners. Thank you for taking my questions, and congratulations on all the great announcements today. On the number of customers that are over $1 million in billing, how many have grown from a lower level that your functionality and services have helped drive that growth? What would you say has been your key advantage to winning large customers? The announcements you've made today, where do you think you have either first mover or best mover advantage that will accelerate that?

Dave Kellogg
Event Host and Executive, MongoDB

Michael, you can correct me. I think almost all of our seven-figure customers started as six-figure or lower customers, right? I can't remember the last time we had a new customer who started at seven figures. As the chart Michael showed indicated, the customers with us for greater than five years were demonstrably larger. Our whole go-to-market methodology is to win workload by workload, which also means you need to have a long-term orientation with that customer. For example, as some of you have followed how we've evolved our go-to-market operation, our sales incentives are very much geared toward long-term behavior.

We're not trying to get salespeople to go close a seven-figure deal this quarter because, one, that rarely happens, and two, the more confidence customers build through the first, second, and third workload, the more business comes. I think I've talked about this in the past: you start at the side door, where you're chosen because the existing tech stack doesn't work; then you go through the front door because now you're on the approved vendor list. The goal is really to be on the loading dock, because you're the standard, and they're saying, "Our default standard is to go with MongoDB." That does take time, and with those five-plus-year customers who are north of seven figures, we've become the standard.

Julie Bhusal Sharma
Equity Analyst, AM Technology, Morningstar

Hi, Julie Bhusal Sharma at Morningstar. Just going back to the relational database opportunity: 10 years from now, what would you expect that mix to be of MongoDB workloads that originally came from relational conversions?

Michael Gordon
CFO, MongoDB

You want to start or you want me?

Sahir Azam
Chief Product Officer, MongoDB

I think, you know, the rough numbers in any given quarter today are 20%-ish to 30%-ish. We don't really manage to that metric. In fact, I give my product manager on Relational Migrator a hard time because I don't want him measured on the percentage of our total workloads; we want as many of the new workloads as possible, and, you know, we don't view that as a zero-sum game. Certainly, it's conceivable that as we remove more and more friction and the ease of moving mature applications onto the platform goes up, that number could go up, because today it's still a very highly manual process.

Dave Kellogg
Event Host and Executive, MongoDB

Yeah, I think, I mean, 10 years from now, there'll still be a lot of relational. There's a large ecosystem of other tools that people use, and there's this learned behavior; changing behavior is difficult for some people. As I said, there's got to be some catalyst to replatform. You're not just gonna wake up and say, "That app sitting in that corner, you know, I just want to replatform it now." There's got to be some compelling event for them to consider replatforming. We think, you know, we should obviously have a lot more share over the next 10 years.

Michael Gordon
CFO, MongoDB

The other thing I would just caution about: if you over-index on the percentages or the numbers, you can get pretty quickly lost in the weeds. All of a sudden you're talking about someone who's replacing functionality, but because they're doing so much of a rewrite, maybe also moving to the cloud and modernizing it or what have you, if you ask them internally, they may call that a new application, not a replatforming, even though they're actually replacing relational technology. I wouldn't get overly hung up on the percentages or the specific numbers, but certainly, we see the opportunity and the trend.

Miller Jump
Vice President and Equity Research Analyst, Truist Securities

Hey, sorry. Miller Jump from Truist Securities. Thanks for putting all this together. I guess, Dave, you had mentioned search earlier as just an area where you all are seeing some success. That was obviously an area where there was some enthusiasm during the customer panel today. I was just curious, when you actually look at that market, is it that you're going in and displacing other search tools throughout all of those use cases, or can you just talk to me about the extent to which you can add new use cases through your search tools as well?

Sahir Azam
Chief Product Officer, MongoDB

I'm not sure I understood the second part of your question.

Miller Jump
Vice President and Equity Research Analyst, Truist Securities

I guess, is it always displacement sales with these search use cases, or are they...

Dave Kellogg
Event Host and Executive, MongoDB

No, it could be a new workload where they recognize they need some key text search functionality, but they're deciding what to go with. You have to remember, when a customer thinks about building an app, they're revisiting what tech stack to use, right? It's almost a micro decision for every new app you want to build: okay, what tech stack do you want to use for this particular use case? In that scenario, you know, they may not even have an existing search database, but they're saying, "Okay, do I go with some combination of Elastic and Mongo, or do I just go straight with Mongo?"

You know, we have also seen displacements where customers said, "I have an architecture, it's kludgy, I want to move everything to MongoDB." Again, I would still say we're in the very early days of search, and as we add more features and capabilities and continue to drive their performance, there are a lot more displacement opportunities available to us.

Sahir Azam
Chief Product Officer, MongoDB

Yeah, I'll add that the displacement opportunities we're laser-focused on, where we see all this runway, are application use cases that drive the actual, you know, software experience. We're not focused on observability or security analytics here; we see those as a different segment of the market that requires, in some cases, a completely different go-to-market model. There's so much appetite among customers to consolidate the technology they use to build these applications that we think there's a long runway ahead.

Dave Kellogg
Event Host and Executive, MongoDB

With that, we're at time. Thank you, everybody, for coming, and have a good rest of the day.

Michael Gordon
CFO, MongoDB

Yeah, thank you all for coming. Take care.
