All right. Welcome to another great session on day four of the Morgan Stanley TMT conference. I'm Sanjit Singh. I cover infrastructure within the software team at Morgan Stanley. We're super excited to have the management team here from MongoDB. We have CEO Dev Ittycheria. I think Serge is gonna be joining us on stage in about a minute. He's getting mic'd up, but given that we're a few minutes late, we're gonna get started. Before that, let me just get through the disclosures. For important disclosures, please see the Morgan Stanley Research Disclosure website at www.morganstanley.com/researchdisclosures. If you have any questions, please reach out to your Morgan Stanley sales representative. So maybe just to level set, Dev: when we think about this year, another solid year, overall revenue growth of 19%. Your cloud business, on a reported basis, grew 27%.
Reported earnings results last night, again, solid quarter, 20% overall growth. Cloud business growing 24%. But, you know, the outlook did disappoint market expectations for fiscal year 2026. Before we dive into the details and the debates around Q4, I guess the high-level question that I have for you is, from a big-picture perspective: do you believe that MongoDB can get back to that 20%-plus revenue growth profile that you've historically achieved since you've been a public company?
The short answer is yes. And I think one of the key points we wanna emphasize from the call yesterday, and we tried to do that in some of the one-on-one meetings earlier today, is that Atlas growth has stabilized. Consumption growth has stabilized. And that's a question a lot of investors were asking us. And, you know, if you do the math, based on the feedback we gave on the non-Atlas business, I think you can, you know, reverse engineer a pretty solid growth rate for Atlas. And that's a function of a couple of things. One, the base itself has stabilized. Two, the workloads that we acquired last year seem to be bigger and growing slightly faster than the workloads we acquired in the year before.
That's also giving us confidence about the workloads we will acquire this year as we move up market and address more and more sophisticated use cases.
Yeah. That's a great way to frame it. So let's talk a little about the guide, Serge. Maybe you can walk us through the guidance philosophy, the assumptions that underpin that guidance, the puts and takes, and how that nets out to a 12%-14% guide.
Yeah. Yeah. So the total guidance is 12%-14%. What we called out is that the non-Atlas revenue will decline in the high single-digit range. And that is entirely due to the fact that we're facing a very difficult compare when it comes to multi-year non-Atlas deals. So due to ASC 606, when we sign a multi-year deal, we recognize the term license component upfront. And we had two very strong years, exceptionally strong years in a row, of multi-year, non-Atlas deals, fiscal year 2024 and then fiscal year 2025. And so what that means is we look into fiscal year 2026, the opportunity set of deals in our renewal base to do multi-year deals is simply lower than it was in the prior two years.
Even though we assume that we will be equally as successful, a smaller opportunity set simply yields $50 million less in multi-year revenue. That's the reason why the non-Atlas revenue is declining in the high single digits. Dev already talked about Atlas. What's implied in the guide is stable consumption growth in Atlas in fiscal year 2026 versus fiscal year 2025. We find that very encouraging in the context of a growing business, and that's no small feat. Again, we feel like we can accomplish it because of the strength we're seeing in fiscal year 2025 workloads as well as our increased investment up market in our strategic accounts, where we see better productivity. So those are the puts and takes when it comes to the guide.
I think that, perhaps a better way to think about the underlying growth of the business is to normalize for the $50 million. And so the way that I would do it is I would remove it out of the fiscal year 2025 revenue base. So that's how that 12%-14% becomes roughly 300 basis points better. And that's more reflective of sort of the underlying growth of the business minus the sort of the accounting lumpiness.
Yeah. So one of the questions I'm getting this morning from investors is, if you just look at the big picture and you look at the trend line of growth: you guys, I think a couple years ago, were, you know, in the high 40s%, the year after, low 30s%, this year 19%, and now guiding, if we sort of normalize, to the mid-teens. So the question I'm getting is, is there a competitive issue here? Is there any issue with, you know, retention? In terms of that line of concern, Dev and Serge, how would you address that?
Yeah. So our win rates are still very high. There's really three layers to the cake in terms of our growth. One is the base business, the workloads that were acquired, you know, two-plus years ago, and obviously that's the largest part of our Atlas business. There's the workloads acquired in the prior year, and then there's the new workloads. We did have issues last year this time, where the base was not growing as fast as we thought, and then the workloads we had acquired in the previous year were not tracking as fast as we thought, and then we had a late start to the year with just some organizational changes at the start of the fiscal year. We fixed all that, so we feel like that's a big reason why we're calling out stable consumption growth in Atlas.
I would say that we do think this year is a year of transition. We are really excited about the opportunity in AI. But we also do recognize that customers, especially large enterprises, are moving quite slowly in the deployment of custom AI apps. Most of the AI use cases are fairly simplistic: chatbots, document management use cases, etc. But we are seeing people get, you know, very interested. And we think architecturally we are well designed for the world of AI. One, we support different types of structured, semi-structured, and unstructured data. Two, we're highly elastic and scalable. Three, we natively embed text, or keyword, search and semantic search through our vector search capability. And then we just announced last week the acquisition of Voyage AI, which brings best-in-class embedding and re-ranking models. And that's all designed to essentially reduce the risk of hallucinations.
One of the things that we find with our customers is there tend to be two issues holding them back. One is a skills gap issue, and the second is a trust issue. The way we're addressing the trust issue is to do things like Voyage, where they can get better predictability and quality of the outputs from these AI applications. And on the skills issue, we're not just coming with technology, but with a very solution-oriented approach. One part of that is modernizing, you know, existing applications using AI. And the second is really helping them build custom AI applications that transform their business.
Yeah. A lot there. And I definitely wanna dive deeper into a lot of those points that you just made, kind of tying the bow on last night's earnings results. Serge, going into this fiscal year 2025 that we just completed, you guys had historically been kind of prudently conservative on the non-Atlas business of the multi-year deals. What is it about fiscal year 2026 that's gonna be different resulting in that sort of high single-digit headwind? 'Cause there was, I think, a $40 million anticipated headwind for this year that didn't seem to materialize. Why won't that, you know, resolve itself to the upside in fiscal year 2026?
Yeah. So, if we rewind sort of the story of multi-year deals, just to make sure everybody's on the same page: we had an exceptionally strong fiscal year 2024. The biggest deal was Alibaba, but there was strength across the board, well above sort of the average. So going into fiscal year 2025, we assumed a $40 million headwind, which would result in fiscal year 2025 being roughly an average year when it comes to multi-year performance. In Q3, we called out significant outperformance. In Q4, part of our outperformance was also due to multi-years. So most of that $40 million headwind actually did not materialize.
Mm-hmm.
Right? So we outperformed the expectations that we had for fiscal year 2025. And you have to understand that forecasting multi-year deals is inherently difficult. Sometimes those happen at the very last minute. So it's not conservatism; it's just the difficulty of forecasting, is how I would put it. But what that means now is, as we look into fiscal year 2026, we think about what's available for us to go get using the historical, you know, occurrence, if you will, of multi-year deals, whether they are renewing multi-years or new multi-years. When we apply that to a lower opportunity set in the renewal base, that's what results in the $50 million.
So, in order to do better than that, we would need to see a greater percentage of customers signing up for those deals than has been the case for the last two years. That's not what we're assuming. It's just that the opportunity is lower.
Got it. And then the last one, you know, coming off of last night's results: on margins. In terms of the guidance, if you look at the midpoint, you're targeting about 10% after delivering about 15% operating margins in fiscal year 2025, understanding the $50 million headwind from the non-Atlas business, which is super high margin. Why not philosophically take the decision to protect margins after a weaker than expected outlook?
Yeah. So first of all, I'd just echo what you said. That $50 million is roughly half of the margin decline from 15% to 10%. And then the second part, and I'm sure Dev is gonna wanna chime in here, relates to our confidence in the investments that we're making and the opportunity going forward. Where we're disproportionately investing in the business is two areas. One is R&D, because we see an opportunity to continue pushing the envelope in terms of performance of the core database, plus the investment in Voyage AI and creating an out-of-the-box GenAI solution that we think would be unique in the marketplace. And then the second area of investment is awareness.
We hear over and over again, even from some of our largest customers, that a small percentage of their developers really know the full capabilities of our platform. So that's an obvious opportunity to increase our ability to acquire new workloads. And that's an area where we're gonna invest this year and invest more consistently going forward. So that's the rest of the bridge from 15% down to 10%, in addition to the $50 million.
Yeah.
The only thing that I would add is on the philosophy underlying it. Those were the puts and takes, so what's the philosophy? We observe our margin performance over the years, and we have high confidence that the business scales. And we don't have to go back to the IPO, when, as you remember, the margin was negative 37%, and celebrate the progress that we made from there. Even over the last couple of years, whenever we slow down investments, we just see margins shoot up. That's the underlying unit economics of the business showing up. So this is a proactive decision at a moment in time that is, in our opinion, unique: to invest in certain areas of the business to maximize the opportunity going forward.
We have the confidence that the business will continue scaling. This is not a forever state. This is a moment-in-time state. That's why Dev refers to it as a transition year. We're making that knowing that the opportunity set is coming to us, and we wanna make sure that we maximize it for the purposes of being the biggest possible company we can be in five years.
Yeah. I would just double-click on a couple of points. One, we just printed a 21% operating margin quarter, right? It's the best quarter of operating margin we've ever had. So the unit economics of the business are very strong. I would also tell you that, obviously we were private at the time, but we saw a similar moment happening with the cloud, and we invested very aggressively in building Atlas. Obviously that wasn't visible to all of you, but our existing investors saw the investment, and obviously that's paid massive dividends over the last, you know, seven, eight years of our growth. We see a similar opportunity with AI. Again, I cannot reinforce enough: if you believe in a world where the world's gonna change only more quickly with AI, you need a data foundation that's designed to be flexible and adaptable.
There's no more flexible and adaptable platform than MongoDB. And we're well optimized for the world of AI. So I think we are investing as a vote of confidence. Candidly, it would be a much simpler conversation with all of you to say, "Hey, we're gonna, you know, keep margins where they are, maybe even increase margins just because of what's happening in the marketplace." But we're actually investing as a vote of confidence because we see that opportunity. Customers are telling us that they wanna use us both to build new GenAI apps and to help modernize their existing legacy infrastructure. And I think those things will pay dividends in the years to come. Yeah. We have roughly, you know, 20 minutes left.
I wanna spend the bulk of the time diving into why MongoDB is well positioned to power the next wave of modern AI applications. And maybe we can level set the conversation. Dev, give us your thoughts on roughly the last 12-18 months, as 2024 progressed. What are your latest thoughts on where we are in the cycle? Are we moving out of the test, eval, proof-of-concept phase of the market and actually getting applications into production?
Yeah. You're talking about AI applications?
Yeah.
Yes. So I think this is a gradual journey. When I look at the enterprises today, I'll give you a couple of anecdotes. I was meeting with two large financial services companies in New York, and I asked them, "How many AI apps do you have in production?" One person told me about 25 to 30. Another person told me about, you know, 15 to 20. And I asked them, "How many of those AI apps are customer-facing?" Both of them said zero, right? Because they're very, very nervous about the risk of hallucinations, especially in a regulated business like financial services, right? So while they're very interested, you know, the use cases they're looking at are more around document management, customer service, just streamlining efficiencies, or sort of playing with agents.
But the challenge you have is that unless you really feel confident about the outputs you're getting, you're gonna be very measured in terms of the deployment model. That being said, when we think about, you know, and I just kind of, I'm old enough to remember the internet era where, like, people were kind of just building simplistic static web pages, and no one thought the internet would transform not only everyone's business, but everyone's life, right? I think AI will have that same impact. And I think you're seeing the innovation happen. You start with basic infrastructure, basic apps. Now you get into more sophisticated infrastructure, and you start to see more sophisticated apps. I think that's happening. Candidly, I think the first generation of GenAI apps will show up as ISVs.
You have companies like Harvey and, like, the code generation tools and all that; I think that's where it's gonna start showing up first. But I think the only way an organization can really differentiate itself is by building custom, purpose-built GenAI apps that are meaningful to its business. So why is MongoDB well positioned, right? Let's talk about that. One, fundamentally, we are a distributed architecture where we use a document-based approach to manage structured, semi-structured, and unstructured data. A lot of people say, "Well, Postgres supports JSON. You know, can't Postgres do what you do?" If you look at any performance results, as soon as you introduce a two-kilobyte JSON document or object, Postgres starts having performance issues, because Postgres, or relational databases generally, need to do something called off-row storage.
Off-row storage is a different technique for managing data that can't fit in rows and columns. In fact, Postgres has an approach called The Oversized-Attribute Storage Technique, or TOAST, which is a way for it to deal with, you know, oversized and unstructured data, but it comes at a performance hit. Postgres is also very rigid in its schema, so it's not very easy to change. And Postgres is designed to be a single-node system; it's not designed to be a scalable, distributed system. Some people have tried to make it a little bit more scalable, but it still has inherent scalability limitations. So when we think about a world that's gonna change and adapt and deal with different types of data and different modalities, we think we are well positioned.
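To make the document-versus-relational contrast concrete, here is a minimal sketch (not MongoDB's or Postgres's actual internals; the schema and data are invented for illustration). The same order is stored once as a nested document, where a read touches one object, and once normalized across two relational tables, where reassembling it requires a join at query time:

```python
import sqlite3

# Document model (MongoDB-style): one nested object holds the whole entity,
# so reading it requires no joins. This uses a plain dict as a stand-in; a
# real MongoDB insert would be collection.insert_one(order) via pymongo.
order = {
    "_id": 1,
    "customer": "Acme",
    "items": [
        {"sku": "A-100", "qty": 2, "price": 9.99},
        {"sku": "B-200", "qty": 1, "price": 24.50},
    ],
}
doc_total = sum(i["qty"] * i["price"] for i in order["items"])

# Relational model: the same entity is normalized across two tables, so
# reassembling it requires a join at query time (sqlite3 used for brevity).
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT);
    CREATE TABLE order_items (order_id INTEGER, sku TEXT, qty INTEGER, price REAL);
""")
db.execute("INSERT INTO orders VALUES (1, 'Acme')")
db.executemany("INSERT INTO order_items VALUES (1, ?, ?, ?)",
               [("A-100", 2, 9.99), ("B-200", 1, 24.50)])
(rel_total,) = db.execute("""
    SELECT SUM(oi.qty * oi.price)
    FROM orders o JOIN order_items oi ON oi.order_id = o.id
    WHERE o.id = 1
""").fetchone()

# Same answer either way; the difference is the shape of the data access.
assert abs(doc_total - rel_total) < 1e-9
print(round(doc_total, 2))
```

The sketch deliberately omits the storage-engine details Dev alludes to (TOAST kicks in when a Postgres row value exceeds roughly a page, moving it out of line), but it shows the structural point: nesting keeps related data together, while normalization spreads it apart.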
And just to be clear, I often get the question, what is the role of a database in the world of AI? I think of the LLM as the brain. I think of the database as the state machine and the memory machine. And then I get the question, "Well, what's Voyage AI?" Think of Voyage AI as a very fancy librarian. Let me explain what I mean. Imagine you hired Albert Einstein to be your personal assistant, the smartest person in the world. You ask him a question about chemistry or biology. No matter how smart that person is, they still need to go do research to give you the right answer to a hard problem.
And the challenge is, they could go read every book in the library and essentially get all the information and come back with an answer, but that'll take a long time and cost a lot of money. Or you can say, "For this particular question, go to this section of the library, go to this aisle, this shelf, this book, this chapter, this section of this page, to get the answer you're looking for to formulate a response." And that enables you to be much more efficient about how you do very sophisticated search and retrieval. Remember, this is data sitting in an enterprise.
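The librarian analogy maps directly onto embedding-based retrieval: an embedding model turns text into vectors, and finding the right "shelf" means finding the stored vectors closest to the query vector, so the LLM only reads the relevant passages rather than the whole library. A toy sketch (the 3-dimensional vectors are hand-made stand-ins; a real system would get them from an embedding model such as one of Voyage AI's):

```python
import math

# A tiny "library": each document is indexed by a hand-made embedding vector.
library = {
    "quarterly revenue report":      [0.9, 0.1, 0.0],
    "employee onboarding checklist": [0.0, 0.2, 0.9],
    "annual financial statement":    [0.8, 0.2, 0.1],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vector, k=2):
    """Return the k documents whose embeddings are closest to the query."""
    ranked = sorted(library,
                    key=lambda doc: cosine(query_vector, library[doc]),
                    reverse=True)
    return ranked[:k]

# A finance-flavored query vector lands near the two financial documents,
# not the onboarding checklist -- the "right aisle of the library."
finance_query = [0.85, 0.15, 0.05]
print(retrieve(finance_query))
```

Re-ranking, which Voyage AI also provides, would be a second, more precise pass over just these top-k candidates before they are handed to the LLM.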
Obviously, the LLMs have been trained on internet data, but they don't have the data sitting inside the businesses that you're in, you know, large banks, financial services firms, telcos, media companies, tech companies. So you need embedding models to help the LLMs become much more efficient in order to produce high-quality results. And we feel like, as the world produces more software, you're gonna see the software use cases expand dramatically into things traditional software could not do: dealing with open-ended questions, reasoning, natural-language interaction, different kinds of user interfaces, et cetera. So if the envelope of software is gonna increase, then by definition you need more data infrastructure to support that software. And you need real-time data. That's the other question I get: what about you versus Snowflake or Databricks?
Imagine, again, an agent making investment decisions. The model might have been trained, but to act on a decision, you need to know what that stock is trading at, you know, what the volumes are, any other kind of real-time data to make a buy or sell decision. If you're a customer chatbot, you need to know exactly what's happening with that customer to be able to respond appropriately to that customer's question. So that's where real-time data becomes incredibly important for these kinds of mission-critical applications. And we think, when you look at all the requirements, we are well positioned for that.
Yeah.
To pick up on some of those themes that you just laid out, and sort of incorporating the Voyage acquisition: if I rewind back two years, you know, vector database companies were kind of the hot thing. Increasingly, database companies across the market have embedded vector search capabilities. And now you guys seem to be, you know, pushing the puck forward with world-class embedding models and re-ranking capabilities. So it sounds like the unlock here is about bringing a solution
Correct.
to the customer. So
not just a solution, but a solution and an elegant user experience, right? It's back to the point that the most successful companies have been able to take friction out of the user's workflow. Customers still had to go get their data embedded to use a vector database.

You can't use a vector database without embedding your data. So they'd have to go to OpenAI, or they'd have to go to Cohere, or to some other third party. And we said that's a very painful process. Most enterprises don't know which embedding model to use, don't know which operational store to use, aren't sure which LLM to use. And then they gotta figure out what vector store to use and stitch it all together. That's why, you know, we wanna make it much more simple and easy for customers, because we can bring everything to bear, you know, in a very elegant user experience.
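Once the pieces live in one platform, the "stitch it all together" step collapses to: embed the question, then run a vector search against the same database that holds the operational data. A rough sketch of what that looks like with a `$vectorSearch` aggregation stage in MongoDB Atlas Vector Search; `embed` is a placeholder for an embedding-model call, and the index and field names are invented for illustration:

```python
# Sketch of a retrieval step for RAG, assuming MongoDB Atlas Vector Search.
# `embed`, the index name, and the field names are illustrative placeholders.

def embed(text):
    # Placeholder: a real implementation would call an embedding model API
    # (e.g. a Voyage AI model). Returns a pretend 3-dim embedding.
    return [0.12, 0.56, 0.33]

question = "What is our refund policy?"

pipeline = [
    {
        "$vectorSearch": {
            "index": "docs_vector_index",   # assumed vector index name
            "path": "embedding",            # field holding stored vectors
            "queryVector": embed(question), # embed the user's question
            "numCandidates": 100,           # breadth of the approximate search
            "limit": 5,                     # top-k passages handed to the LLM
        }
    },
    {"$project": {"text": 1, "score": {"$meta": "vectorSearchScore"}}},
]

# Against a live Atlas cluster this would run as:
#   results = db.documents.aggregate(pipeline)
# and the returned passages would be placed into the LLM prompt (RAG).
print(pipeline[0]["$vectorSearch"]["limit"])
```

The point of the acquisition, as described above, is that the `embed` step stops being a separate third-party integration and becomes part of the same platform.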
The classic kind of MongoDB value prop.
Exactly.
Right?
Exactly.
And so could you talk about, you know, what's been the storyline in terms of vector search adoption, RAG adoption?
You guys made it generally available earlier last year. How has that been building momentum? And do you think the Voyage acquisition is gonna unlock more of those RAG and agentic use cases?
Yeah. So the uptick has been good. We talked on the last call, not this call, about having thousands of, you know, small customers building AI apps. We have a couple of large, well-known AI companies using us as their memory and state machine for the use cases they're doing. You know, obviously with the competitive dynamics of AI, they don't really want us to talk about who they are right now. But we're starting to see some of those apps take off. And these are, like, seven-figure workloads.
And what I would say is, in terms of your question around vector, we're seeing adoption, but I think the Voyage thing really makes it so much easier and so much more compelling to use MongoDB, so that it's truly a one-stop shop.

What's gonna be the timeline to integrate the embedding models and the re-ranking capabilities into the core?

Yeah. So the way we're doing it is: today you can go to Voyage AI and get their models either from them or from Amazon Bedrock and a few other places. What we're gonna do later this year is something called auto-embedding, so you can choose, as data comes into MongoDB, whether you want that data embedded. So out of the box, all of that is taken care of for you.
Then we will focus on building domain-specific models. Right now they have models for financial services. They have models for coding. But we see a lot of customers, like healthcare customers, saying, "I want models," but a lot of healthcare data is not publicly available. So we can go to healthcare customers and say, "We can build models for you that are optimized for your particular, you know, use cases, your particular data, and then enable you to leverage the power of AI to really do profound things in your business." And that's something that customers are quite excited by. I was talking to an early-stage company that's growing very, very quickly in the life sciences and biotech space. And they're super excited about being able to leverage some of these models to just improve the performance and the accuracy of the outputs that they're getting.
So there's a big opportunity. And then there are other sophisticated things we can do around the models, like instruction tuning, that we will have a roadmap for going into next year.
You talked a little bit about the advantages, or maybe the limitations, of Postgres. If you look at the big picture, at the operational database market, relational is still two-thirds to 70% of the operational database market. So my question is: it seems like relational has this sort of, I'll call it, supply-side advantage. You know, kids go to school, undergrads in computer science, and they still learn SQL. So you're pumping out thousands of students a year that know SQL and relational.
If we take a step back, what is the MongoDB strategy to penetrate that supply side and get developers to learn MongoDB, whereas, you know, every year you have new developers being pumped out learning relational?
Yeah. There's an old Chinese proverb we love to use inside MongoDB: "If you tell me, I will hear. If you show me, I will see. If I experience it, I will learn." And what we find is that MongoDB is one of those technologies that's well-known but not known well. I'll give you a simple example. We have a large financial services customer that two years ago was spending a little over $1 million with us. Fast forward 24 months, and now they're spending $22 million with us.
So it's an eight-figure, you know, eight-figure account, and the first digit, it's not a one. Okay. Pretty good growth. How much upside is there in this account? Doing the account review, I almost fell off my chair when I found out that only 5% of their developers know how to use MongoDB. Now, they have tens of thousands of developers, but only 5% know how to use MongoDB. So that eight-figure account could easily be a nine-figure account. And our issue, you know, what we have to work on, is getting those developers to more easily understand how to use MongoDB, not just know about MongoDB: how easy it is to work with and organize data, how easy it is to shard or scale out data, how easy it is when you natively embed text and vector search into your platform.
And now, with embedding models, that user experience is so much simpler and less complex. And when we do that, we see the growth. That's why we're also investing up market: the sales productivity is disproportionately higher in the high end of the market than in the mid-market, for that exact reason, that there's so much opportunity in these accounts, even where we already have a large presence.
Yeah. A key part of your strategy for growth, one of the pillars, I should say, for your growth is your AI Relational Migrator service. I know there's been several pilots going on at some very large customers. What have been the results thus far from the customer's perspective?
And what is the go-to-market and investment plan to scale these early successes to the rest of the customer base?
Yeah. So just to level set, the database market is one of the largest markets in software, but a big part of that market is these trapped legacy applications running on legacy databases that are frankly very, very hard to, you know, move or change. And they're hard because, one, there's lots of lines of code, and two, they're the crown jewels of an enterprise. So people get very nervous about messing around with them. However, those companies are facing technical debt: 90%-95% of the budgets are just spent on maintaining those applications. They're running into end-of-life issues, where a lot of those technologies are reaching end-of-life.
There are compliance and regulatory issues, where the regulators are saying those applications are becoming a systemic risk, especially in, like, financial services or healthcare and other places. And then when you say, "Okay, how are you gonna leverage AI to modernize your business?", they can't do that on those existing applications. So there's a confluence of events happening that says, "We gotta do something different." So when we approach customers about modernizing these applications, the receptivity is very high. And then, when we show them, there are typically two objections that surface. One, are you selling me snake oil? Because customers tend to be a little cynical. So we typically run a six-week proof of concept where we say, "Pick an app and we'll show you how you can modernize." And the results end up being very positive.
And in some cases, the customers have stopped those pilots or POCs because they've seen enough to say, "You've convinced me." The second question is, "Okay, why MongoDB?" And then we walk them through what I just talked about in terms of our architecture and everything that comes with it. And we get buy-in on that front. And then we start modernizing. We already have a couple of proof points. We already have this in press releases. And what we have realized is that there's so much demand, we want to be able to move fast. So we've actually decided to focus on Java apps running on Oracle. Just to give you an example, we're working with one large insurance company to basically modernize their key underwriting application.
There are tens of thousands of lines of stored procedures sitting inside their Oracle database, right? Typically, the way people built applications previously was to write application logic at the app tier as well as at the database tier, to get better performance. The problem is that over time it becomes spaghetti code that's very difficult to unravel, and you get locked into the platform. And what we can now show is that we can reason through all that code. We can chunk up that code and peel off pieces of the functionality. We're doing engagements right now with this global insurance company in Asia, and they're going country by country. As soon as we knock off a country, they bring us to another country.
The pipeline of opportunity just in that account is growing dramatically. And that's just one small example of what we see across, you know, other financial services customers, telecom customers, older ISVs who wanna modernize. We had an ISV in Germany that asked us to modernize a financial application, which, you'd think, is all structured data, but they said they can get far better performance and add features more quickly if they build it on MongoDB. We've done that.
These sound like super exciting opportunities. If I look at, like, I'm gonna call it your new workload opportunity, including the AI Relational Migrator service. Let's put vector search, RAG in there as well. Let's put stream processing there.
If we think about these as a bucket or a class of opportunities, when do you think that starts to move, benefit growth in your cloud business? Is that something that happens this year or is it more?
I mean, I think it's part of the reason for stable consumption growth; it's just starting to show up in the numbers. At least, you know, search is starting to show up in our numbers. Again, it's all part of Atlas consumption, so it's hard to disaggregate, but that's part of what's driving our confidence in the stable consumption growth that we talked about in Atlas. And as I said, this is a year of transition. We clearly have an appetite to grow faster and deliver better margins.
I recognize there's a lot of people in the audience who might be wondering, you know, is this, you know, is this what we expect the business to do? And I can tell you absolutely not. We are very, very motivated to grow much more quickly, and do it much more efficiently.
Yeah. And so maybe, you know, with our last minute or so, Dev, just take the opportunity to speak to what excites you about MongoDB today. The business is at a $2 billion scale. You've got a world-class cloud business. Looking forward, what excites you, and why do you think MongoDB is a good investment opportunity at these levels?
Well, I'm happy that we're a $2 billion company, but in some ways, I'm a little pissed that we're not a $20 billion company, right?
I think that's the long-term opportunity sitting in front of MongoDB. As excited as I was about building Atlas, and again, we were a private company, and a lot of people were skeptical: "Wait a minute, you're gonna partner and compete with the hyperscalers? Who's done that? How can you prove that? You know, why won't they strip mine your product?" You know, all the bear cases against MongoDB, and we were able to prove them wrong. I'm equally excited, if not more excited, by the opportunity that AI presents. I think it'll be similar. I think you're starting to see the market shake out. We saw DataStax get acquired by IBM. A lot of these kind of, you know, single-function databases or point solutions just can't scale and grow very quickly. I think that gives us more opportunity.
Maybe just to close with one of the debates that we've been getting this morning, since we started a little bit late. A common question I get is hyperscaler competition. Is there any truth to that, in terms of their ability to bundle maybe Cosmos or DocumentDB? Are you seeing any signs of that as a potential headwind on the business?
Yeah. That's been a question we've been getting ever since AWS launched DocumentDB in January of 2019. And our Atlas business has only grown faster since then. What I would say is our relationship with the hyperscalers has never been better. It's actually really good. They're working with us on some of these app modernization efforts and programs.
And they do that by funding some, you know, credits and so on and so forth to get customers to move more quickly. And then we're partnering in the field from a sales point of view. Clearly there's an air of co-opetition. They do have their first-party services, but we've been in this business long enough. We know how to partner and compete. And what we find is, if there's a hyperscaler who doesn't wanna work with us, there are two others who are happy to go after that opportunity together with us. So we know how to use that motion to help drive business.
Awesome. With that, we're out of time. Thank you, Dev. Thank you, Serge. Thank you for updating us on the MongoDB opportunity.
Thank you.