Okay. Good afternoon, everyone. Welcome. I'm Mark Murphy, Software Analyst at JP Morgan. It is a great pleasure to be here today with Olivier Pomel, who is the CEO and Co-Founder of Datadog. First off, Olivier, thank you so much for joining us here today.
Thank you for having me.
Most investors in the room are familiar with Datadog. You're approaching a $2 billion revenue run rate. For those who are not, could you give us an overview of Datadog, help us understand the types of problems that you're out there solving.
Yeah.
How have you developed this amazing product that's allowed you to emerge from a very crowded field and become the leader in observability?
Yeah. What we do is observability and security for cloud environments. We sell to engineers, to companies that are in the cloud or moving into the cloud, which as it turns out, is just about every single company out there. We instrument every single layer of their stack, you know, be it their infrastructure, their applications, their logs, what their end users themselves are doing on the platform, as well as understanding how to secure those applications. We do it for a very, very broad set of customers that range on the low end, from individuals and students who don't pay us anything, all the way up to the largest enterprises in the world, you know, that pay us tens of millions of dollars a year and everything in between, basically.
One of the advantages of Datadog that we consistently hear when we're out there doing our field work is that there is this very amazing breadth of capabilities contained in a single pane of glass.
Mm-hmm.
You can feel that when you look at a product demo. Customers will tell us that it's a one-stop shop. They'll tell us they can cover a very large portfolio. They'll tell us they can consolidate a lot of different tools.
Mm-hmm.
Into this one Datadog platform. Can you talk to us a little bit about what is behind that design ethos, and how do your customers benefit from having this very clean and cohesive product?
Yeah. When I started the company with my cofounder, we actually didn't set out to build a better mousetrap or a better infrastructure monitoring product or a better application performance monitoring product. The starting point was to get teams that didn't talk to each other to actually, you know, see the world the same way and work together well. In this case, this was the developers and the operations folks. You know, in a previous life, I used to run the development team, and my cofounder used to run the ops team, and we were very, very familiar with the conflict and the finger-pointing that can arise in those situations.
You know, as a result, our approach has always been to bring as many different teams, as many different datasets, as many different things under the same roof, in the same platform as possible. As we go to market with that, we end up, you know, formatting it in a way that fits within the existing categories. You know, even though we have a unified platform without any, you know, hard boundaries between products, we're going to displace infrastructure monitoring product, we're going to displace application performance monitoring products, logging products, what used to be separate categories. As far as we're concerned, you know, this is a unified platform. When we build products, when we add to our platform, we focus on two things.
One is getting adopted by as many people inside the enterprise as possible. You know, which is why we don't charge by the seat, basically, that we want as many eyes on the product as possible. That's core to our success. The second thing that we optimize for is to have the maximum surface of contact with our customers' infrastructure and systems. The combination of those two, basically being used by everyone every day, being deployed in every single server, every single piece of infrastructure at every single layer, is what gives us a surface of contact to then expand and build more product and solve a bigger and bigger problem inside of that surface of contact over time. That's been the recipe for adding all these products over the past few years.
That surface area, that ability to have that contact area as you've described it has allowed you to build up to 26,000 customers. You have 3,000 that spend over $100,000 annually at this juncture. We recently spoke to a consultant who told us he has seen Datadog contracts that range in size from $40,000 up to $23 million in annual spend. We know they can get above that. You can cover a tremendous range of budgets. Most of the software companies that we cover are gonna decide, they're either gonna target small, medium, or large customers. Why is Datadog trying to be out there covering the whole spectrum?
We focus on the users. And what's interesting about the move to the cloud and modern software in general is that the users actually look a lot the same, you know, whether they are at small companies or startups or at very large enterprises. It is very different from what it used to be like 10, 15, 20 years ago when I started my career. You know, at that time, when you were at a small company or building something for yourself, you were building on open source and, you know, a specific set of hosting providers and things like that. When you were at a large enterprise, you would decide whether you bought the Microsoft platform or the Oracle platform, and you had all of these enterprise tools that you built on top of.
Today the building blocks are the same open source components and the same cloud components for everyone. You know, if you're a tiny company, you're going to build on AWS and some open source. If you are a very, very large company, you're going to build the same thing. Maybe a few different flavors, a few different providers, but the components are gonna be the same across the whole user base. This is what allows us basically to cover the full spectrum. It's interesting to us. I mean, even though when you look at the numbers, the bulk of our revenue comes from the larger enterprises. You know, at the very low end, we have people who pay us a couple of dollars a month. Obviously, it doesn't pay the bills for us.
I think the bottom half of our customers is 1%, about 1% of our revenue. Clearly, we don't do that for the money. We do that for the network effects it gives us as a business. You know, it allows us to be present in all of those different setups or different situations, integrate our software really, really well, and support everything that is emerging from a technical perspective. It also gives us a, you know, basis of users to expand upon as people who used to work at small startups or who used to be students go work into larger companies, and they can adopt our product there.
Okay. You're able to cover the whole pyramid-
Mm-hmm.
Very, very gracefully and with architectural cohesion. Datadog is also working at a cloud scale, right? Some of these metrics, you're handling over 10 trillion events per day. You have millions of servers that are running at any point in time. It turns out that's all it takes to keep the entire internet running, right? That's all you have to do. What does the engineering project look like? You're able to do this in a way where you're helping companies prevent bottlenecks, and you're actually keeping that large of a surface area secure.
Yes. The order of magnitude is actually now around 100 trillion events a day that we process. At any point in time, we're going to have tens of millions of containers run by our customers on our systems. We're doing that with a team that's a bit more than 2,000 engineers. Obviously, we have really high constraints in terms of availability, in terms of, you know, freshness of data and things like that, as we are absolutely business critical for our customers. The hardest part of maintaining that is actually to be able to keep shipping very quickly and build more product as we keep that system, you know, up and reliable for our customers.
A lot of our efforts and a lot of the tooling we've built have to do with that. The way we do it is we pretty much lean hard on the cloud. You know, the cloud allows us to ship quickly. It allows us, for example, to stand up brand-new instances of our product when we ship new functionality, to test different versions of our software in parallel for correctness. It allows us also to change our minds, you know, very quickly when we want to, you know, do something differently or rebuild a system, which we do on a regular basis. Like, we've been rebuilding all of our internal storage systems, for example, on a regular basis. The cloud allows us to do that, to change our minds.
The last thing we do is we actually dogfood quite a bit. You know, I think we're not quite the largest user of Datadog, but almost, in terms of the number of eyes we have on it, the amount of services, and the amount of data we send to it.
Okay. Datadog is eating its own dog food. You know, Olivier, if we step back and think about the history of it-
Mm-hmm.
Datadog has been known for monitoring and analytics.
Mm-hmm.
Very recently, you've made this concerted push into the security market, specifically. You have said that this strategy is very ambitious. You've already crossed 5,000 security customers. This feels like something that's moving along very rapidly. Would you talk to us a bit about your longer-term vision in security, and why would you say that that is a logical adjacency for Datadog when, as we all know, that security market has very established vendors that have been, you know, working on trying to solve some of those problems for a long time?
Yes. The premise for that is that, yes, there's plenty of technology and vendors for security. But the problems that these vendors set out to solve are not solved. We think actually that the problems are getting worse. Basically, the complexity and the attack surface, or vulnerability surface, of applications has been growing faster than the solutions that have been brought to combat that. We think it's only getting worse. Our approach is a little bit different. If you look at the way most of the vendors try to approach security, they rely on the security teams to solve it. The security teams or the security engineers are vastly outnumbered by operational people, ops engineers, for one thing, about 10 to one.
They're even more vastly outnumbered by the software engineers, who outnumber the ops people by, you know, five to one or 10 to one. Basically, our approach is, if you want to solve the security issue, you have to leverage the teams that have the most firepower, and also the teams that are actually writing and maintaining the software, not just security engineers or security analysts who come further down the pipe and are vastly outnumbered by the other teams. That's the premise. In addition to that, it so happens that we already gather all the data we need for security. We are everywhere, at every single layer of the stack.
We see everything that is happening, what all engineers are doing, what all end users are doing. We have all the information necessary, and we don't need customers to go and instrument things another time with another vendor to incur more friction to do that, to incur a tax on data or a tax on CPU to instrument everything again. We think, you know, if you fast-forward a few years, it should be irrational not to consolidate your security alongside your observability.
Okay. Maybe security will be an area that will not be subject to this, you know, kinda cloud spend optimization trend, right? Not as subject to it. Let's spend a moment talking about that. It has been a crazy 12 months, right, for cloud software. If you go back one year ago at this conference, AWS and Azure were growing in the 30%s and 40%s.
Mm-hmm.
Our survey work started to show some cracks there last summer and last fall. That's a little bit of a plug for our survey work. I think we underestimated the severity of that.
Mm-hmm
As well. Customers started pulling back. We now have AWS growing 11% in the month of April. Azure is slowing into the high 20%s. You know, so we're sure that you have a good vantage point into this. What do you think changed, and where do you think we go from here on the topic of optimizations?
What changed is, look, there's a lot of economic uncertainty. A number of large companies are trying to save money. A number of not-as-large companies have had their own economics flip quite radically. You know, we've gone from a situation where a lot of the fast-growing cloud-native companies stopped growing as fast and were pressured to deliver more profitability to their own investors. I'm sure you're very familiar with that. That's what yielded the cloud optimization we've seen. I would say this is the system working exactly as intended. The promise of the cloud is that you can change your mind, you can adapt it to your business realities.
It's not all sunk costs; just because you made a commitment at some point, you don't need to live with it, you know, for the next five or 10 years and wait for your data centers to expire.
Mm-hmm.
I think it's exactly that, you know. It's very healthy. The way we see it from the inside is we've always seen cycles of optimization with our customers. That's just the way cloud is adopted. We see customers adopt the cloud, typically without a lot of visibility into how much they're going to consume. They grow, grow, grow. They get to a point where they ask themselves, do we need all that? They optimize a little bit, and then start growing again. We see them do that on a regular basis. I think what happens in these situations, what we've seen over the past few quarters, is that everyone is doing it at the same time.
Mm-hmm.
We see also customers, you know, taking several bites at the apple. You've seen that also publicly with the multiple rounds of layoffs, you know, certain companies have had. Where, you know, they're not quite sure where they need to end up or they still have some uncertainties about the environment and optimize on multiple occasions.
Coming off of Microsoft's most recent quarter, the vibe we were getting was that their view is this elevated optimization trend-
Mm-hmm
Is gonna be something that is gonna last for quarters, not years. Is that something that you would agree with?
Yes. I mean, I can't give you the number of quarters. That's, I think we're all, you know, trying to see what's going to happen here. Definitely we're very optimistic about the mid to long term. We're cautious about the near term.
Okay. Maybe, maybe then we can zoom out and talk about the multi-year view on this. Every year, we run a large-scale CIO survey. In the most recent one, it showed that 22% of all IT spend is going to public cloud.
Mm-hmm.
These large CIOs see that rising to 42% five years forward from today. There's a near doubling, right, in the mix. This is a major kind of re-architecting. I think what investors wanna know is, there's a debate out there: has cloud spend reached a maturity level where, say, growth is just going to be in the teens? I'm talking about the bigger picture, you know, including the hyperscalers. We would assume Datadog would grow well above that because you always have, and there are good reasons why. Do you think that that is gonna happen, or do you kind of foresee a bigger rebound coming at some point where we would look back and say, this really was more of a temporary slowdown?
Look, it's hard to say what the numbers are actually going to be. When you look at all the folks whose job it is to try and forecast the macro trends from the outside in, I think there were reports from Gartner recently pegging the annual growth over the next three to four years at 20% for cloud environments. We think it's a reasonable, you know, estimate. We'll see whether we come in under or above that. To your point, historically we've outperformed the growth of the cloud providers, and there's a few reasons for that. You know, the fact that we're increasing scope and distribution as we do so.
Mm-hmm.
If you step back a little bit and think about the big drivers for this increase, the main two drivers are digital transformation and cloud migration. I think if you think of the latest developments with the rise of AI, it is only going to accelerate digital transformation and cloud migration. Everybody wants to transform with all the new technologies we're seeing, the new innovation in AI. To be able to use that, you need to be digital. Like, you need to have your data. You can't do it without it. You absolutely need that. And the only way to adopt it is to actually be on the cloud.
I don't even know how you would build an on-prem AI strategy today. How would you even know what to buy, what to do? Like, the technology is changing so fast that you would have to redo everything every six months or every year. I think those two trends are going to be, if nothing else, you know, accelerated and emphasized by the rise of AI.
Thank you for bringing up generative AI without me having to. We're of course gonna get to that topic in just a moment. Before we move on from talking about the optimization trend, does it matter to Datadog which hyperscaler is growing faster? Because, you know, people are noticing that out there today-
Mm-hmm.
It's actually Azure and Google. Right now, at this instant, those two are growing a little faster,
Mm-hmm.
Relative to AWS. I think people try to run the math and, they try to understand, you know, do those relative weightings matter to Datadog?
The answer is no, it doesn't really matter. We're equally well-positioned with all of the various cloud providers to capture new workloads. If you look at our business today, we're more heavily weighted towards AWS; that's because that's where the bulk of the market was when we started. That's also where the cloudiest workloads were initially. Like, if you compare AWS and Azure, for example, Azure has a lot more of the traditional lift-and-shift, purely Microsoft environments, for which, you know, we don't bring as much value as for more modern workloads that are heterogeneous in terms of the technology stacks they involve. If you look at the newer workloads, what actually is going to drive the growth for AWS, for GCP, and for Azure, these are all very cloudy.
These are all, you know, very heterogeneous. In all likelihood, they are all going to give us quite a bit of room to capitalize on data and AI.
Okay. Now, the flip side of this discussion as well: if we think back, Olivier, to your user conference last October, you announced the release of a product that could actually help you capitalize on these cloud cost optimizations. You have a product for this-
Mm-hmm.
That would help customers control their own costs in a cloud environment. That, as we know, has become a topic of discussion and an action that's at the forefront of CFOs' minds. Can you give us a bit of insight into that product, especially in terms of the rollout? How's the customer response, and how is that unfolding?
It's a product that's brand new for us, cloud cost optimization, basically. It has extremely high demand from customers, for all the reasons you've mentioned. The initial focus of that product is to close the loop between the bill and the development and operations teams that actually are at the point of consumption and the point of deployment and management, you know, for the infrastructure. It's actually really, really hard to bridge the two, because, you know, many, many things happen between the consumption and the actual dollars leaving the bank account of the company. There's a lot of work involved in bridging the two.
If you look at the existing market, there's a whole cottage industry of cost optimization products out there. They tend to be more siloed products that are built for the finance teams or the FinOps teams, the cloud FinOps or cloud ops teams that exist within a lot of companies with cloud deployments. It's really, really hard for those teams, in isolation, to actually clean up the data and make sure it's right, and they can't actually get it in front of the engineers who understand it. That's been the initial focus for that product. Today, again, it's still early. We're still heavily developing it and adding features and integrations with cloud providers and things like that.
You know, just to understand the potential there, we charge, you know, in the single-digit percent of our customers' cloud bill for the whole of Datadog today. That's the magnitude of what we get. There is tremendous opportunity if we can drive, you know, 5% or 10% worth of savings for our customers on their cloud bills, for our product to basically pay for itself in its entirety just through cloud cost optimization.
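The back-of-the-envelope math behind that "pays for itself" point can be sketched in a few lines; every figure here is hypothetical, chosen only to illustrate the claim, not Datadog's actual pricing:

```python
# Illustrative arithmetic only: all numbers below are assumed, not real.
# It shows why 5-10% of cloud savings can cover a bill that is only a
# single-digit percentage of cloud spend.

cloud_bill = 10_000_000   # a customer's annual cloud spend, USD (assumed)
datadog_share = 0.04      # "single-digit percent" of the bill (assumed)
savings_rate = 0.07       # savings found by cost optimization (assumed 5-10%)

datadog_cost = cloud_bill * datadog_share  # ~400,000 USD
savings = cloud_bill * savings_rate        # ~700,000 USD

# The savings alone exceed the entire Datadog bill.
print(savings > datadog_cost)
```

Under these assumed rates, the identified savings are larger than the whole observability spend, which is the sense in which the product would "pay for itself."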
The response is great, and if you run that type of math, you're saying that this can kind of give you a lot of air cover for-
Yes. Yes.
Much bigger footprint. Okay. Now, Olivier, you said a couple of moments ago, you got my attention when you said you can't even imagine how a company would run generative AI on premise. I wanna ask, you have a pretty bullish framework on public cloud. How far do you think this transition to public cloud can progress? For instance, if our survey ends up being correct and we reach 42% penetration in five years, where do you think it's going after that? Where do you think it is five years after that?
Yeah. We think it ends up being vastly, you know, dominating just about every single other form of IT spend. It doesn't have to be public cloud. It could be private cloud. I think today we see, especially very large organizations, build some private clouds that pretty much look a lot like the public clouds.
Mm-hmm.
You know, it used to be, like, five, 10 years ago, that it was really difficult to build a private cloud. There was no clear technical destination for that. You had to invent everything yourself or use technology that was not widely adopted. Today, with Kubernetes in particular, there are many ways to do that. For AI workloads, not quite yet. I think we're still very early. You know, you can imagine that in five or eight years, that might be the case. For us, it doesn't matter whether it's public or private cloud. We think everything is going to be cloudy. Every single investment made will be in that world, and no further investment will go into the legacy workloads, pretty much.
Okay. What is your sense of business confidence from your own customers? Part of the reason that I ask is, coming off of Q1, you guided for the sequential growth rate in Q2 to be slightly faster. Datadog has a tendency to outperform-
Mm-hmm.
The guidance that it issues. For this audience, do you think we should be reading some optimism into that, into the trending into Q2?
Yeah. You know, it's a bit of a tale of two cities as far as we're concerned. Whenever it comes to new logos, new customers, new products, we are super optimistic about what we hear from customers, and we see great traction with pretty much everything we do there. Which means digital transformation and cloud migrations are alive and well; consolidation is also alive and well and is working in our favor. The flip side to that, though, is that the growth of existing customers who already have a large footprint on the cloud is still fairly uncertain, as we don't know, you know, whether those customers are done optimizing just yet. I would argue that it is unknowable, because they don't even know it yet themselves.
If you look at our numbers, mid-term and long-term, new logos, new products, and the big trends of migration and consolidation are going to dominate in terms of the success of the company. In the short term, though, it's the growth of existing customers that has an outsized impact-
Right.
On what's going to happen.
Right.
We're very cautious in terms of what happens in the very short term, but extremely optimistic in the mid-term and long term.
Okay. Now, let's take a few moments to get into the topic of generative AI. This has really become the single biggest topic for us. I keep pointing out it's evolving at light speed, and I think some of the capabilities are amazing. They're a bit frightening. As we look at this as it relates to the monitoring market, the bulls are saying generative AI thrives on large datasets, right?
Observability players have some of the largest datasets that we're aware of out there, so it's gonna be positive. You listen to a bear, and they would say, well, just take all these logs and metrics and traces and stream them directly into one of these large language models. Ask it which apps are running low on memory, right? Which servers are low, where something's low on storage. Ask it to actually go and remediate, and see what'll happen. Which is a hypothetical discussion. Datadog is known for being well ahead of the curve. What do you think generative AI is gonna mean for Datadog?
First of all, to put the numbers in perspective, trying to send observability data directly into large language models would be orders of magnitude more expensive than running the application that produces that data to start with. You know, I think our largest customers would probably have to spend the GDP of California just to make that happen. And as of today, it doesn't even work, right? It doesn't even give you the answer you want. You know, from a technical perspective, the answer is still going to be: you have to gather the data as smartly and with as low an impact as possible.
You have to understand how to summarize it in a smart way. You have to understand, you know, which models you can train that are very, very specialized and very, very cheap to run, so that you can then maybe combine that with some of the more expensive and more general-purpose models to try and tie things back together. Basically, at the end of the day, it's the job we're already doing of, you know, building a system and integrating all these datasets and trying to be as smart as possible and push a boundary as we go. In terms of building an observability service, you know, it doesn't really change.
What it does give us, though, is more tools that we can use and a very clear direction that everybody's following in terms of what can be done with this AI. It also drives up the expectations from customers in terms of what can be done and what they want the market to provide to them, you know, from these solutions. More broadly, though, we think that the move to AI, as I mentioned earlier, is going to accelerate digital transformation and cloud migration in general.
I think we'll also see a number of workloads specific to AI that are going to emerge and that are going to open many opportunities for us, in addition to, you know, perhaps dramatically expanding the rate at which applications can be written, which we see as an increase of complexity and as a transfer of value from writing software to observing it, understanding it, fixing it, and securing it, which is what we basically do. The equation here is, if one person can produce 100x more stuff-
Mm-hmm.
Then they understand the stuff they produce 100x less. As a result, you know, a lot of the value comes after you've produced the application, in terms of, you know, stitching everything together and making sure you understand how it works.
Okay. So you see it accelerating the rate of application development-
Yes.
Right? kind of beyond the typical company's ability to monitor, track, and observe it and-
Yep.
And lock down that environment. That's where you see the opportunity-
Yes.
The bulk of the opportunity. Okay. Earlier this month, you also announced an integration with ChatGPT, and I think this was described as helping companies monitor their own usage of the OpenAI APIs.
Mm-hmm.
Could you maybe offer us a real-world example so that the audience can try to visualize what is going on there?
This is what we do for all of the services we instrument. We help our customers understand whether they're using the OpenAI APIs to start with, and how these perform. Like, you know, do they respond fast enough? Do we get errors back? Do we get successes back? How much does it cost them, which is a factor also for large language models, as it turns out. In addition to that, our customers can use the integration to track specific interactions. They can log specific interactions, understand the quality control and what's actually happening, and what completions they get from the engines.
Then we also, you know, plug that back into our APM, or application performance monitoring, so that when our customers, which is something they're starting to do, by the way, integrate the large language models into their own applications, they can run a distributed transaction. They can understand how a distributed transaction is going to start from their own web APIs or web applications, go out and hit the OpenAI models, and come back to the applications afterwards. We'll be able to let them trace that end to end. We basically integrate it all into the monitoring we do for the applications.
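The kind of telemetry described here, latency, error status, and token-based cost around each model call, can be sketched in a few lines. This is an illustrative stand-in only: the function names, the stubbed model call, and the flat token price are all hypothetical, not Datadog's or OpenAI's actual APIs:

```python
import time

# Illustrative sketch only: names, the stubbed model call, and the flat
# token price below are assumptions, not real API surfaces. It shows the
# shape of the telemetry discussed above: latency, success/error status,
# and token-based cost recorded around each model call.

PRICE_PER_1K_TOKENS = 0.002  # assumed flat rate, for illustration only

def call_with_telemetry(llm_call, prompt, metrics):
    """Invoke llm_call(prompt) and append one telemetry record to metrics."""
    start = time.monotonic()
    try:
        response = llm_call(prompt)
        status = "success"
    except Exception:
        response = None
        status = "error"
    duration_ms = (time.monotonic() - start) * 1000.0
    tokens = response.get("total_tokens", 0) if response else 0
    metrics.append({
        "status": status,
        "duration_ms": duration_ms,
        "tokens": tokens,
        "cost_usd": tokens / 1000.0 * PRICE_PER_1K_TOKENS,
    })
    return response

# Stand-in for a real LLM API request, so the sketch runs on its own.
def fake_llm(prompt):
    return {"completion": "ok", "total_tokens": 500}

metrics = []
call_with_telemetry(fake_llm, "hello", metrics)
print(metrics[0]["status"], metrics[0]["tokens"])
```

In the real integration, records like these would flow into Datadog's metrics and APM backends and be joined to the surrounding distributed trace, rather than kept in a local list.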
Okay. Let me ask you one more quick one, and then we'll try to get to one in the audience, and we have to try to do this in less than four minutes. I wanna ask you, there's this growing sense that these bots and LLMs are becoming a lot more sophisticated.
Mm-hmm.
There's a sense that in a few years the world might not have as many people, as many humans that work in a call center, contact center, IT helpdesk. How do you view it in your market? Because for your market, typically, I think they're gonna be called an SRE-
Mm-hmm.
A site reliability engineer. You know, they're responding, they're resolving issues. Are we gonna have as many of those roles in five to 10 years?
I mean, our market generally is engineers, right? Software engineers and SREs and security engineers. What's possible is that it changes the ratio between SREs and software engineers. Our job there, basically the part that we provide, is to help leverage those SREs more, give them more tooling so they can understand how to run the application better and leave more of the work to the software engineers directly. That's one potential future. When it comes to software engineers, I would say it's interesting to try and predict whether we'll see slower growth in the number of software engineers required or not.
I think if you look at previous waves of innovation, gains in productivity from technical innovation haven't slowed down, you know, the increase in the number of engineers. Basically, an engineer can do more, so you can differentiate more, you can build more, and you keep building more. It is possible, though, that with this wave, for certain types of companies, you know, there's a point at which they reach enough. Like, if technical differentiation is only part of what they do, it's possible that we see less growth there. As far as we're concerned, it doesn't matter much. Our business model is not predicated on selling seats to engineers.
Right.
You know, if we are in a position where we can allow customers to do more with less, in the end, that's better. That's great. It's fantastic.
Yeah.
I would say it's hard to predict whether we'll see a lot more or a lot less there.
You may be glad you don't have per-seat pricing at some point. Let's try to take a quick one from the audience. I think we need to run a microphone, but if any hands go up, we can do that. We cannot do any short-term questions, because there's not gonna be any business update provided here. Please go ahead and raise a hand if you have a question out there. Okay, I don't see any. Olivier, let's finish on the topic of market convergence here, right?
Mm-hmm.
I think many investors have had this idea for a long time that you're gonna see a convergence.
Mm-hmm.
Logging solutions, monitoring solutions, security solutions, they're converging into this single platform. The term people are usually using is full-stack observability.
Mm-hmm.
They think about it across a whole range of competitors. Could you speak to us just in terms of how do you think the view differs if we're talking about Splunk or Elastic or Dynatrace or someone else? How do you think the view in the architecture differs in terms of trying to attack that converged market?
There's a few things. I mean, one is, we started from a different place. We started from trying to bring everything together and everyone together, which is more of an afterthought or a reaction for pretty much all of the other players.
Mm-hmm.
In the industry.
Mm-hmm.
We've architected our platform and our product on it. We've also architected the company around it, you know, which is what I mentioned earlier about serving a very wide customer base, everyone from individuals all the way up to the largest enterprises. We think it's difficult to replicate. The last thing is, we optimize the company for the ability to innovate and to ship fast. You know, we're purely SaaS. We've never done on-prem. The reason for that is it allows us to build faster, innovate faster. And we, sorry, I just forgot what I was going to say, but we-
Your point is that it was, this was your original thought from the beginning, right?
Yes.
In architecting the product, that this was gonna be all converged and all cloud, right?
Yes, yes. You know, I mentioned it earlier, actually. I've got my train of thought back. We, you know, we built this product to be as widely deployed as possible, you know, among the humans at our customers, but also among machines. Again, we have this very large surface of contact. We get very fine-grained data on what it is we can improve for our customers and what other problems we can solve for them. It's very natural for us to expand and to push at that boundary over time.
Okay.
You know, if you look at the proof in the numbers, when we took the company public, we were quite a bit smaller than pretty much everybody you mentioned. We had an engineering team that was a lot smaller. We've been able to innovate a lot faster than everybody else there. I think it's only been a year since we've been bigger than most of those companies.
Mm-hmm.
You can pretty much assume that every single piece of product we've shipped was accomplished with a smaller engineering team than these other companies had, which goes to prove the validity of the model and of the architecture of both the company and the product.
Perfect note to end on. Olivier, it has been incredible to watch the growth and the success of this company and the way it has been blossoming. I just can't thank you enough for taking the time to be here with us today.
Thank you.
Thank you.