Can you guys hear me okay? Yeah. Thanks, everybody. This is the lunchtime keynote at the B of A Technology Conference 2025. I am.
Does that mean they've eaten already or not yet?
They are eating, in the process of eating, and yeah, some have completed it. My name is Koji Ikeda. I'm one of the software analysts here at B of A, and I'm absolutely thrilled to have the CFO of Datadog, Dave Obstler, here for us for a fireside chat. To kick it off, I think a lot of us in the room know what Datadog is, but maybe for those that aren't familiar with Datadog, tell us a little bit about Datadog. Where does it come from, and what are the problems that you guys are solving today?
Yeah, so Datadog is a platform that is used by production engineers and DevOps to monitor the creation, deployment, and functioning of software applications. Usually, those software applications are customer-facing. They're critical to the business. They also have been delivered principally in the cloud and are architected with modern technology like containers, serverless, and increasingly AI. The Datadog platform is used to see what's going on in the environment, to investigate problems, and to make sure that latency is reduced and uptime is maximized, all the way out to how the customer is interacting with the application on the website or mobile device. Over the years, we've expanded from a start in infrastructure monitoring, host, container, and server monitoring, to a whole range of products, SKUs, which include application monitoring, code monitoring, logs, security. We can talk about AI, et cetera.
We've invested over time in developing the platform and are looking to be the single pane of glass for that customer base in managing and remediating their applications.
Got it. What would you say is the most common pain point today that Datadog is trying to solve for their customers as they come to Datadog? Help me solve this. What is that today?
Yeah, it's essentially that you have applications that you're deploying in the cloud that are mission-critical. Think if you're the customer of a video provider or a credit card company, or a car company where you need your OnStar, et cetera. Those applications are ephemeral. They're containerized, and the speed of evolution of the applications has increased. The big thing that Datadog is giving is transparency, so you can see everything, everything that affects the functioning of the application. The more that you can see in one place, the more you can understand that service or that application, and the more you can optimize it or remediate if something goes wrong. That could be anything from code problems to the way you deploy cloud instances to handle bursts and things like that. All of that is very transparent in the Datadog platform.
I think that we are in the age of AI, and things are getting faster and more complex. What do you think the problems might look like in the future, and how is Datadog going to help solve the future of monitoring and cloud workloads?
Yeah, I mean, remember, anything that affects an application, our product plan is to be able to see it and help remediate. I think there are a couple of different ways. The first that you're seeing in our numbers most is that a lot of the high growth and what you're reading about are from what we call AI-native companies that are providing services to their customers. Those are modern software applications where the uptime and the functionality is very important. We've been able to monetize those demands, those workloads. That would be similar to what we saw for other types of Datadog customers, but they're innovating very rapidly and growing very rapidly.
When you go to the customers and their workloads, the more that they are putting in large language models in their applications, the more that it gets more complex, and Datadog will be monitoring that. We have products, large language model monitoring, and it's just another thing in the application that needs to be looked at. We also think that if software development, and it's happening already, can become more efficient and quicker, you'll have more applications being launched. You'll have a quicker migration to modern architecture, and that'll help us. Within our platform itself, what we're doing is engineering AI in order to get more information more quickly, have models that simulate what's happening, and at some point along the way, give solutions and, in the future, potentially auto-remediate.
That is what you see, I think, in a number of different software vendors that they are putting that in their applications, and clients are starting to experiment with it. That is the other. It is a multifaceted answer. We think it is going to change a number of different things, and I just went through them.
Yep, yep. I'm glad you mentioned AI. It's a great segue into my next set of questions around AI. Datadog is one of the companies out there that gives an AI number. You guys are clearly benefiting from the AI trend out there, and you give a percentage of ARR coming from these AI natives. I know you guys think of Datadog as a holistic business, but I know a lot of folks in the room think about it as the AI versus the non-AI part. How should we be thinking about the growth potential specifically from the AI, which seems like it's fast, but also, maybe more importantly, the non-AI native? What does growth feel like, look like, and what's the potential?
Yeah, definitely. In this case, AI would be a customer cohort, like fintech or neobanks or SaaS software companies. Essentially, it is an end market, and we monetize on our clients' activity and workloads. It is very transparent. The fact that that is growing more rapidly than some of the other sectors is completely a reflection of the demand for their products and the workloads. That should make sense because it is from a very small base, and it is still a small part of our business. From all of the companies you follow and everything you hear about, there has been a lot of investment in AI. It makes sense that companies that specialize in it would have rapid growth. That is what is driving it. I think to the extent that continues, that will continue to drive rapid expansion.
We also believe that what will happen next, we do not know when, is that there will be more integration of large language models and AI into the non-AI tools companies, you know, the auto companies and the video companies and the fintech companies. They will start to develop applications and put them in production. We are still early. There will be another set of demand that happens from the non-AI natives. There are some big winners you all know about, and they are essentially the infrastructure horsepower behind the investment in AI. As you said, we have been the solution to monitor their workloads, and that has been a good growth driver.
The non-AI natives, all the other industry groups: what we've had, coming off the back of COVID and, I would say, the bursting of the exuberance, or whatever you want to call it, is first a fairly widespread optimization of overuse or overspending and things like that. We had a period where our net retention went down off of very high levels. Since that time, we describe it as a stable market where, and I know we'll get to this in another question, there's a very long-term trend on the migration of applications from legacy technology and on-premise to the cloud and to modern applications. That's driving the business long term. We've also had a balance against that in cost consciousness, involvement of procurement, looking at return on investment and things like that. We have had a fairly stable growth trend.
You know, it's growing around 20%. It's been growing that way for a while. That is driven by the long-term trend in migration to cloud applications and our development of our platform and cross-selling. We have a small but very rapidly growing segment, which are these AI tools companies, and then everyone who's not an AI tools company, who's had stable positive growth, so you have those two things balancing each other.
We will definitely touch on the cloud migrations, but I do want to round out the AI conversation. I think I know the answer to this, but I want to hear it from you. Within your customer base, how far along are they within the AI, Gen AI journey generally? Within the customer base, are there certain types of customers or verticals or geography where you see generative AI initiatives maybe coming along stronger than other sectors out there?
Yeah, I think we're early stages. There's a lot of training going on. There's a lot of deployment of AI in internally facing applications, you know, like your search, your email, your sales intelligence, and things like that. We have been very early on in our type of application, which is how you deliver a digital business and having trust in models that will work and not hallucinate, et cetera. We've given a number of metrics showing that the activity is doubling. I would say that tends to be focused again on companies that are cloud progressive. The leaders there would be the same leaders that were the leaders in the cloud-native cycle where their whole business is delivered in the cloud.
Their IT infrastructure is modern, and they're moving a little faster than more traditional industries in putting large language models and AI into their customer-facing applications. I wouldn't say it's geographic or industry-based; I would say it's cloud nativity that is driving that.
You mentioned something interesting there. The activity is doubling. Can you dig into it a little bit?
Yeah, so we have, again, we gave the metric about the AI tools companies. Why? Because most companies are, instead of building it themselves, using APIs to call out to all of these names that we see being funded at larger and larger scale. That is where the activity is happening. That is where the workloads are happening. We also are integrating with a lot of different places and have an LLM product. I think we said in our last earnings call that, in terms of the number of customers who are using that, it doubled over the last year, but it is still in the low single digits of our customer base. We also said we have been building these integrations. What that means is we can see where clients are sending us data from the sources of large language models.
The use of those integrations is also doubling. That is part of the platform. In other words, we do not charge based on integration. That is the enabling of the platform to handle these workloads, and that is what we are investing in.
Got it, got it. I know customers are beginning to put generative AI experiences into production. You guys tend to, I guess first question is, do you monetize when they're training? When they flip the switch to production, do you see an uplift in kind of workloads and the way that you're able to monetize those workloads for those customers?
We mainly, and really only, get paid in production. We monetize production environments. If they're training, if it's an R&D setting, if they don't have applications, we're generally not monetizing. It's a meter. If they're using our product, then we get monetization as soon as they use it. That's our model. The fact that we're telling you that there's an upward trajectory, but it's still a small percentage of our customer base, would be probably highly correlated with whether those customers have generative AI in production environments rather than training, testing, research.
When you do see the customers in production, are there a certain set of products? You guys have, I think, 24 products out there. Is there a certain set of those 24 that are the most commonly used tools? I know margins are kind of a big topic for you guys right now. Maybe talk about for those AI-specific products that are most well used, do they carry different types of margin profiles out there?
I think there's a misconception. Basically, in an application, you have the infrastructure: the servers, the containers, the serverless. You have the application running, you have the code, you have databases, you have network, you have security. Those are all our SKUs, essentially. Those are all the things that affect an application. The answer to the question would be the same as for all our customers. The most common use would be the big three: metrics, traces, and logs. Metrics is infrastructure; APM would be code and application; and then logs. After that, it would be the digital experience, the RUM, the synthetics. After that, we said it's security and database. This is a false distinction. Essentially, AI enabling or generative AI enabling of applications means there's something else that's affecting the application that needs to be monitored. Logs are logs.
I mean, logs of an LLM or logs of code, you know, it's all the same. It stays the same: those big three, complemented by the ones I mentioned. As far as margins are concerned, there's also, I think, a lot of misunderstanding out there. Essentially, our margins vary a little bit, but our gross margins on products are roughly the same. Why? We price them that way. We have a good understanding of compute, storage, and everything. We've been relentless in innovating on the platform, managing the cost side, and also pricing so that essentially the average price points have not moved. I'll go to where that might be different in a second. There's really nothing about a generative AI-enabled application versus a non-AI-enabled one.
There's also nothing about an AI tools company that's different from our other software companies other than the growth rate. Where we do have pricing variation, and it's right out there, is we basically price on volume. We didn't invent this. AWS does this. You price based on volume and term. Larger customers will have a lower unit cost per product because it's all off a calculator. On a weighted-average basis, our unit cost hasn't moved very much in a long time because we generally have smaller customers coming in. We have a very broad customer base, and it just balances off each other. There's nothing different. Now, if we had only 10 customers for $3.5 billion, we would have a different price point because we wouldn't have any small customers. But we have a distribution. So for the most part, it evens out.
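The volume-and-term pricing "off a calculator" described here can be sketched as a simple tiered discount function. The tiers, discount percentages, and prices below are entirely hypothetical, invented for illustration; they are not Datadog's actual price list.

```python
# Illustrative sketch of volume/term-based unit pricing: larger volumes and
# longer terms earn bigger discounts, so big customers see a lower unit cost.
# All tiers and numbers are hypothetical, not an actual vendor price list.

def unit_price(list_price: float, monthly_volume: int, term_months: int) -> float:
    """Return a discounted unit price from volume and commitment term."""
    # Hypothetical volume tiers: (minimum monthly units, discount)
    volume_tiers = [(100_000, 0.25), (10_000, 0.15), (1_000, 0.05), (0, 0.0)]
    volume_discount = next(d for floor, d in volume_tiers if monthly_volume >= floor)
    # Hypothetical term discount: annual commitments earn an extra 10% off
    term_discount = 0.10 if term_months >= 12 else 0.0
    return round(list_price * (1 - volume_discount) * (1 - term_discount), 4)

# A small customer pays close to list; a large annual customer pays less per unit.
small = unit_price(15.0, monthly_volume=500, term_months=1)       # 15.0
large = unit_price(15.0, monthly_volume=250_000, term_months=12)  # 10.125
```

With many small customers entering at near-list prices and a few large ones at discounted rates, the weighted-average unit price across the whole base can stay roughly flat, which is the point being made about the distribution evening out.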
Got it, got it. Maybe switching gears a little bit to the cloud migration story. This is kind of where Datadog grew up, right? Tell us a little bit about cloud migrations. Where are we with that with most of your customers? How do cloud migrations sound like today versus maybe a couple of years ago?
Yeah, I mean, I think I said that about, I don't know, 20-30% of applications are now in the cloud. The vast majority of applications right now are in legacy technology and on-premise. We said this trend is upward and long because essentially the customers are either cloud natives, meaning that everything's going to be in the cloud, or they're larger enterprises or legacy companies, and they're going to be on a journey. Most of them are prioritizing the most important applications. It's going to take a long time. Essentially, when you look at the hyperscalers, ex the effect of AI, and you look at Datadog, what you see is an expansion rate of workloads that has been going on for a long time. Now, we have something else that's complemented that.
The reason why our growth rates have been higher than theirs, removing AI, is because we only had one product seven or eight years ago, and we're building a platform. What we're doing is cross-selling and consolidating market share. Our infrastructure and some of our products are very, very aligned to cloud workloads. Then we've had another growth driver of taking APM from zero to over $750 million, logs from zero to over $750 million, and taking this market share and consolidating. That's the main reason why our growth rates have for a long time been superior to the basic growth rate of workloads. The growth rate of workloads is very important. It's like the underpinning, but it isn't everything. There's a lot of other products on top of that to monitor the workloads.
Yeah. I'm going to ask you the AWS correlation. I know a lot of investors kind of look at AWS growth rates versus Datadog. One thing that I think about is AWS is a revenue number, and you just said you really monetize on workloads. What?
It's the same thing. I meant where the workloads are. We're priced based on workload units. Same thing.
What I was going to ask was, I know a lot of people look at that revenue number, that growth number from AWS, but is there something else out there that we should be looking at or thinking about from a workload perspective or visibility of workloads out there that could help us frame how much new workload is being created out there that's out there for you guys?
Yeah. And it's complex because of the AI thing. I mean, I think Microsoft maybe does give you a little clue of what the growth rate is from AI. AWS doesn't. It's a little hard because we have two numbers out there, right? We have our top-line revenue growth in the mid-20s, and then we have the effect from AI. I think I just listed them. Essentially, our infrastructure product, which is container, serverless, that type of thing, network, is a cloud cost and is correlated, let's say, to the mix of the business between AWS, Azure, and GCP, of which AWS is still the largest set of workloads we're monitoring. On top of that, we have those other products and the consolidation in the platform.
It's the platform effect and the consolidation and the SKU launch that has made Datadog's growth rate sustainably larger than those for a long time.
You guys have a lot of enterprise customers. I think you got 45% of the Fortune 500, so you're in with a lot of them. What is the expansion potential within the Fortune 500 still? It's a lot of green space. How do you go about attacking that expansion potential?
Yeah, that's a good question. I would say that there's two ways to cut this. You have this statistic that you said, and we also said that we have upper 3,000s of customers who are paying us more than $100,000, and that accounts for the upper-80s percent of revenue. We're doing pretty much similar things with the cloud natives and the enterprises. We're essentially winning use cases. We are consolidating tools so that we take the wallet. We are expanding business units, and we're doing that through an enterprise sales team that has been growing, and that's one of our major areas of growth, growing around the world. I think we said it's been growing in the 30s percent. Sales engineering to support that, channel relationships to support that, and marketing support. We've been focusing very significantly on expanding the number of enterprises.
Within enterprises, do not forget, we are land and expand. We land, we establish use cases, we spread them out. We have been doing that for a long time successfully, and that is how we are doing it.
What does the enterprise kind of direct versus partner mix look like for you guys today? Is there some sort of target that you're trying to get to in the future, whether it's 50-50, 60-40, 40-60? Is there some sort of way to think about direct versus partner?
The most effective channel has always been the marketplaces for the cloud. There's lots of reasons why that's the case. That tends to be around 20% of our business. That's the most productive channel. Why? They're deploying cloud, and then they need it to be monitored. We have very strong partnerships. Essentially, it's not one size fits all. In the federal government, it needs to be all, or mainly, through channels, and the same in certain countries and other places. The hyperscalers are everywhere; beyond that, we're doing it bottoms up by country. I think it'll depend very much on the country. I don't think we'll ever be like the security business. It's not because of us.
It's because of the way that DevOps buys, which is more land and expand and direct, versus a highly centralized motion, let's say, in security. We are going to go that way. Anybody that can influence our clients to buy, particularly in some of these use cases like the government or certain countries, we are going to partner with.
What is your federal strategy? You mentioned partners. How much are you leaning into it? I guess the key question here is, what are you seeing from the federal side? It's one of the questions that we get on Datadog. What is going on with federal?
For better or worse, we are not dependent on the federal government at this point. We have a very small federal business. We started in the last few years. We've moved up from FedRAMP 1, 2, 3, and then I think there's been press announcements. We are still investing in the infrastructure. Channels are not native to us, and the federal government tends to be a little more conservative in their cloud investment. It hasn't been, I mean, it's good and bad. We haven't had it as a major growth driver. If they're cutting back, it's not going to affect us very much. Someone asked me, and I thought it was interesting to think about, is this sort of trend on efficiency in the federal government, might it end up pushing the federal government to modernize their infrastructure and applications?
Because they're on one end of conservatism now, maybe it will in the future. Maybe this will mean that in order to really become efficient, you have to modernize your infrastructure like all companies do, and maybe that will happen with the federal government. What we're doing, I would say, is a consistent slow build, where we win some good pieces of business, but we're not in a position where the Fed has been a major driver, and therefore whatever might be happening in Washington is not a major effect for us.
I wanted to maybe move to security. It's been a strategy for you guys for a couple of years. Maybe let's back up a couple of years and origin story of security with Datadog. Where did it start? Where are we at today? How are you investing to really take advantage of that opportunity in the future?
Yeah. The origin story is the belief that the world is going, over time, towards DevSecOps. This isn't endpoint or network security; this is security of digital applications. It has three elements to it. It has cloud security, which is the security of containers. It has app security, the security of code construction and deployment. It has cloud SIEM, which is essentially using logs and the workflows around that to diagnose security threats and remediate. Our theory was that we have all the data, we have the metrics, traces, and logs, we have the client environments, and the world, like what happened in DevOps, will eventually come together so that there will be significant use cases for developers in DevOps to use security. That is how we started this.
I think that it's a build, but there are also some very different things about the security business. Then I'll go to where we are. We have a pretty good size security business. It just is not as big as our infrastructure, logs, metrics, or APM. Essentially, there is more centralized control that is more persistent in security budgets than there was in DevOps; it's probably where DevOps was a number of years earlier. Two, the CISO and that function tends to be a really protective gatekeeper. Three, it's almost all sold through channels. Okay? Those are hurdles that are not just product hurdles to overcome. We've been working on them in a number of ways. We do have a business that has found a path in that DevSecOps to handle certain use cases.
We're confident that that's a good decision and that it will grow over time. We're not resting there. I think we're essentially working on developing the channels, developing the more centralized marketing, everything from being at RSA to having CISO summits. I think what we've realized is that there's a lot of synergy between observability logs and cloud SIEM. The DNA, and this is one of the reasons we got into it, the DNA that powers SIEMs is really logs. There's been some big disruption in the market with the Splunk acquisition, et cetera. What we're doing is focusing this year on how we can really accelerate that cloud SIEM use case, and the products a little further on. We're getting results.
I think that's a place where the intersection between the security logs and the observability logs is closer together than others. That's a strategy we're trying to deploy to accelerate our security business.
More specifically on the security side, can you tell us as much as you can what that security salesforce looks like today? How do they cross the hallway? What I mean by that is, right, you got your IT ops buyer in one office, and you got the security guy in the other office. Most often when we talk with users of Datadog, we ask them about the security products. They're like, "Love it." I say, "When are you, how are you buying it?" They're like, "I'm not. It's this guy over here." How are you crossing that hallway?
Yeah. I think that the place where we have succeeded is places which would be more cloud native, where they're closer together. But let's leave that aside right now. What we're trying to do is build from our champions in observability. What we are working on is developing use cases that are adjacent. I think the channel investment will help a lot. We've invested in sales engineers and product experts, but we have a generalist sales organization. I'll deal with enterprise now. We have not yet developed a specialist sales organization. There are advantages and disadvantages. I think Oli has always said that once we get the product maturity, like you said, wow, your product's on par, it may be we have to have an overlay sales team or some other influence. I think channels will be pretty important to get to that constituency.
Because you're right, crossing the hall is not easy. We've learned that. It's not easy. We're still working, work in process on figuring out ways to do that. I think we're getting better at it for a number of reasons, but I don't think we've conquered this yet. That may be something in the future.
I'm going to try and get it out of you.
Yeah.
I'm going to ask, what percentage of ARR is security today? I guess from a bigger picture perspective, do you think security could be a billion-dollar business for you guys at some point?
We have not disclosed that, so we are not going to do it here. We have disclosed product tidbits. You have assembled them all. I think we said that we have thousands of customers, like in the five to seven thousand. We have a lot of customers, meaning we have a lot of users, but we do not have enough comprehensive use cases. When you look at that from a customer perspective, you see that number may be like in the 20s percent. We have not said that it is 20% of revenue. That means the ARPU, the average revenue, would be lower than we would get for our most built-out products. We have customers that are paying us hundreds of thousands of dollars and millions, but we do not have enough of them. Your second question was?
Could it be a billion-dollar ARR business?
Oh, yeah. I mean, it's a huge TAM. This is, I mean, you can see so much evidence from not only CrowdStrike and Palo Alto, but also companies that have emerged like Wiz and others. Yes, we entered it because it's a TAM that is very large. When you get to be our size, having products that are in the hundreds of millions to a billion are what you have to aspire to. Yes, the opportunity is there. That's why we're making the investments in it.
Got it. Got it. Enough on security. I want to move to Flex Logs.
Okay.
Something that you guys highlighted a lot on the last quarter call. Maybe why so much of a highlight on Flex Logs on the last quarter call? How are you feeling about that business as a growth driver?
What we tend to do, if you've known us for a long time, is give tidbits. We give tidbits that evidence our platform strategy, our ability to expand our TAM, and the rapidity of growth in a business. The fact that Flex Logs got to $50 million so fast is one of those. What we said was we've been very intentional about that. We are doing that because our existing product and our infrastructure are optimized for most of the observability use cases. There are use cases in observability, and there are other use cases in security and IT or whatever, where the log pricing and structure has to be different because of the balance with our main business, which is real time. You've got to get it quickly. You're not doing compliance. You're not storing it for lots of years. That's one architecture.
We did Flex Logs to monetize an architecture that has some more flexibility. It is working. That is why we are giving evidence that we are penetrating incremental use cases. It is really part of that whole thing about security logs and SIEM, which we are saying we are having good traction in. We think it can be a growth driver, but it is early days. That is why we are excited about it.
Yeah. Maybe a question on CRPO. Very good growth in the first quarter, 30% year over year. Clear signs things are working pretty well in go-to-market execution. How do we think about CRPO growth correlating to revenue growth over the long term? And how do we think about, within CRPO, the mix between new logo versus expansion?
Yeah. I mean, CRPO growth, we've said over and over again, will have variability around when bills go out, and it will, on a weighted-average basis, head back to revenue growth. You saw our revenue growth was in the mid-20s. Whether CRPO growth is higher or lower, what we've said to everyone is don't look at that. It's basically a timing thing. Getting back to ARR growth and revenue growth, which are very correlated, I think we put it at about 75% of our business coming from existing customer growth; it's varied in different quarters, but it's been in sort of the 75%-80% range, with 20%-25% from new customers. That's moved around a little bit. It's in our 10-Q. That is the answer on ARR and revenues.
I think you'll see CRPO and RPO vary based on billings, but we don't recognize revenue on billings. We recognize revenue on usage. ARR, or that revenue, is the metric on usage.
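The billings-versus-usage distinction described here can be sketched in a few lines: a customer is billed for credits up front (which shows up in RPO/CRPO when the bill goes out), but revenue is recognized only as metered usage draws those credits down. The class, field names, and dollar amounts below are illustrative assumptions, not Datadog's actual accounting system.

```python
# Sketch of usage-based revenue recognition against prepaid credits.
# All names and numbers are hypothetical, for illustration only.
from dataclasses import dataclass, field

@dataclass
class CreditContract:
    credits_billed: float              # billed up front, e.g. $2M of credits
    credits_used: float = 0.0
    monthly_usage: list = field(default_factory=list)

    def record_usage(self, amount: float) -> float:
        """Meter a period of usage; recognized revenue equals credits consumed."""
        consumed = min(amount, self.credits_billed - self.credits_used)
        self.credits_used += consumed
        self.monthly_usage.append(consumed)
        return consumed

    @property
    def remaining_performance_obligation(self) -> float:
        # Billed but not yet recognized: this is what moves with billing timing.
        return self.credits_billed - self.credits_used

contract = CreditContract(credits_billed=2_000_000)
recognized = [contract.record_usage(u) for u in (150_000, 180_000, 160_000)]
# Revenue recognized so far: 490,000; still in RPO: 1,510,000
```

This is why, as the answer says, a billing that lands in one quarter can swing CRPO without telling you anything about revenue, which only moves as the meter runs.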
Got it. Got it. I want to go back to gross margins, kind of an area of debate.
Thought you'd never ask.
Coming out of the first quarter, kind of around spiky usage.
Yeah.
Talk to me a little bit about the spiky usage. Where do you see it? How did it affect gross margins? How do we think about that effect going forward?
Yeah. Let me just step back and say that for a long time we have said that our gross margins will vary, sort of plus or minus, around the 80% mark, which is really good. That they would vary based on workloads and how we are introducing new functionality. That when we're introducing new functionality or new data centers, that would have a suppressive effect. Then we optimize it. That has been in the company forever. We ended up sort of in the middle of that range. That was one of the factors. The other factor is what I would call cloud operations, which is essentially we have to plan capacity. We have to make sure we're available in cloud environments for bursts. I think that we had very high gross margins.
We were at the top of our range, at 84% a year ago. We made very clear we're leaning into growth and functionality, which would have a suppressive effect. Something new crept in, which was that we had patterns of use that were a little atypical, and we did not do the best job managing that. What you should do, and I think we're doing it now, is look at that over time, and when spikes happen, you turn the cloud instances off; this is what cloud ops is. I think we left them on too long. Another confusion has been: what is a spike? Generally, we do credit pricing. You buy, say, $2 million of credits and you use them.
For the most part, the price is the same; it's not like we charge more for a spike. It's just that if it spikes on Saturday night, you'd have more usage for those couple of hours. The real issue here is not the revenue side. The real issue was the cloud management: understanding it, getting more experienced, and being better at it. As you get larger customers, you may see this. It's not about AI; there was a fintech customer, and it has to do with cloud management. You basically have to make sure you're accommodating that, and that may have an effect. This is more about cloud capacity planning and doing what all our customers do, meaning varying between expansion and optimization.
I think we were just a little slow on the optimization that quarter. We're confident, and we repeated on the call, that we'll stay in that range. That's not something we're worried about.
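The credit-pricing point above can be made concrete with a minimal sketch. All numbers here are illustrative (the rate, the usage levels, and the `credits_consumed` helper are made up, not Datadog's actual pricing): usage draws down prepaid credits at a flat per-unit rate, so a spike consumes more credits for a couple of hours but never changes the price.

```python
# Hypothetical sketch of flat-rate credit consumption: a spike means more
# units consumed for a few hours, at the SAME per-unit price as baseline.
RATE_PER_UNIT = 0.25  # illustrative flat price per usage unit

def credits_consumed(hourly_usage):
    """Total credits drawn down over a list of hourly usage units."""
    return sum(units * RATE_PER_UNIT for units in hourly_usage)

HOURS = 168  # one week
baseline_week = [100] * HOURS            # steady 100 units every hour

spiky_week = baseline_week.copy()
spiky_week[150] = spiky_week[151] = 300  # Saturday-night spike, 2 hours

flat_cost = credits_consumed(baseline_week)
spiky_cost = credits_consumed(spiky_week)

# The spike adds only (300 - 100) * 2 = 400 extra units at the same rate.
print(flat_cost)                # 4200.0
print(spiky_cost - flat_cost)   # 100.0
```

The spike's entire revenue effect is those 400 extra units; the pricing curve itself is flat, which is why the issue he describes is cost management, not billing.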
Got it. Got it. You did mention spiky usage. Doesn't that equate to more revenue for you guys?
Not really. No. What I'm saying is that if you have a restaurant that averages 200 diners and one evening you have 300 diners, it's not going to affect much; it's just that one moment. But if you build space for the 300 and leave it up the whole time, your cost structure isn't optimized. It's not really a revenue thing; it's really cloud cost management. By the way, we're talking about something around $8 million to $10 million on our whole cost structure. We're really good at optimizing; it's that last $8 million, $10 million, $12 million where, yeah, we probably had overcapacity left on too long.
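The restaurant analogy maps directly onto cloud capacity costs. A minimal sketch, with made-up instance counts and hourly rates (the `weekly_cost` helper and all figures are illustrative assumptions): provisioning for a two-hour spike is cheap if you scale back down, and much more expensive if the extra capacity is left running.

```python
# Illustrative capacity-management sketch: the cost difference between
# scaling down after a spike and leaving spike capacity running.
HOURLY_COST_PER_INSTANCE = 2.0  # made-up hourly rate

def weekly_cost(instances_by_hour):
    """Total weekly spend for an hourly instance-count schedule."""
    return sum(n * HOURLY_COST_PER_INSTANCE for n in instances_by_hour)

HOURS = 168
baseline = [20] * HOURS  # steady-state fleet of 20 instances

# A spike at hour 150 needs 30 instances for 2 hours.
scaled_down = baseline.copy()
scaled_down[150] = scaled_down[151] = 30        # scale up, then turn it off

left_on = baseline[:150] + [30] * (HOURS - 150)  # extra capacity never removed

print(weekly_cost(scaled_down) - weekly_cost(baseline))  # 40.0
print(weekly_cost(left_on) - weekly_cost(baseline))      # 360.0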
Maybe in the last minute here, I wanted to touch on operating margins and levers for margin expansion. One thing that's been an area of debate is that you guys maybe over-earned a little on margins in the past, and margins had to come down this year because you're investing for growth. Moving forward, how do we think about levers for operating margins from here, specifically on the S&M front and R&D too?
It's a great question. Yeah. In the kind of volatility we've had in the market in this environment, we've always said that since we're consumption-based, we can't plan it perfectly. About a year ago, we said we thought we had probably pulled back a little too much, that there was probably too much optimization, and we're kind of making that up. We see lots of opportunity. I think we're in a period of equilibration. We gave a margin target of 25% plus, and we said that cash flow margins are 200 to 300 basis points above that. We already proved we could get there, right? This is a decision in our hands. There's a lot of scalability in the business. Really, it's a matter of: do we have territories that we haven't covered?
Do we have products that we need to develop or not? Looking at the guidance we've given this year, you can see where we're thinking, which is around the 20% mark, not precisely. From there, there's a lot of decision around whether you grow the cost structure pro rata with revenues or let some economies flow through. What we're really trying to do is maximize long-term cash flow, and the biggest lever there is revenue. That is what we're doing in the company, and I think we've been really good at it. As you said very well, I think we overcooked it, and now we're making it up. I think we'll have a pattern that's more like what you might recognize in one of these companies.
Got it.
Thanks.
David, we're out of time. Thank you. This has been great. Thanks a lot, buddy.
Thanks. Thank you.
Thanks a lot.