Feel a little surrounded by ferns.
Yeah.
Not just between two of them.
It might be Between Two Ferns with Zach Galifianakis.
Absolutely. It's three ferns.
Three ferns.
Thank you. Good morning. My name is Brent Bracelin. Today, we're very pleased to have Co-founder and CEO Bill Magnuson. We have CFO Isabelle Winkles. Really appreciate you guys supporting this conference for us. Today, three topics I wanted to go through in this session. One, this idea of data architecture as a competitive advantage in the AI era. I think it's a super important topic to cover here. Two, business momentum. You're one of the few companies that have shown an improving outlook for growth, margins, and profits. Three, you started to talk about a replacement cycle. We're about four years past the IPO. Love to understand the nuances of what's changed around those enterprise replacement cycles. On the data architecture, what was the origin story behind building a stream processing engine when you started?
How do you think that plays to maybe your advantage in this AI era?
Yeah. Origin story: we go back to 2011, when we founded the company 14 years ago. When we first started, we were focused on mobile. Mobile apps didn't have marketing teams yet because back in 2011 they didn't have business models; none of them were making any money yet. My co-founders and I also didn't have any marketing technology background. We weren't held back by the architectures that existed in the space because, A, they weren't relevant. We weren't just building for marketing use cases; we were building for mobile engagement from first principles. And B, our experience wasn't from there. Jon Hyman, who is our CTO today, and I, I was our CTO at founding, we were the two technical co-founders. We came from the hedge fund industry.
The system actually was architected more like a high-frequency trading system. Think about what a high-frequency trading system is trying to accomplish, and this was high-frequency trading circa 2011, before the modern race to ever-lower latency on FPGAs, so a little different. You're effectively trying to take in new economic indicators, new quanta of data, as they flow through. You then use that to update your model for the current state of the world. You then take a strategy and apply it to the delta on the model to determine whether you need to take an action, a buy or a sell. You need to look at how that might move the market, because the opportunity to trade may still exist or it may have closed.
You write it all down and you process that reporting in order to evolve the strategies over time. The basic idea is that you need to treat every single piece of data as it flows in very seriously because it may result in an action that you need to take. That action needs to be executed on as fast as possible. There are a lot of different overlapping strategies that might apply based off of a single piece of data. When you think about the customer engagement data processing problem, it's effectively like we're integrated into our customer's products so that we understand the real-time evolving context coming from each individual user that is generating a huge flow of individual data points as those users take different actions. We now call that all first-party data, right?
That flow of data and the ability to derive insights from it immediately has always been an important part of our differentiation. That data flows into the system, and then we take all the different customer engagement strategies that today have been programmed into Canvas, right? Effectively, new data point comes in, we update the user model, and then we take all of the canvases that have been programmed by our customers and we apply them to that new state of the world. That's going to then cause those canvases to take certain actions. They might still be in a delay or a wait. They might trigger a message. It might move you to the next part of a conditional. It might invoke some sort of machine learning optimization or other sorts of prediction that might feed into something else.
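The pipeline Bill describes, event in, user model updated, every programmed strategy re-evaluated against the new state, actions emitted, can be sketched as a minimal event-driven loop. All names here (`StreamEngine`, `abandoned_cart`, the event shape) are hypothetical illustrations of the pattern, not Braze's actual internals:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class UserProfile:
    """In-memory model of one user's evolving context (hypothetical shape)."""
    user_id: str
    attributes: dict = field(default_factory=dict)
    events: list = field(default_factory=list)

class StreamEngine:
    """Minimal event-driven engine: each incoming event updates the user
    model, then every registered strategy is re-evaluated against the new
    state of the world and may emit actions (e.g. deliver a message)."""

    def __init__(self):
        self.profiles: dict[str, UserProfile] = {}
        self.strategies: list[Callable[[UserProfile, dict], list[str]]] = []

    def register(self, strategy):
        self.strategies.append(strategy)

    def ingest(self, event: dict) -> list[str]:
        # Update the model for the current state of the world...
        profile = self.profiles.setdefault(
            event["user_id"], UserProfile(event["user_id"]))
        profile.events.append(event)
        profile.attributes.update(event.get("attributes", {}))
        # ...then apply every strategy to the new state.
        actions = []
        for strategy in self.strategies:
            actions.extend(strategy(profile, event))
        return actions

# A "canvas"-like strategy: trigger a push message on an abandoned cart.
def abandoned_cart(profile, event):
    if event["type"] == "cart_abandoned":
        return [f"send_push:{profile.user_id}:cart_reminder"]
    return []

engine = StreamEngine()
engine.register(abandoned_cart)
engine.ingest({"user_id": "u1", "type": "page_view"})            # no action
print(engine.ingest({"user_id": "u1", "type": "cart_abandoned"}))
# → ['send_push:u1:cart_reminder']
```

The key design choice mirrors the trading analogy: work is driven per-event as data arrives, rather than by a periodic batch job scanning the whole profile store.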
Of course, you need to do the equivalent of making the trade. For us, that is the delivery of a message or the personalization of a product experience. I think we were also very early to the idea that the value in the data is in the ability to process the flow of it, not in just locking it up in a data warehouse forever. We even put that front and center so much that some investors, as we IPOed, misunderstood and thought that we didn't even store data. I was like, no, we store data; we process north of 10 trillion data points across our user profiles.
The message that we're trying to get across to the market is that the real-time context of the customer as it evolves around them, and the ability to make sense of that data flow as it's being generated, is more important than just storing the data in some batch-oriented relational data store. Certainly, there is important data in the enterprise that lives in large relational data stores and cloud data warehouses. We have the Braze Data Platform, which syncs trillions of rows of data every single year at massive scale in order to augment that. The important thing that's new here is that there's this flow of data, and we can live in the flow of data.
We can do things that are real-time classification of the flow of data as distinct from running a batch query that generates an audience, which is a static noun concept that is always necessarily out of date. We actually can live in that flow of data and generate those insights in real time. I'll even point out that I've been using the term context to describe this for, you know, since before we ever even IPOed, for so long that my marketing team was always like, people don't know what context is. You should just talk about data. Now, of course, context is everything that we talk about when we look at what it takes in order for AI to be able to make intelligent decisions. You need strong context around customers and problems and context around the world. That's just frankly always how we thought about this problem, right?
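The noun-versus-flow distinction Bill draws, a batch audience as a static snapshot that is stale the moment the query finishes, versus classification applied to each event as it arrives, can be shown in a few lines. Both functions and the spend threshold are hypothetical:

```python
# Batch: run periodically over stored profiles; a static "noun" that is
# necessarily out of date between runs.
def batch_audience(profiles: dict) -> set:
    """Snapshot of user_ids whose recorded lifetime spend exceeds 100."""
    return {uid for uid, p in profiles.items() if p.get("spend", 0) > 100}

# Streaming: fold each event into live state, then label this user *now*.
def classify_on_event(state: dict, event: dict) -> str:
    """Update state with the event, then return the user's current label."""
    p = state.setdefault(event["user_id"], {"spend": 0})
    p["spend"] += event.get("amount", 0)
    return "high_value" if p["spend"] > 100 else "standard"

state = {}
print(classify_on_event(state, {"user_id": "u1", "amount": 60}))  # standard
print(classify_on_event(state, {"user_id": "u1", "amount": 60}))  # high_value
```

In the streaming version the user's classification flips the instant the second purchase arrives; a batch audience would not reflect it until the next scheduled query.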
You had these design criteria originally optimized for this flow and context, applied to really multi-channel engagement.
Yep.
We have this new thing called AI and agents. What does that core tech stack give you in an agentic world? Do the flows change?
Yep.
What is it? Does it give you a leg up? Is it a different process? Walk me through connecting that multi-channel to agentic.
Yeah, a couple of things there. First, I think that if you go look at, for instance, our IPO materials, you'll see the customer engagement stack that we broke down, which has data ingestion, then classification, orchestration, and personalization that lives kind of in the middle. That's where intelligence is being applied. The top is like, let's ingest data and have access to the context. The middle is where intelligence gets applied. The bottom and the action layer is where you actually execute, you know, the message delivery or personalizing the product experience or what have you. We broke that down as listen, understand, and act. You heard me in the most recent earnings call.
I think when we look at the evolution of that, it's time to move from this listen, understand, and act, which has great connections to the humanity of the problem, to looking at it as context, intelligence, and then interaction, right? We're moving to this new vernacular of context, intelligence, and interaction. The investments into the intelligence component of it are, of course, we're benefiting immensely both from the compounding investments that we've been making over time in things like model reinforcement learning, as well as the advances in the large-scale language models or the large-scale world models as we look at them now.
Data points can get more rapidly contextualized because these world models can derive more semantic meaning from them, and modern reinforcement learning can make more compound decisions on behalf of the marketing strategy, and do so at a higher layer of abstraction, where you can trust the system to autonomously be responsible for more compound sets of decisions. What that implies is that when you're executing a customer engagement strategy, you can rely on machine learning optimization to take responsibility for a wider and wider swath of the decisions that need to be made, right?
That has implications for the role of the marketer where you get out of the drudge work of babysitting individual campaigns and doing manual A/B testing, and you actually get up to the level where you are effectively looking at, you know, you're managing a stable of agents and operators that are being fed by models on your behalf to be able to execute these strategies. That has implications both for the customer experience because you're getting higher levels of relevance, and there's a stronger understanding of the data that you are providing to the system. It is doing a better job of adapting to the things that you care about and driving relevance. That, of course, translates into better performance and results for brands. From a marketer perspective, you're operating at a higher layer of abstraction and doing so with higher levels of productivity.
I think that an important evolution that's happening alongside this, though, a year ago, we were talking about the importance of marketer productivity being enhanced because they have a variety of, you know, what we're now all calling agents or operators that are there to help them with copywriting and with, like, variant generation and, you know, translation, checking for cultural appropriateness. There is a whole bunch of small tasks to do that we surrounded the marketer with these more intelligent helpers that were making them as an individual more productive and effective. I think that we're now at the next stage and we're really excited to share more about this at Forge.
To give you kind of a preview of where we are going: it's important to zoom out from that and recognize that these individual units of intelligence, the models, the operators, and the agents, are not just being leveraged by an individual to make that individual more productive. They are actually being imbued with the creativity, the strategy, and the knowledge of our teams and of our brands, right? They can then operate with an independence and an autonomy within these systems that survives above the individual marketer and the individual user. It actually becomes an intelligence asset for the brands that leverage those.
We're looking at that with a new concept that we're introducing, which we are referring to as composable intelligence, where much like a lot of the last kind of decade has been about composability within your data universe. Obviously, the Braze Data Platform is an important entrant into that space to allow for more composability around this context. This idea of composable intelligence is where you're able to imbue the models that you're training, whether those are models that understand your brand or they understand your business priorities or they understand the kind of semantic meaning behind customer data and the journey that they're on, so that you can then use that semantic understanding to be able to prioritize the different campaign actions that you could take and drive your business strategy and how it's prioritized.
It goes so far beyond just guardrails for my brand; you're actually imbuing it with the brand's creativity and intelligence, and then having that persist as an asset that can get plugged into all these different jobs you have to do across the customer engagement universe. A very simple example: take the cultural appropriateness checker and the brand guidelines tool. Those are two tools that a year ago in Braze, you as a marketer writing your own subject lines or your own copy for a message could use as a teammate to double-check your work, or maybe to provide a little inspiration to get you through writer's block.
Now, as you interact with that over time, you're actually imbuing that system as it gets more intelligent with an understanding of your brand and how it expresses itself and what its creativity looks like, right? Actually inspiring it to go and produce other things. Now that production, a simple example is I can go be like, okay, I'm going to start a new canvas and I'm going to start from a template and it's like a forgotten cart canvas or what have you. I'm going to be able to say, and then use this model to be able to personalize absolutely everything in this template, right? The template doesn't just show up as a static thing. It actually comes into existence using the creativity and intelligence that you've imbued into this representation of your brand inside of the system.
Of course, that's also still kind of a manual state of the world because then we jump ahead to decisioning products where, with decisioning products, we're trusting them to make compound decisions around the way that we communicate with customers: when, how, why, what the strategy is, what kinds of content tactics we're going to be able to use in order to get their attention and follow through, and do so in a way that minimizes the cost to the brand in the forms of promotions or negative reactions to the content or what have you. That content generation is something where you can imagine the need to deploy those types of decisioning products in a lot of different parts of the customer journey.
A modern customer journey is highly complex, and you're going to want to be able to actually use composable intelligence to be able to take these different models that you've imbued with this understanding of your brand and then be able to combine that with these other techniques that live in the stream processor to be able to drive more intelligence into the decisions that are being made for things like orchestration and relevance optimization. I think that requires real-time context and understanding that requires investments in first-party data, requires the install points. It needs that high-scale event-driven stream processor, which we have scaled to reliably perform in a way that's secure and cost-effective and high performance even under intense parallel load up to the trillions of data points a year, like in excess of 10 trillion data points a year.
Then flow that into intelligence that will continue to deliver in a way that is more autonomous and at a higher layer of abstraction to be able to make more compound decisions over time, which provides even more leverage for the individuals that use it. As we're able to achieve higher levels of intelligence, it provides more meaningful interaction with the customer. You have to pair that with comprehensive investment in all the different channels. The fact that Braze continues to build out new channels over the last couple of years, we've added landing pages, we added Line, we've greatly expanded the functionality around WhatsApp. We continue to expand across our in-product messaging and surveys capability. We're going to be launching Kakao a little bit later this year. We actually are also really excited to see some of the advances from large language models as well.
When we first started seeing chatbot responses that could get in the middle of a conversational flow, the latency and the cost of invoking those didn't really match up with volumes in the hundreds of billions and trillions, which is where our message volumes are. Over the last year or two, look at something like Gemini Flash. In our testing, Gemini Flash is actually faster than the median Connected Content call that we make, and we do hundreds of billions of those a year. We are now at the part of the cost and performance curve in large language models where we can start to inject that right in the middle of a Canvas program. That has benefits for both the quality of the interaction and the intelligence of the flow in between.
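The constraint Bill describes, only inject a model call into a high-volume send path when it fits the latency budget, is commonly handled with a timeout-plus-fallback wrapper. Everything below (`call_llm`, `personalize`, the 300 ms budget) is a hypothetical sketch, not Braze's Connected Content API or any real vendor client:

```python
import concurrent.futures

# Hypothetical stand-in for a low-latency LLM client call; in practice this
# would be a real API client for a fast model.
def call_llm(prompt: str) -> str:
    return f"[generated] {prompt}"

# Shared pool so a timed-out call doesn't block the send path on shutdown.
_pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)

def personalize(prompt: str, fallback: str, llm=call_llm,
                timeout_s: float = 0.3) -> str:
    """Run an LLM step inside a message flow under a strict latency budget.
    If the model doesn't answer in time, ship the static fallback copy so
    the delivery path never waits on a slow call."""
    future = _pool.submit(llm, prompt)
    try:
        return future.result(timeout=timeout_s)
    except concurrent.futures.TimeoutError:
        return fallback

print(personalize("greeting for a loyal runner browsing trail shoes",
                  "Check out our new arrivals!"))
```

At hundreds of billions of calls a year, the fallback branch is what makes a generative step safe to place mid-Canvas: the worst case degrades to static copy rather than a delayed or dropped message.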
I think historically, when we have released new tools like new Canvas blocks and new generations of the programming language and new channels, our customer base always surprises me in a delightful way around how they leverage those tools. We're going to share more about this at Forge, but this next generation of intelligence-imbued tools that we're going to bring them, I'm really excited for them to surprise us again.
Absolutely. You seem passionate and excited about what's coming. I love the fact that you have this differentiated data architecture now adding a ton of intelligence that's going to really change how people use the platform. Isabelle, for you, it's clear something's resonating. You coming out of Q2, very strong quarter. You actually raised your outlook in both growth and margins. What's changed as you think about a traditional software growth market where we've seen slowdown for a couple of years? You're seeing things reverse.
Yeah.
Help us understand and connect the dots.
Yeah. I think we're seeing things kind of come together across a couple of different fronts. I'll walk through what's helping us on top line, how that's filtering down to the profitability. On top line, there are two aspects to talk about. One is the overall productivity of our sales force. We've seen two quarters in a row here where we've seen an increase in the overall productivity of the sales force. We had been talking about over the last couple of years, actually carrying a little bit of excess sales capacity relative to the productivity that we were seeing. We were okay with that, with line of sight that, you know, we thought things are going to normalize. We are going to improve the overall enablement of the sales team. We're going to do better here in the near term. That's really started to come to fruition.
It's working across a number of different categories. We're doing better across verticalization. Our international strategy is becoming more focused. I think we're just becoming more efficient across a number of different dimensions, and the overall productivity of our sales team is improving. That's on the new business and the chasing of net new dollars. There's another aspect that we're seeing benefits come at the same time, which is on the downsell. Over the last six or seven quarters, we've put efforts into effect here to try to improve the outcomes at renewal.
Right from the beginning, when it comes to implementation and onboarding, we've just done a much better job: no implementation left behind, rapid time to complete implementation, ensuring that by the time the customer is up and running with the tool, they can really maximize their ROI, whereas in prior periods there may have been components that were not fully utilized and not benefiting the customer to their maximum potential. That takes a little bit of time given our two-year average contract duration; it takes at least a year to see it come to fruition in new renewal cycles. We're now at the point, six quarters in, where we're doing a better job with the original implementation, so there's less risk on the downside.
We are also through a lot of that ZIRP cohort that experienced some levels of overpurchasing, which has worked itself out of the system. Those two things work together to diminish the overall risk profile, which means that the resources we have are better able to swarm the pieces of downsell that do need a little attention as we come up to renewal. That all combines to give us lower levels of downsell risk. We're seeing it not only in Q2, but also in the forecast.
The overall posture and energy and sort of effectiveness of the sales team is just sort of, it's a little higher, which is what led us to be able to kind of play through some of that performance in Q2 and raise not only based on the overperformance in Q2, but actually raise even above and beyond that. Obviously, some of that is filtering all the way down to the bottom line because if you think about downsell, the investments that we made historically, they are coming to fruition today. I'm not having to add a salesperson in order to get the better outcomes on the downsell today. You're getting that overall improved efficiency on top line without having to necessarily add resources in the now timeframe. That's all great. We've been making progress on our globalization strategy, leveraging cost-optimized locations.
As it relates to just our overall cost structure, that is something that has been continuing to improve as well. There were a couple of, you know, things that happened recently. We brought on Ed, our new CRO, and we just closed the OfferFit acquisition. Those two things had some element of, you know, hey, we want to be mindful of investments that are necessary to kind of make both of those two things successful. Ed has sort of come in and is really just furthering initiatives that we've already sort of put into motion. He actually has his eye keenly on the ball of greater efficiencies in the overall sales team. We talk about, you know, fewer folks in the box. We want to just be able to service our customers in ways that don't require, you know, more and more folks to kind of swarm these.
On OfferFit, now that we've brought the humans on board, gotten to know them, and figured out exactly where they're going to sit within the organization, we're finding greater efficiencies, not because they're going to leave the organization, but because they fulfill headcount requirements that we were going to add in the future anyway. All those things combined are improving the top line and improving our ability to filter that top line down to the bottom line.
You have a SaaS business model, but your pricing is very unique in that you price from a customer engagement standpoint. Very different from most traditional CPA models.
Yeah.
Correct?
Yeah.
OfferFit.
Yeah.
Walk me through the unit economics as you think about OfferFit. Bill's framed OfferFit as the strategic high ground for you and a potential door opener and cross-sell potential. What's the unit economics of OfferFit look like?
Yeah. OfferFit sells today, there's sort of two SKU types that exist today. The original SKU sells at $250,000 to $300,000 per use case.
What's kind of a platform fee?
Yes, it's like a platform fee. It includes the setup and ongoing maintenance to optimize a particular high leverage use case for a brand. It's going to typically be purchased by an enterprise, maybe a slightly more sophisticated enterprise that has a use case that can benefit from, you know, even small tweaks in the performance of the outcomes can lead to big dollars. The ROI there can be very, very significant. We have customers who come in with one use case and then upsell to two, three, four use cases. There are customers who have, you know, four or five use cases and continue to kind of utilize those and upsell. We're really excited about the cross-sell opportunity that exists. OfferFit came in with about 10 customers that already overlap with Braze, and then 17 or so additional customers that really only have the OfferFit SKU.
They don't currently buy customer engagement from Braze, but obviously we're hopeful that that will transition as well. There's lots of potential opportunity for the cross-sell. There's an additional SKU that is a little bit more down market that sells at about $100,000. That is a bit of an OfferFit light version that has more scalability, but a little bit less sort of customizability and therefore can be sold at a slightly lower price point. Our expectation over time, and you've heard us talk about Project Catalyst, is that there will be a broad spectrum of available product and capability that we will monetize in various ways that go all the way from the highest level of customizability and adaptability to the particular use case, all the way down to highly scalable and lower cost to run and implement. We're really excited about the potential there.
Yeah. I would just add one thing. When you look at the expansion of Braze's product portfolio over the last four years, most of the large expansions from a revenue standpoint had structural margin challenges because they were premium messaging: SMS, RCS, WhatsApp. Those message volumes trade at challenging margins in the market, right? You've got carrier monopolies at the bottom of them, etc. Decisioning, from a unit perspective, has higher gross margin potential because it's only the infrastructure costs, which we already know how to operate at massive scale in a cost-effective way. It's selling performance. Even better than selling consumption, it sells differentiated performance in a way that's provable. We think that has an enduring ability to value sell. Ten seconds, last question, Bill, close us out here. What are you most excited about for next year?
You should come to Forge. We're really excited to share the future of our vision around AI-centric customer engagement. You heard some previews of it today, talking about the context intelligence and interaction and composable intelligence, and the big pushes that we're making in the decisioning space. It's going to be exciting.
Cool. Thank you so much.
Yeah, absolutely.
Thank you.