SoundHound AI, Inc. (SOUN)

Barclays 23rd Annual Global Technology Conference

Dec 11, 2025

Moderator

Thumbs up, and we're ready to go. All right, thanks, everyone. I'm here with the CFO of SoundHound, Nitesh Sharan, and delighted to have you with us today. We'll have a little conversation.

Nitesh Sharan
CFO, SoundHound

Thanks for having me.

Moderator

Trying to get a little spicy here, see what you guys are up to. But let's first talk a little about SoundHound and what growth has been looking like, especially this year and in the last quarter. What are the highlights, and what's driving that as people think about what SoundHound's doing as a platform?

Nitesh Sharan
CFO, SoundHound

Yeah. We continue to show strong growth. Last quarter, we were up 68% year over year. In fact, for the last several years, we've been growing at north of a 40%-50% CAGR, and with some of the acquisitions we've added, we've been crossing into the triple-digit range. The growth is pretty broad-based. We are seeing a lot of traction within the customer service pillar, voice-enabling services, in particular around restaurants and some of the enterprise verticals. We're seeing a ton of customer interest in how to do more, expand more, contain more of the conversation set. We're also continuing to see growth and traction with automotive, albeit with some dynamics playing out there, especially with the tariff backdrop. But we're innovating by bringing voice commerce in as a new way of engaging: getting coffee on the way to work or ordering pizza.

That's rejuvenating a lot of conversations with those OEMs. And more broadly, AI as an enabler for enterprises has just been the conversation for the last couple of years. We have a unique position, led by voice AI but spanning natural language conversations generally, in how we can help enterprises serve their end consumers. So I think that traction has been pretty palpable.

Moderator

Voice agents aren't new. We've had Alexa and Siri. Why is it that you've been able to get into all of these markets where there hasn't been penetration from a lot of those larger incumbent agents? What do you think? What's the differentiator that allows you to do that?

Nitesh Sharan
CFO, SoundHound

Yeah. It's not new for us either. We're a pioneering company. Our co-founders started the company in 2005 out of Stanford, where they did their PhDs in machine learning and speech recognition, in pre-iPhone days. We were ahead of the curve even when we launched our platform after being in stealth for a number of years. At the time, the ability to handle complex compound conversations, running speech recognition concurrently with understanding and intent identification and then holding a much more sophisticated conversation, put us well ahead of how you'd probably interact with Siri or Alexa even today, and this is 10 years ago now. So from a technological standpoint, we've been differentiated.

When we got our first commercial entry and traction in the automotive space, it was because all the OEMs were looking for an innovator with better technology differentiation versus the legacy provider, which at the time was Nuance, and we're seeing that across the board. When we're penetrating restaurants, we win on the strength of great technology, and we have benchmarks, either public benchmarks or our own references, against the biggest LLM players out there. We outperform by 20%-35% in accuracy, at 4x better latency, with a much smaller cost footprint; we can build these on a model that's one-tenth the size of what some of the LLM providers build theirs on. So technologically, we're differentiated, and our architecture is differentiated as well.

We were the first company to bring an OpenAI integration into automotive, through a partnership with Stellantis in Europe. They've now scaled into many brands that are deploying OpenAI integration in the vehicle. We can use our own proprietary models, which we're building and training on our large data sets, and we can integrate with LLMs such as OpenAI's and others. So we can partner with best of breed and compound that with our own state-of-the-art architecture. I think it's all of that that's really allowing us to continue to differentiate ourselves in the marketplace.

Moderator

And when we think about the restaurant use case, where you're rolling up to a drive-through and you're right there at the point of the consumer, when you get inside of the restaurant, where else do you show up? Where else do we see, either from your own development or from acquisitions that you've made, where else do you show up in that kind of environment?

Nitesh Sharan
CFO, SoundHound

Yeah. That's one of the exciting things for us. Restaurants is a huge opportunity. We've now communicated publicly that we're up to 15,000 locations, which in the grand scheme of the opportunity is very small penetration. And by the way, we're in dozens and dozens of languages, so this is absolutely a global opportunity; we're already live on three continents. But to your question, it's not only the front-end, customer-facing voice order taking, where we've really gotten a lot of penetration. We have a couple of other products that expand the price per unit, if you think of it that way. Number one, we have a solution we call Voice Insights, which basically enables the restaurant operator to see how throughput is going in voice ordering.

They can understand what's working, what times of day are better for upsells, or when you really just need to get the speed of service faster. So that solution provides operating analytics that help restaurants run more efficiently. Number two, we have a solution called Employee Assist. Imagine you're a new employee who has to learn how to make that specific coffee, how to clean that machine, or how to follow certain procedures. It's sort of like your own AI operating-procedure training manual, and that Employee Assist capability is deployed across many, many locations. And then we also have a non-order-taking, front-end smart assistant, our Smart Answering service.

For a lot of restaurants that have phones, over 50% of the calls are just spam, and a lot of times they don't even pick up. But for the rest, what hours are you open? Is there parking around you? We can handle and support a lot of those things.

Moderator

And you can also, and I remember when you first told me about this, I thought about my favorite sushi restaurant that I call constantly. They never answer the phone when I'm trying to order, and they don't have a web presence. But for them, you could just be the interface: I provide my order, it immediately goes into the POS system, and there you go, right?

Nitesh Sharan
CFO, SoundHound

Yeah, and you can do everything from what hours you're open, to is there parking around there, to do you have vegetarian food. It could be all types of things. By the way, we have a seamless interface where those things can be coded. We have a template that goes out of the gate, and then anything specific to a restaurant, a particular location, or a particular menu structure can all be custom developed.

Moderator

Got it. And then when we think into the enterprise and we think about customer service and that, where do you play there, and how has that been? If we talk about Amelia as a great example, how has that kind of supercharged that effort?

Nitesh Sharan
CFO, SoundHound

Yeah. First, I'll break that down by industry, and then I can talk about the product suite a little bit. Historically, we were heavy in automotive; up until 2023, 80% of our business was in the automotive space. Then, as we just discussed, we grew aggressively and rapidly in restaurants. And in enterprise now, we're seeing great traction across financial services, healthcare, insurance, and hospitality. So our revenue base is much more diverse: whereas one industry was 70%-80% a few years ago, we now have five industries that each contribute a double-digit percentage of our total revenue.

So the traction, again, with seven of the top 10 money center banks, is the ability to have a conversational agent that can handle "I need to find out my bank balance," or "I need to do a money transfer," or "I'd really like to figure out a more accelerated way of paying down my loan." Those are things that can be handled through a natural conversation interface. What's being disrupted is the traditional IVR infrastructure, where you have to wait for the phone tree to say, "Press one for this, press two for that," and by the time you get through it, you reach an agent queue that says, "Well, an agent's not available. Can I call you back in 45 minutes?" Amelia played at the conversational AI layer.

We've now added advanced voice capability and integration with that architecture. A lot of the large money center banks are developing their own large language models, right? Security and privacy are super important to that ecosystem. So we have, again, this capability where we can integrate with their LLM, we can bring our own models, and for certain use cases where it makes sense, we can bring in the other large language model providers. That's the advanced feature set we're building, and everything now has moved to agentic, and we're leading the charge there. The types of use cases we're seeing work in healthcare, financial services, and insurance are just widening the aperture: what used to go almost automatically to a human, the AI can now handle much more of.

Moderator

Got it. And when we've heard you talk about Polaris a little bit and what that means and what that is, can you just explain what that's all about and why it's differentiated?

Nitesh Sharan
CFO, SoundHound

Yeah. One of our major differentiators, as I mentioned, is technology; we've been pioneers at the front end. Polaris is our latest-generation multimodal, multilingual speech foundation model. It's our own home-built engine for the speech recognition side of voice. If you think of voice AI as multiple parts (speech recognition, natural language understanding, generating a response, and then speaking it back through text-to-speech), this is that first part, speech recognition. OpenAI's Whisper is one of the competitors; Google has its own engine, Microsoft as well. We benchmark against all of them, and we see outperformance versus generic models, where we're 20%-35% better in accuracy, with much lower latency and a much smaller cost footprint.

We've then done further benchmarking at the domain level, in financial services, retail, or general customer service. There we see benchmarks that are 50%, 60%, 70% better in accuracy and capability. A lot of this is because we're training on production use cases, real data, how actual customers call in across different acoustic backgrounds. Sometimes you have a noisy environment, a noisy call center, or a lot of background noise when you're driving in the car. We're able to train in all these different environments, which enables better accuracy. If you can't understand the speech up front, you can't really do transactional work on it. So we have this advanced capability, and we're constantly innovating there.

As I mentioned, it's multilingual too, which means we can handle dozens and dozens of languages; some people flip between languages mid-conversation, and we can handle that. And it's multimodal, so it's not just voice. Sometimes it's voice plus visual, and we have a vision AI capability we're now bringing in. We believe the world is going to be omnichannel, multimodal, and multilingual, and Polaris is leading the charge on that.
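The four-stage voice AI pipeline described in this answer can be sketched in code. This is a purely illustrative toy, not SoundHound's architecture or API: every function name, the canned transcription, and the hours lookup are made up to show how the stages chain together.

```python
# Illustrative four-stage voice AI pipeline: speech recognition ->
# natural language understanding -> response generation -> text-to-speech.
# All bodies are hypothetical placeholders for real model calls.

def speech_to_text(audio: bytes) -> str:
    """Stage 1: transcribe audio (the part a model like Polaris handles)."""
    return "what hours are you open"  # placeholder transcription

def understand(text: str) -> dict:
    """Stage 2: extract an intent (and entities) from the transcript."""
    if "hours" in text:
        return {"intent": "ask_hours", "entities": {}}
    return {"intent": "unknown", "entities": {}}

def generate_response(intent: dict) -> str:
    """Stage 3: produce a reply, e.g. from business data or an LLM."""
    replies = {"ask_hours": "We're open 9am to 9pm daily."}
    return replies.get(intent["intent"], "Sorry, could you repeat that?")

def text_to_speech(reply: str) -> bytes:
    """Stage 4: synthesize audio for the reply (placeholder encoding)."""
    return reply.encode("utf-8")

def handle_turn(audio: bytes) -> bytes:
    """One conversational turn: audio in, synthesized reply out."""
    return text_to_speech(generate_response(understand(speech_to_text(audio))))
```

Running the stages concurrently rather than strictly in sequence, as described earlier in the conversation, is one of the latency optimizations this linear sketch deliberately omits.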

Moderator

You talk about language models. We think about LLMs. You mentioned OpenAI a little bit. How do you interact? Are you competing on some fronts, or how do you operate within that construct, and where do we see you?

Nitesh Sharan
CFO, SoundHound

Yeah. There's competition in general in tech, but I'd say we're partnering with a lot of them. I mentioned we're integrating OpenAI into the vehicle with Stellantis, for example, and we have integrations with other large language models. Again, it's an architecture that can work across any model, even one you build yourself, and we bring our own. There's a lot you don't need a trillion-parameter large language model for if you're really looking at a use case like food ordering; you're probably not going to order your cheeseburger and then ask about the theory of relativity. These are pretty specified use cases. So we can work and partner with the large language model providers, and our specialty in voice gives us differentiation. For us, we're a complement to even some of the capabilities they're building.

Moderator

Cool, and I think we've all had a situation where we're driving the car and we're trying to do something commerce-related, and if we can stay on the road, that's great. If not, we just underperform. What does voice commerce look like for SoundHound? How are you enabling it? What new partnerships have come out recently that are interesting for us to think about? How is that working?

Nitesh Sharan
CFO, SoundHound

Yeah. Our CEO and co-founder, Keyvan Mohajer, was always inspired by Star Trek: how do we voice-enable the world, talk to robots and the coffee machine, and get through life through natural conversations rather than touch, type, swipe, and all that? That was a pioneering vision. Last year at CES, we launched our commercialization of voice commerce. We're now in partnerships with half a dozen OEMs, with a lot of interest, and with many restaurants, to bring this vision to life: you're driving and can seamlessly order coffee on your way to work, or you're watching football on a Sunday, you see the pizza advertising come on, and you can order pizza. We announced just yesterday a partnership in that voice commerce ecosystem with OpenTable.

Now you can get reservations for dinner on the weekend. We also announced recently a partnership with Parkopedia; if you're driving in a city where parking is limited, you can actually plan where to park more efficiently. All of this makes things more seamless for the end consumer. How do you make it more tractable, to your point? Too many people are unfortunately still fumbling around with their phone while driving, trying to order something; this is deep integration. The model is also innovative in that we're trying to build a flywheel with the manufacturer, in this case the car maker. If you take the example of somebody driving and ordering a pizza to pick up on their way home for dinner, there are commercial economics on the transaction side.

First, you'd search for the pizza. You say, "I want pizza," and on the way home, it'd say, "There are these four places on your way home. Would you like me to select this place?" And you say, "Yeah, I'd like a large pepperoni and a Pepsi. Please place that order." The restaurant's happy because they got lead generation. We would have a convenience fee associated with that transaction, and we share part of the economics with the car manufacturer. So the car manufacturer is incented because they're getting economics on this; they're generating revenue, a whole new revenue stream for them. And imagine them, or a TV manufacturer: device makers can now make money in this new ecosystem. And consumers are happy because they very seamlessly got the dinner they need.
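The fee-sharing flywheel described here can be made concrete with a sketch. The convenience-fee rate and the OEM's share below are invented numbers for illustration only; no actual rates were disclosed in the conversation.

```python
# Hypothetical voice commerce economics for one in-car order.
# The restaurant keeps the full order value; a convenience fee on top
# is split between the platform and the device maker (the OEM).

def split_transaction(order_total: float,
                      convenience_fee_rate: float = 0.05,   # assumed rate
                      oem_share_of_fee: float = 0.40) -> dict:  # assumed split
    """Split a single order among restaurant, platform, and OEM."""
    fee = round(order_total * convenience_fee_rate, 2)
    oem_cut = round(fee * oem_share_of_fee, 2)
    return {
        "restaurant_revenue": order_total,   # restaurant keeps the order value
        "convenience_fee": fee,              # charged on top of the order
        "oem_share": oem_cut,                # car maker's new revenue stream
        "platform_share": round(fee - oem_cut, 2),  # platform's portion
    }
```

On a $20.00 pizza order with these assumed rates, the fee is $1.00, of which $0.40 flows to the OEM; the point is that every party, including the device maker, earns something on the transaction.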

Moderator

So the auto manufacturer wants to have you in the car, or Vizio, as an example, wants to have you in the remote. The restaurant doesn't really care either way; they're getting lead gen, so they're just going to pay it. How does that compare versus some of the others? I mean, how big is that convenience fee, and how much friction does that cause, or doesn't it?

Nitesh Sharan
CFO, SoundHound

We think we can structure the economics in a very attractive manner that's a win for everybody. Delivery services charge pretty hefty fees for lead gen, and we could be a small fraction of that and still make it economically really attractive for us and the device maker. And to your point, restaurants just want traffic, right? They want the person who wants pizza on their way home, and they got that unique customer. We're certainly seeding this; there's an approach to seeding it to make sure there's traction, and over time, we'll see how the model grows. And by the way, it's not just transactional commerce; there's an advertising opportunity in there as well.

Moderator

So the big strategy here, and I think this is your three pillars that you always talk about, where you get the agent into the device, and then you get the agent into the commerce end, so the restaurant, and then all you do is connect those, and that becomes the third pillar of the business, right? Is that how we should think about it?

Nitesh Sharan
CFO, SoundHound

Exactly. Ultimately, the vision is voice-enabling the world with conversational intelligence. We believe in natural language conversation: we've had to learn to type and text with our thumbs really fast, but you don't have to learn how to talk to get things done. We've shifted from the QWERTY keyboard to mobile devices and texting, and the next horizon is natural language conversations, LLMs, and everything we're talking about with generative AI; that is one of the major unlocks this technology provides. That's why we believe customer service as a vertical is such an attractive opportunity in the near-term horizon. To your point, our business model is set up that way. We voice-enable products (cars, TVs, IoT) and get royalty economics. We voice-enable services (restaurants, financial services, healthcare, your appointments, your reservations) with more subscription SaaS economics.

And then, yes, the third pillar ties it together. While you're driving, where we're getting royalty economics, you can connect with a restaurant, where we're getting subscription economics, and you can add on transactional commerce or, again, advertising. So it's bringing this ecosystem together, led by natural conversation, within which voice is the killer app; it's the most natural way to get things done. But we do believe the world is going to be omnichannel, so if it's best for you to interact via WhatsApp or text, those are capabilities we also support.

Moderator

If we fast forward a couple of years, do you think more of your revenue is going to come from the connection revenue, or more from the other two pillars?

Nitesh Sharan
CFO, SoundHound

We think there's huge growth opportunity in all of it. Historically, we've been heavy in pillar one, royalties. We've seen a lot of traction last year and this year, and I'd expect more next year, from our services pillar. And over time, yes, we think the third pillar, voice commerce, is going to contribute a strong amount as well. As for the pacing, they do interact. If you ask me five years out, I think we'll have a really nice, balanced contribution from all three pillars. In the immediate to medium term, it's really pillar two where we're seeing a ton of traction.

Moderator

Got it. And in your earnings calls, you talk about the path to profitability. I think next year, you're projecting EBITDA positive. How can we kind of think about it? What's the driver behind that?

Nitesh Sharan
CFO, SoundHound

Yeah. I indicated that even for Q4 there's a range of revenue outcomes, and a range of profitability outcomes that will tie to that. But yes, as we move into next year, we're moving toward the break-even zone. Ultimately, we're driving hypergrowth, and we want to continue to service that growth because we're underpenetrated relative to the market opportunity in front of us. The serviceable market is hundreds of billions of dollars, we can go after that, and we have a differentiated competitive moat we can leverage. On that journey, I do think moving to the break-even zone makes sense; that's where we need to get to, and that's what we're marching toward. And at scale, as a software company, I do think the long-term profile is a 30-plus% EBIT margin business.

But for the next few years, I think that break-even zone is probably the right place, because we believe the return on every incremental dollar is well in excess of the risk-adjusted cost of capital. So it just makes sense for us to continue to fuel this growth. That will largely come through continuing to build in the ecosystems we've established. Where M&A makes sense, we'll look at those opportunities individually, and we'll continue to consider them and go after them if they make sense for us. But yeah, we've just got to keep executing and driving this growth, and I think that will get us to the right profitability profile.

Moderator

Yeah. And I guess I'm just double-clicking a little on M&A. You talk about this massive TAM, but your served addressable market is relatively small vis-à-vis the TAM, and it doesn't even seem like there's a way you could build into it fast enough organically. Are there a lot of companies out there trying to build solutions that would be technology add-ons? You've found a lot of them. Do you feel like there's a big universe there, or what does the market look like in your mind?

Nitesh Sharan
CFO, SoundHound

Like I said, technologically, in voice in particular but also in the omnichannel architecture, the algorithms, and the data, we feel great about what we have. We think there's a lot of scalability in the technology and the products. In fact, one of the things we're really focused on is unifying our product suite so that no matter which customer you are, you're looking at one common platform; that's an internal effort. A lot of what the acquisitions have provided is almost customer-acquisition-cost efficiency: established legacy companies with deep roots with customers who need an injection of innovation, effectively taking them to this next horizon of AI. We've found great opportunities there.

So when we're diligencing some of these deals and talking to a large money-center, too-big-to-fail bank, and they say, "Voice is really what we want to move into," we say, "Great. That's exactly what we have." That's an affirmation of the investment thesis we're playing out. Companies with deep customer relationships that may be thirsting for more innovation make sense. And as we move more omnichannel, could there be adjacent technology that makes sense? Certainly, we're open. The biggest thing is that this market is moving rapidly. We appreciate the partnership of banks, and we also have our own dedicated team that's always taking the pulse of who's out there, whether for partnership opportunities, to know who the new competitive threats are, or to spot a potential acquisition target.

We're constantly keeping a pulse on it. It's moving rapidly, but we feel really good about what we have. And even in the spaces where we're playing, we know there's a lot of runway, so we've just got to keep going after it aggressively.

Moderator

So at this conference, there's been a tremendous amount of discussion about infrastructure, power, chips, everything that's building the underpinnings of AI. Does that matter to you guys? I mean, how much more computing power, speeds? Are you competing for the same resources as the LLM companies are? Where do you fit in that ecosystem, and how does it impact your business?

Nitesh Sharan
CFO, SoundHound

Yeah. We believe AI is as transformative as whatever analogy people want to use, electricity or so forth. It's probably bigger than the internet, probably bigger than mobile, and it's very early days, so there's going to be a lot of opportunity. So yes, there are infrastructure players out there who need to be thoughtful about scaling these massive models on the pathway to AGI and so forth. But there's also a lot you can do with what's out there right now, the capabilities of the current models.

As we move from generative models to agentic reasoning models, or from large language models to large reasoning models, we sit, in the spaces where we're playing, at the application layer of all this investment, where people are asking, "Okay, how are customers actually going to use this?" That's what enterprises are working on. I'll give you one use case. We work with a large telecommunications company that gets inquiries and uses our engines to handle a lot of those calls. It could be "My Wi-Fi is down," or "What's this billing issue?" and so forth. There was a traditional level of containment for those calls before they went to a human to handle the complicated tail. Well, now these reasoning models and our agentic solutions are just expanding what's possible.

With our embedding of these latest capabilities, what used to be handled at 50% containment is moving to 80% containment. As an example with a telecommunications provider: somebody calls in and says, "My bill went up this month. What's going on?" Traditional models used to say, "Oh, shoot, we've got to move that to a human." But now the AI itself can handle it. It'll say, "Let me look back at your historical bills. Oh, you had a discount going, and that discount has lapsed. That's why your bill went up $10 this month." That application layer is where we fit; we sit on top of all these infrastructure investments. Obviously, it supports us when we can integrate.
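The billing example above can be sketched as a tiny agentic tool: instead of escalating, the agent inspects billing history and attributes the change. The data shapes, field names, and messages are illustrative assumptions, not the telecom customer's actual system.

```python
# Sketch of the "why did my bill go up?" containment case: compare the
# two most recent bills and attribute the increase to a lapsed discount
# before falling back to a human agent.

def explain_bill_increase(bills: list) -> str:
    """Each bill is a dict like {"total": 50.00, "discounts": ["promo"]}."""
    if len(bills) < 2:
        return "I don't have enough billing history; let me transfer you."
    prev, curr = bills[-2], bills[-1]
    delta = curr["total"] - prev["total"]
    if delta <= 0:
        return "Your bill didn't go up this month."
    # Did a promotional discount lapse between the two bills?
    lapsed = set(prev.get("discounts", [])) - set(curr.get("discounts", []))
    if lapsed:
        return (f"Your bill went up ${delta:.2f} because the "
                f"{', '.join(sorted(lapsed))} discount lapsed.")
    # Unexplained increase: this is the residual tail for a human.
    return f"Your bill went up ${delta:.2f}; let me transfer you to an agent."

history = [
    {"total": 50.00, "discounts": ["new-customer"]},
    {"total": 60.00, "discounts": []},
]
print(explain_bill_increase(history))
# -> Your bill went up $10.00 because the new-customer discount lapsed.
```

The containment gain comes from the middle branch: inquiries that a rule-based IVR would have escalated get a resolved answer, and only the genuinely unexplained cases still reach a human.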

We use a cloud provider for a lot of our cloud services, but we're also unique in that we provide edge solutions, and more and more people are trying to see what you can do with smaller footprints. When I mentioned our Polaris benchmarks earlier, we not only do it faster and more accurately, we've done it on a model that's one-tenth the size. That's a huge differentiation: when everybody's saying bigger and bigger and bigger, we can do a lot more with less. And as inference costs come down really rapidly, that's a benefit to our gross margins. So we deploy at the application layer, where customers actually use it. Customer service is an easy one for people to understand, or ordering your food at a restaurant.

That's sort of the application layer where we play.

Moderator

Got it. And going back to Star Trek for a second: I saw one of the newer Star Trek movies recently, and there weren't any iPhones in it at all. So when you think about the vision of where this all goes, are we talking about a deviceless world where everything is essentially in the cloud and the primary HMI is verbal? What is the way-down-the-line vision? How do you think about that?

Nitesh Sharan
CFO, SoundHound

Yeah. The vision of this company was about more ambient computing, where there'll be robots doing amazing things, and you'll just talk to your coffee machine and tell your elevator what it needs to do. And I do think there's going to be compounding. It's never a clean move from one era to the next; again, we're still working with those clunky keyboards, and IBM still makes plenty of money on mainframes. These things compound over time. We believe natural conversation will be the next major inflection. When you're driving, it's natural, and it's safer not to fumble around with something. When you're watching TV, the biggest screen and the most entertainment in your house, being able to interact naturally makes sense.

Customer service is an easy application, where most people today are still frustrated with the traditional architectures and legacy infrastructure. Having a natural conversation interface there, whether through the phone or the web or whatever, the medium doesn't matter. We'll be multimodal. We'll be omnichannel. So yes, absolutely, the vision is that it will be very ambient and pervasive.

Moderator

How far to the edge does this eventually go? Is it always going to be a central interface, like, not exactly, but kind of like an Alexa that then speaks to devices? Or does this actually go all the way to the edge? A light switch could be enabled because you don't need a screen, right? Where do you think that eventually, how far to the edge does it get?

Nitesh Sharan
CFO, SoundHound

Yeah. We had a deal this last quarter with a robotics company in China, and think of something as simple as a light bulb: the light bulb goes out, and you can just handle replacing it through voice, through natural language conversation. We're working with smart appliance makers today. We do believe the device side is a very underpenetrated opportunity. One of the real distinguishing features of voice AI is that you don't need the economics of a screen or a GUI interface; you just need a very cheap, inexpensive microphone to unlock the power of voice AI. So ultimately it absolutely can be pervasive through any modality. Where we're going at it first is where it makes a lot of sense, like the drive-through: a captive driver on the way in, ordering their cheeseburger and French fries.

Converting that busy human recipient of the order to AI makes a lot of sense. Or a call center capability. That's where we think the immediate horizon is, but long-term, certainly, that pervasiveness is what the vision is about.

Moderator

Awesome. And I have one more question, but are there any questions from the group here? Okay. Let me hit you with this one. Voice ordering at the drive-through makes a ton of sense; it's a use case we can all get on board with, having all had experiences that were less than fruitful. But it's taking a little while. What do you think is taking so long? Is it the CapEx? Is it the technology? What's the holdup that keeps us from seeing this everywhere all at once?

Nitesh Sharan
CFO, SoundHound

Yeah. In restaurants in general, there is a clear need for AI, both for cost containment and for offsetting other commodity costs and inflationary pressures. There's also a consistency-of-service element that AI provides, and there are possibilities of revenue uplift. On the drive-through specifically, there's sometimes a hardware element. When we work with certain restaurant customers who have to retrofit their drive-through, they might need to put in a digital order confirmation board or upgrade their microphone or headset speaker system; sometimes that's the gate. We've done some things in partnership with White Castle, one of our earliest drive-through partners, and we've now scaled into dozens of their locations around the country.

We were able to move more quickly with a simple, seamless POS setup with a small screen that went faster through their permitting process, and that enabled us to scale. So sometimes there are implementation requirements like that that govern it. More broadly with restaurants, sometimes it's the structure of the business: you can get an MSA at the corporate level, but you still have to sell into franchisees. A partner like Jersey Mike's actually works with us to incent that kind of franchise adoption. So there are a couple of gating items, plus the complexity of working with a fragmented restaurant space with different point-of-sale systems and infrastructure. But clearly, the demand is there. We're growing; this year, we've been adding 1,000 locations every quarter.

I do believe it can inflect even greater than that, and we're pushing hard, and we have a dedicated team that's doing a great job on the ground trying to fight for every incremental expansion of that market every day.

Moderator

Awesome. Well, Nitesh, thank you for your time. You continue to amaze us with how much progress the business makes, both financially and in how far the reach keeps extending. I think we're all looking forward to where it's going, for our own convenience's sake. Thanks for being here. We really appreciate your time.

Nitesh Sharan
CFO, SoundHound

Thanks for having me, Rob.

Moderator

You bet.

Nitesh Sharan
CFO, SoundHound

All right.
