Hello, everyone, and good afternoon. Thank you for joining us on the second day of Needham's 27th Annual Growth Conference. My name is Nick Doyle, and I am an analyst on the semiconductor team here at Needham. It's my pleasure to host this fireside chat with Cerence. Cerence is an innovative partner to the world's leading automakers and mobility OEMs. The company is advancing the future of connected mobility through intuitive AI-powered interaction between humans and vehicles, connecting consumers' digital lives to their daily journeys, no matter where they are. Joining me from the company are Brian Krzanich, CEO, and Tony Rodriquez, CFO. Brian and Tony, thank you for joining us.
Thanks.
Thank you.
Pleasure to be here.
Some background questions to start. For those new investors in the room, Brian, could you start us by just giving a high-level overview of what Cerence does?
Sure. So if you think about when you're driving a car, there are the most basic features. Most of us have the ability to either push a button, or sometimes in the more advanced cars today, you can just talk to the car. And it'll do simple functions, right? A text message comes in; it'll read that text message for you. It'll ask you, do you want to respond? And you can respond to it. You may be able to do simple functions in your car. But it may today require very specific words: turn seat heat on, connect to Bluetooth. That is all powered by our basic models that are out there today. In the future, all of that becomes large language models. And all of that becomes just natural language. So you can just say, my feet are cold.
It'll know to turn on the heat, hey, would you open the charging port so I can charge? It'll do that. So we're building the voice and language interface that allows you to interact with your car and the outside world. Our goal is that you no longer need to use your phone in the car, like you do today, but that you can do it all within the vehicle itself using just natural language.
Great. Brian, you're just about three months on the job. Can you give us a brief background of yourself and then give us a sense of what attracted you to Cerence?
Sure. So a brief history of myself: 42 years in technology. 36 of those years, I worked at Intel. And then I went and worked at CDK for four years, roughly. And then I took about almost two years off. And then, as Nick said, three months ago, roughly, came to Cerence. What attracted me is I knew a couple of the board members that were on the board just through other business relationships. They asked, they called and said, hey, would you consider coming and looking at this company and possibly coming and running it? And what I saw in Cerence, I knew Cerence from Nuance. Cerence was a spinoff from Nuance, which is a company that was sold to Microsoft. Great technology, really leading edge in voice and architecture and around large language model implementations.
But it was undervalued and really could use strategic management, leadership, and good discipline around how they ran their business. And so to me, it was very attractive: undervalued, in need of better leadership, but with great technology. That combination, plus a board that I really enjoyed. I liked my board, and I knew about half of them pretty well.
OK, and Tony, you're relatively new as well. Can you give us the same brief background and what made you want to take the full-time role, which was also about two, three months ago?
Yes. Thanks. Yeah. So, Tony Rodriquez, my background is public accounting at KPMG. And then after leaving KPMG, I've been the CFO of publicly traded and private companies over the years. Two companies ago, we actually sold a business to Nuance. So that was my brief connection to Nuance: we had sold them a technology business. The company had a transition of CFOs in our fiscal 2024. We had two previous CFOs, one that lasted, I think, only two months. So I parachuted in to help a public company that did not have a seated CFO. So I took the interim position. And similar to Brian, I saw the opportunity with the business. At the time I came in as interim, the company had pulled back guidance for fiscal year 2025 and anything beyond 2024.
So, I was able to help the team put together the 2025 forecast budget and guidance and saw the opportunity there that we could restructure the business and get it to cash flow positive, grow the top line, and then looked at the underlying technology as well and saw the opportunity. And then lastly, really wanted to work with Brian as well.
Tony, you just mentioned a sale, and I know, Brian, you had worked with CDK, and that successfully sold. So how do you answer investor questions when they ask, "should we expect a sale?"
The first answer is there's absolutely no work being done on a sale whatsoever within the company. There's nothing in the works, nothing starting up next week that I'm just not telling you about. There's nothing on the roadmap right now. I can tell you how I think about sales. When I think about selling a company, it's really because either you're approached from the outside and somebody just walks in and says, hey, I really want this technology, and it's unique. And so you have to consider that as a public company, and you have to consider that against what you think the price can be for the stock and the growth and all of that, and do the best for your shareholders.
Or you've run the company growth and market value to a point where you think, OK, my growth rate's kind of stagnant. Here's about how fast I can grow. The market's already captured most of that value. And now, in order to do the next thing, I either need to do some big acquisition or some big internal investment to grow inorganically. And I weigh doing that and taking my shareholders through that process versus selling and letting somebody else do that, maybe a strategic or maybe a private equity or something like that. That's how I think about when you would want to sell a company. And I just don't see Cerence in either one of those boats right now. But you never know who's going to knock on your door. So I won't discount that.
OK. And then last kind of background question. Brian, what has surprised you the most about the organization since taking the helm? And what relationships have required the most attention in the first few months?
So they're somewhat the same. Definitely, the relationships that have required the most attention have been with the customers, the OEMs. The OEMs are really going through their own transitions and trials as they deal with a very dynamic and very fast-moving environment. I think what surprised me the most is that I just don't see them moving. The traditional OEMs, in my opinion, aren't moving fast enough relative to, I'll call it, the upstart competition. Whether that upstart competition is from China, where they don't have the traditional infrastructure and structure within the organization, or whether it comes from Western companies like Tesla and Lucid and Rivian. Those companies are moving very fast. They're adopting new ways and new methods of doing things, new features, new capabilities, whether it be large language models or just how the user can interface.
And I see the Western OEMs, whether they be Western Europe or North America, much slower in that uptake. And it's often the dynamics within the company. And I just had hoped that had changed much more than I think it has. And so in some ways, that's good for us because we are bringing that technology. We can help them. And in some ways, it's slowing us down because we could do even more inside your car than what they're willing to accept right now or able to accept right now.
And then a couple of questions we're asking all our companies. First is just looking at the auto market broadly, what trends do you see playing out in the new vehicle market over the next three to five years? And how are you positioning your business to benefit from those trends?
I mean, I can start, and Tony can add in. On trends: everything's going to large language models. Everything. By the end of this year, our full stack, even just how we train new words, is a large language model. So if they come out with a charging port that needs a different name, everything is large language model, from the simplest task we have to how they do calls on what you can do within the car. And what that provides is a much freer conversation within your car. But I also see the competition, as we said, between China and those emerging markets and the emerging players, even outside of China, and the traditional OEMs. That is increasing. And then there's always competition. The competition is continuing within our environment as well.
Yeah. I think it's really how consumers interact with the car. And we're positioned well for that. We're all comfortable talking to our phone now and having it do something. Within the car, it's bifurcated today: you can do things outside the car on your phone, or you can talk to your car through rudimentary commands without large language models. We combine those so that, as a consumer, you can just get in your car and have voice be the command within the car, and the command outside the car for whatever you would normally do on your phone, in a branded situation. So if you're in the cockpit of a Mercedes or a Porsche, you are that branded customer, not beholden to outside big tech.
And then, second, if the Trump administration increases tariffs, gradually being the word of the day, on Chinese or really any other country's imports, how would that affect your business?
So that's interesting. I think, A, we don't know what's going to happen, right? And so we'll kind of see. But if you take a look at our business model, we have China. We are in many of the Chinese cars outside of China. We don't have any real business on Chinese cars inside of China. We have pretty good business of, I'll call it, the Western cars, whether Western European or North America, in China. So if I look at that dynamic, if tariffs are applied against China on either geography, it's not going to really change our distribution really that much. We weren't in China to any large extent with the Chinese vehicles. And the Chinese vehicles aren't really a big presence in North America or even Western Europe for the most part right now. I don't think we'll see, and that's my opinion.
I'm hoping that we won't see a lot of tariffs between, say, North America and Western Europe. Even still, so many of those Western European cars are now built in the U.S. with European design. Right now, our model assumes for 2025 that the tariffs have minimal impact on our business. And that's why, right? We're not really in China heavily, except for the Western brands, for which we've already assumed the decrease in volumes that we've been seeing. The Chinese volumes aren't big in the U.S. or even in Western Europe yet. We didn't see a big change based on tariffs.
Brian, maybe you could give us a sense of the competitive landscape. Why does Cerence have a swim lane servicing automotive OEMs? And why isn't this something that big tech should dominate?
So I'll start with a lot of what Tony just said about us bringing that branded experience. So our goal is, and there are several OEMs and car models that you're starting to see this presence in. And into 2025, into 2026, and 2027, you'll see it more and more. Our goal is when you get in your car, you can leave your phone in your pocket. Right now, nobody leaves their phone in their pocket. Everybody lays their phone somewhere in the cockpit so that they can have access to it. But there's really no need for that because you're connected usually through Bluetooth to the car. And really, the only function that phone should need to do is a phone call. But we can call for that through the vehicle itself and that Bluetooth connection.
To Nick's question, why us versus why doesn't Google do that, or why not Apple, right? It's because we can provide that branded experience. We're seeing more and more of that. We have a great example with Renault right now. They launched a car, mostly in Europe right now and working its way across the world, with a virtual assistant called Reno. It's all based on our technology and the first of our large language models, mostly up in the cloud; there's not that much embedded. But that assistant, and really the future of large language models, is becoming agentic, or what we call agentic: basically agents that can solve problems for you. What we do is we have the flexibility that the OEM can choose what those services are and who provides them.
So when, for example, you want to do a simple internet search, we can send that search to Google. We can send it to their Gemini large language model. We can send that to Llama. We can send that to ChatGPT. We can send that to any of the search providers they want. If that search is about a restaurant, they may want to go to Yelp or the Western European versions of Yelp or some other localized search engine. We have that ability and that freedom. When you do a search, rather than returning the top sponsored responses, we can customize that response to what the OEM wants. Suppose the OEM has certain relationships locally with restaurants or charging stations, or they have their own network of charging, and they want the charging answers to come up with their answers. We can send that request over to them.
So that customization, the wake-up words, what it's called, the avatar: we have the ability and willingness to be agnostic and allow customization. We even have, at the end of this year, the ability to let the end user, you, customize the system, such that you could use specific wake-up words, call your car a certain name, do facial recognition. All of these kinds of things can go in and be agnostic. That's the differentiator that we provide to the OEM over the large, I'll call it, big tech guys. They tend to want to have you locked into their LLMs or their search responses or their business model, right, if it's Amazon or Google. And so the OEMs, as they are worried about building or losing that relationship with you, the end user, they want that customization. They want to own that relationship.
We're willing to just build that customization to allow them to do that. That's where, when we go in and talk to them, the real value comes from. There are some technology values as well. For example, we've invested heavily in embedded, basically shrinking the large language models to a small enough size, both in memory and compute, to allow much of the large language model to run on the car rather than in the cloud. Some of the big tech guys want everything to be in the cloud because that's where they make their money and that's where they reside. That's a lot more expensive for the OEM. The more of the workload they can get on the car itself, the better it is for them because it's cheaper.
We have invested heavily in that and have quite a bit more capability embedded than the competition. That's another place where we add technical value to the OEMs, in reducing their costs. They reduce their costs by using us: by competing the LLMs against each other and choosing cheaper versions, and by reducing their cloud expense by moving more onto the car itself. And we allow them to own the end user personality and experience.
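The provider-agnostic routing described above can be pictured as a simple dispatch table. This is an illustrative sketch only, assuming a per-intent provider map; the intent categories and provider names are hypothetical and are not Cerence's actual architecture or any partner's API.

```python
# Sketch of OEM-configurable request routing: the OEM decides which
# backend answers each kind of request, with a fallback default.
# All names here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class OemRouting:
    # Map an intent category to the provider the OEM has chosen.
    providers: dict = field(default_factory=lambda: {
        "general_search": "gemini",
        "restaurants": "yelp",
        "charging": "oem_charging_network",
    })
    default: str = "chatgpt"

    def route(self, intent: str) -> str:
        """Return the backend that should answer this intent."""
        return self.providers.get(intent, self.default)

routing = OemRouting()
print(routing.route("charging"))  # oem_charging_network
print(routing.route("weather"))   # not configured, falls back to chatgpt
```

The point of the design is that swapping Gemini for Llama, or Yelp for a local search engine, is a configuration change by the OEM rather than a platform rewrite.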
A couple of follow-ups there for that long but good answer. The Renault win that you talked about, that's important because this was a win back from Google, correct?
Yes, it was, and we have several examples of that where we've won back business from Google and others.
And then the side note is they built this avatar, this Reno. And I think that launched like this month or very recently.
Yeah. It launched in November of last year.
But the Renault win back, I mean, how much of that discussion was because of this customization factor versus price?
Oh, it was almost 100%. Two things. The real-time customization we could provide. They literally have this little avatar, this little guy they call Reno, who shows up on your screen and walks you through things, and you can talk to it. And then our roadmap of further customizations, moving everything to LLM. And so price kind of plays into that. But they really wanted to have this experience that you have now. If you went to CES last week, Harman had another really good example. They called it Lumi. And it was a great example of this avatar-like experience where it's all large language models, and you can just talk to this assistant or agent. And it really walks you through whatever you want to do with the car.
And then, on the topic of the edge models or the SLMs, is this almost exclusive with NVIDIA? You're doing this across the board, but you are partnering with NVIDIA a little more closely. And my understanding was that part of that is because big tech, the hyperscalers, would prefer to run this business in the cloud, whereas guys like NVIDIA, or maybe other players trying to get in, have compute they're able to provide and would prefer it to run on the edge. So the question is, how much are you working with NVIDIA to develop this edge solution? Because an important point was that the cars today aren't exactly ready for this edge model. They'll need this next generation of hardware to run it. And that's where this opportunity is, is my understanding.
Yeah. So you're right. So a lot of the work is around shrinking the footprint of those large language models. A lot of people now call them SLMs, or small language models. Because the way a large language model works, right, is it looks at all the possibilities for the next word, and then it picks one. Sometimes it's the most likely, and sometimes it's the one that, for some reason, it likes the most. But those models typically use a lot of memory and a lot of compute. And shrinking them down is very complex. NVIDIA has given us access to their engineering teams and their tool sets to really help us shrink down these models and get the footprint to be on the car. At the same time, we're working with them.
They know they need to reduce the cost and the footprint of their hardware as well. And so we're working with them and some other partners that I'll let them talk about in the future that will give them more access to the hardware space. And then we have the relationships in this space with the OEMs. And so we're bringing them into OEM discussions about the future roadmap and where they could play a bigger role moving forward. So it's really a good win-win collaboration. You also saw an announcement last quarter with Microsoft. That's also to help our large language models. And they're providing additional tools and engineering teams to help us improve the model.
So we're getting a lot of support from, I'll call it, the other big tech guys who may not necessarily be competing with us to help us improve our models to really compete in this space, whether it be in embedded or in the cloud versions.
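To put rough numbers on why shrinking the footprint matters, here is a back-of-the-envelope memory calculation. The model size and bit widths are assumptions for illustration only, not Cerence or NVIDIA figures; weight memory is roughly parameter count times bytes per parameter.

```python
# Why quantization matters for in-car LLMs: weight memory scales with
# bits per parameter. Numbers below are illustrative assumptions.
def model_memory_gb(params_billions: float, bits_per_param: int) -> float:
    """Approximate weight-storage footprint of a model, in gigabytes."""
    bytes_total = params_billions * 1e9 * bits_per_param / 8
    return bytes_total / 1e9

# A hypothetical 7B-parameter model at 16-bit vs. 4-bit weights:
print(model_memory_gb(7, 16))  # 14.0 GB -- impractical for a head unit
print(model_memory_gb(7, 4))   # 3.5 GB  -- far closer to embedded reach
```

The same logic is why moving workload onto the car is cheaper for the OEM: every query answered from a small on-vehicle model is a query that never incurs cloud inference cost.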
OK. And then one more on competition. I mean, I really view the competition as primarily Google, a little Amazon. But I'm very surprised in how many investors come and just assume that SoundHound's a real competitor. And it's interesting because you guys have such a different view. So I guess, is SoundHound a competitor?
Sure, they're a competitor. We take every competitor seriously. We don't discount any competitor. If I looked at the last 10 deals that we had to compete for, maybe they were in one or two, whereas Google and Amazon were probably in at least seven of those 10. We see the Google and Amazon guys in many more of the deals. SoundHound's out there. We take them seriously. They're a good engineering team. I know Keyvan, who's the CEO and founder. He and I talk. We have a good relationship. We're going to take them seriously. I think, in this kind of technology world, you have to compete by staying ahead of the other guys. We're on a yearly cadence of revision after revision after revision.
Every year, we have to bring new features at a better cost to our customers, or else whether it's SoundHound or whether it's Amazon or it's Google, somebody will eat us. It's just the way it is.
OK. I'll pause there just before we go into product questions, billing, stuff like that, if the audience has any questions.
Oh, we've answered them all now.
All right. For products, can you go into a bit more detail about what kinds of features within the car that Cerence software can control? From what I understand, this isn't just focusing on infotainment. You guys can control the temperature, the seats, and other functions within the car. Is that right?
Yeah. So depending on what car you guys have and what year it is and even what make or model, you're going to find different levels. I have an old Ford F-250. I have to push a button, and I have to say just the right word for it to do anything, right? It's built on Cerence technology, but I'll call it ancient and cryptic compared to today's world. My wife has a Mercedes from 2024. You don't have to push any buttons. You can say, hey, Mercedes, my feet are cold. My butt is cold. Can you crack the window? And it can do all of those things. It'll turn on the seat heat or turn on the heater for your feet or crack the window 10%. If you say, hey, could you crack the window a little more? You can do that.
You can interrupt it. So we can do most features within a modern car. If you go buy a car, and it's based on Cerence, and the OEM puts our latest version in it, almost everything, including, hey, can you open the charging port so I can charge? I'm at a charging station. It can do almost everything. And we could absolutely do anything the OEM will give us access to. It's just a little bit dependent on what they're ready to expose externally, based on their confidence in the hardware itself. And then, like I said, even things like, hey, can you change the lights in here from red to blue, or anything like that. You can do all of those kinds of features within a car today.
And then, like I said, in the future, we'll go well outside the car.
OK. And then, Tony, this will kind of help answer almost a business model type question. But of these applications just listed, which are licensing versus connected services?
So yeah, if you look at those two line items within our SEC reporting, we have a license line item, and that's the embedded product. So as a car ships with an embedded product, that drops down into revenue in that period for that billing. The connected pieces are our connected services business. That is a little bit different: we still bill for it as the car ships, but it amortizes over a period of time.
So if I am in my car and I say, "my feet are cold," is that billed as connected services, or is that just part of the overall embedded solution that's already been shipped?
Yeah. There's no transactional billing for that interaction. We would have charged our customer for that connected services subscription over time, regardless of how much usage occurs within that time frame. So it's not usage-based. It's time-based over the subscription.
OK. So what, I guess, is the attach rate between those two line items? Every time I bill a license, I'll also always have a connected service?
So every connected service has a license, but not every license has a connected service. And so the growth will be twofold. One is to increase the price per unit of the connected services that we're charging, which we'll recognize over time, and then to get deeper into those cars that have connected services.
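The license-versus-connected split Tony describes can be illustrated with a toy recognition schedule. The fees and subscription term below are invented numbers, and the sketch assumes ratable (straight-line) recognition of the connected subscription, which matches the time-based, non-usage description above.

```python
# Toy model of the two revenue streams per shipped car: the embedded
# license is recognized at shipment; the connected-services fee is
# billed up front but recognized evenly over the subscription term.
# Dollar amounts and term length are illustrative assumptions.
def revenue_schedule(license_fee: float, connected_fee: float,
                     term_months: int) -> list:
    """Revenue recognized in each month after the car ships."""
    monthly_connected = connected_fee / term_months
    schedule = [monthly_connected] * term_months
    schedule[0] += license_fee  # license drops into period-one revenue
    return schedule

sched = revenue_schedule(license_fee=20.0, connected_fee=48.0, term_months=48)
print(sched[0])    # 21.0 -> license plus the first month of connected
print(sum(sched))  # 68.0 -> total revenue equals the combined billing
```

This is also why billings can outpace revenue in a period: the full connected fee is billed when the car ships, while only a sliver of it is recognized that month.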
OK. Brian, at CES, we talked a bit more about your second, I'll call it, second-gen Chat Pro type product. Can you tell us in layperson terms what these LLM-based products are? What do they do? What's the difference between the core assistant and these Chat Pro Gen 1, Gen 2 products?
Yeah. So, I'm just trying to think through a little bit how not to confuse you with all of this. But what you're moving to with the first- and second-generation Chat LLM products is more and more features and more and more callouts to LLMs, so more and more capability. So, I can now get in my car. I can say, "Hey, car, drive me to this address on Main Street." And then I can say, "And by the way, can you find a Starbucks closest to my route and have me stop there?" And, by the way, can you text Tony to tell him, "Hey, I'm going to be at work at 8:05 A.M., and I'd like him to come by my desk," right? So, I can do all that. It will then start routing me.
It'll say, "Hey, here are the three Starbucks all on your route. Pick one." I pick one. It'll route me there. It'll then route me to work. It'll then say, "Here's the text I'm going to send to Tony. Is this OK?" I can get that text, and I can go, "Hey, I'm kind of in a funny mood today. Can you make it funnier? Or can you make it more casual? Or can you make it more stern, right? I'm mad at Tony, and I want him at my office. Damn it." And I can do all of that. And it will re-edit the text for me. I don't have to touch it. I didn't have to say anything or do anything else. And it'll say, "Here's my new option for your text. Are you ready to send it?"
And I can just say yes, and it'll send off that text. So now I've come into the car, and I've routed to my Starbucks. I've routed to work. I've sent Tony a text. It's in the framework and emotion that I want it to be in. And I'm off and going, right? And I'm driving along. That's where these next generations go. And then also, our full stack becomes an LLM. So our cost reduction, our effort in training these models and providing new services and new capabilities goes down. So it has a cost advantage for us as well. Everything we do becomes an LLM, how we train everything, how we bring new words in, new features.
We're even asking LLMs now to write some of the code for us and actually replace our software developers for some of the basic code. I can say now, "Hey, in this spot, can you write a callout to ChatGPT? Or in this one, write a callout to Llama." And it will just automatically write that code for me. And I can reduce workload that way. And that's all coming in our second generation, which will come by the end of this year.
Will it say, did you ask Tony if he wanted a coffee?
No. I forgot to ask that one.
I think we had talked about kind of this second generation kind of integrating the application layer. In this example, I mean, can it also go to the Starbucks app and buy the coffee?
Yeah. So it depends on either what relationships we have, and we're trying to start to build some of these relationships. You see us announce some of them with companies like 4.screen, or what the OEM may have, or what just comes in naturally, right, from Starbucks or third parties, Yelp or whoever. If that restaurant or company allows online ordering and we have the ability to make that connection, they'll offer up basically an API that lets us go do that, or their own LLM. Some of these companies, big ones like Starbucks, are starting to build LLMs themselves as agents to be on the other side. Then we can connect to that. We could order your standard coffee, pay for it, and have it waiting for you when you show up. We have the ability to do payments as well.
So again, I want you to just leave your phone in your pocket. Phone's important. It's critical. But I want you to just get in your car, drive, and talk to it. And that's our model.
Tony, how should we think about these new LLM-based products impacting the model? How do the prices compare versus the corporate average?
Yeah, so one thing I will say is that it's been a little confusing to try to do a price-times-quantity model within our organization. There's been a lot of noise, whether it's legacy contracts that have no cash or the amount of prepaid licenses that are in there. We're going to get to a point where we can guide to what our effective PPU is: the value of the license that went out and was recognized in revenue, plus the billing for that connected service. And we'll be able to guide to the trajectory of that PPU. But what I would say is that given that these new products are higher value to the consumers, and therefore higher value to the OEMs, they do ascribe a higher PPU.
And part of our growth strategy is increased PPU within the connected products. And then that combined product, both connected and embedded together, will have a higher PPU.
OK. I'll pause there, and then I have two more questions I'd like to ask.
You're asking for questions for the audience, right?
Yes.
No. There's one.
How much penetration or opportunity do you see in, I'll call it, the developing world, whether it's India or Brazil, where you've spent time? Do these features have utility value in those cities?
Yeah. I'll tell you, South America, Latin America, actually even parts of Africa: I see the Chinese there. So a lot of our business is with guys like BYD.
They are serving that, and we're in there, right? Because we're much stronger in those languages and those localizations, right, than the vendors that are there. With India, so the Indian automakers, we're in. We've been in several of them. We just haven't really seen big volumes and the product materialize. So our India business tends to be either the Western Europeans in there or some of the Asian players, like the Hyundais, or the Japanese. It's more traditional, and you're right. I think your implication was they tend to be more the lower-end applications, and that's absolutely true. Yes. Yes.
What goes into the quarterly billings number, and what gives you confidence you'll hit the $290 million billings number for 2025?
What I'll say is, again, we're in a quiet period, but we'll talk about some of the guidance that we've already put out. One of the things that's always been asked is: you've got revenue guidance and EBITDA guidance, and your free cash flow is outpacing your EBITDA. Part of that is that the billings are anticipated within that guidance to outpace revenue. What goes into quarterly billings? It is really two things. As we ship a car, there's the license, which drops down into revenue that period, and the connected service billing, which will get recognized over a period of time. Then if we do a prepaid deal, that billing would happen in that period as well.
OK.
And.
And professional services.
Yes. Right.
Your guide for fiscal 2025 is for $20 million-$30 million in free cash flow. But it seems like there's some one-time headwinds embedded in that. Can you talk about that?
I would say we've guided $20 million-$30 million. I'm not sure about the headwinds in that. Again, we believe that the billings will outpace revenue. So we'll get a positive working-capital benefit in our cash flow for the year, and that continues as the connected business grows. The biggest thing in that positive cash flow for this year compared to 2024 is the fact that we did a lot of hard work on restructuring in Q4 of last year. We actually saw some of that benefit within the quarter as we announced our Q4. But we'll get more of that benefit in 2025 and then the full amount of that benefit beyond that. With solid billings and revenue and decreased operating expenses, we'll see our way through to that cash guidance.
OK. What I was referring to is the fixed contract balance and kind of the difference between the consumption and the new payments.
Yeah. Well, that's actually a benefit for us this year. Within mid-range guidance, we've got $20 million of fixed-license or prepaid-license deals this year. That's lower than the last three years: last year it was $30 million; the year before that, $36 million; the year before that, $60 million. So two things on that. One is, to achieve that cash flow, we only need $20 million, and we've got demand for more than $20 million of that. But what is also happening at the same time is that previously recognized prepaid licenses are expiring, if you will. And as we go through the year, as we then ship a license, it will actually go into a billing and then eventually cash in the year. So we could see really two benefits, the first being that the previously recognized prepaids are now being consumed or have been consumed.
And so we'll have a lower amount of that in fiscal 2025, and we see fixed-license revenue at a reasonable number.
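The prepaid-license dynamic Tony walks through can be sketched in a few lines. The unit counts and prices below are invented: when a shipment consumes a previously prepaid license, revenue is recognized but no new billing or cash occurs; once the prepaid pool is exhausted, the same shipment also produces a billing.

```python
# Sketch of why billings lag revenue while prepaid licenses are being
# consumed, then catch up once the prepaid pool runs out.
# All quantities and prices are illustrative assumptions.
def billings_and_revenue(units_shipped: int, price_per_unit: float,
                         prepaid_units: int) -> tuple:
    """Return (revenue, billings) for a period's license shipments."""
    consumed = min(units_shipped, prepaid_units)  # covered by prepaids
    billed_units = units_shipped - consumed       # generate new billings
    revenue = units_shipped * price_per_unit      # recognized either way
    billings = billed_units * price_per_unit
    return revenue, billings

# While the prepaid pool covers shipments, billings trail revenue:
print(billings_and_revenue(100, 5.0, prepaid_units=100))  # (500.0, 0.0)
# After the pool is consumed, billings catch up to revenue:
print(billings_and_revenue(100, 5.0, prepaid_units=0))    # (500.0, 500.0)
```

This is the "two benefits" point: a smaller new-prepaid number in fiscal 2025, plus shipments that now convert directly into billings and cash because the old prepaid balance has largely burned down.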
I think moving forward, that kind of fixed contract balance will continue burning down.
Yes. Yes.
I think that's all the time we have today. Thank you so much.
Thanks, Nick.
Yep.
Thanks, everybody.
Thank you.