C.H. Robinson Worldwide, Inc. (CHRW)

UBS’s 2025 Global Technology and AI Conference

Dec 3, 2025

Seth Gilbert
Software Analyst, UBS

All right. Thanks for joining us today at the UBS Tech Conference. My name is Seth Gilbert. I'm one of the mid-cap software analysts here at UBS. And today we're joined by the entire C.H. Robinson crew. Thanks for joining us. We've got Dave Bozeman, CEO; Damon Lee, CFO; and Arun Rajan, Chief Strategy and Innovation Officer. I'm sure everyone's heard the C.H. Robinson name, but maybe you could give us a brief overview of the company and the business model for some of our tech investors. And it is the AI Conference, so maybe at the end you can touch on how you're using generative AI and give us a tangible example.

Dave Bozeman
CEO, C.H. Robinson

Yeah. Very good. Well, pleasure to meet you, Seth, and happy to be here. Just real quick, an overview of Robinson. We are essentially one of the largest logistics providers, and at our core, we move the products that really power the world. We do that every day at scale: 37 million shipments annually, over 83,000 customers, and we interact with over 450,000 carriers. So really big scale from a logistics platform. But we also do solutions — we think we have the best logisticians in the world. The way it works is really a two-sided marketplace. On one side, you have shippers that want to ship goods. On the other side, you have carriers that want to move those goods.

We sit in the middle and we broker that transaction between the two. Why is that an advantage? Because we give shippers access to a vast number of carriers, with pricing, to move their goods all around the world, and we give carriers access to that freight. And we do that every day with some of the best people in the world. We have been undergoing a transformation, and it's been quite successful. It's based on a lean operating model, which is really a continuous-improvement culture change inside Robinson, and that has really supercharged our people and our technology. Arun and team have done a really nice job going from machine learning.

We moved into generative AI, which has been really successful — we'll talk about that — and even into the advent of agentic AI, which is where we're moving now. But if you think about just a tangible example, and we will go further into this: how have we moved the needle, and why is it a game changer? It's a game changer because the order-to-cash process in our industry and in our company is fairly manual, full of friction-heavy handoffs. This would be quoting and appointments and things that require human interaction. That lent itself well to the advent of generative AI: going in and automating those processes. And that's what the team has done — attack that process. What's been the end result? A 40% productivity increase since the end of 2022.

One tangible example would be quoting. At our scale, we get thousands and thousands of quotes. Those quotes are, "Hey, C.H. Robinson, can you please move these goods from point A to point B? We would like a quotation on what it would take to do that." Yesterday, that was humans processing each quotation as it came in. Today we have an agent that is built, that is very mature, that is at scale. That agent takes what used to take about 15 to 17 minutes per quote and does it in about 30 seconds. And it sends the heuristics of that load back to the customer in a very conversational manner. What's been the benefit of that?

Obviously productivity, but it's also allowed us to get to 100% of those opportunities, whereas before we could only get to about 65% of those quotes. So that's been a game changer. And that's but one example. At the end of the day, we have a lot more grass to cut. We feel really good about where we are. Happy to be here. We are an industrial company that's getting AI benefits. And I think we've been under the radar. Today we're no longer under the radar.

Seth Gilbert
Software Analyst, UBS

You know, the marketplace model is actually one of my favorite business models as a tech investor. So I'm happy to have you guys on stage. You mentioned one thing I wanted to follow up on, which was productivity. You threw out a few numbers. You gave us a tangible example. And I believe you guys define productivity as shipments per person. So how are you thinking about the uplift from generative AI? And then as you move to agentic AI, I assume there's a greater uplift.

Dave Bozeman
CEO, C.H. Robinson

There is, and I'm gonna have the team really jump in here. We define it as shipments per person per day in our freight brokerage business, and files per person per month in our global forwarding business. We're a bit unique in that we have both, right? We move goods from China to North Carolina every day in our global forwarding business, and then domestically in North America we broker freight at scale. Generative AI has obviously played extremely strong within our North American surface transportation business and has benefited that productivity. As we start to move to agentic AI, we're super excited, because it starts to get at data that's off system and apply reasoning to it, which can have extremely strong benefits for us.

But I'll ask Arun and Damon to weigh in on that.

Damon Lee
CFO, C.H. Robinson

Yeah. Arun, why don't you talk about kind of the differences between gen and agentic in our company?

Arun Rajan
Chief Strategy and Innovation Officer, C.H. Robinson

That's a great point. So, you know, Dave talked about 40% productivity improvements. That's been on the back of traditional software engineering, classic machine learning, gen AI, and agentic AI. So let me walk through a little bit how gen AI has worked. That quoting example that Dave gave is a really good one. The first pass at it was gen AI. Let's say I'm your account manager, and a customer calls me and says, "Hey Arun, can you quote me Chicago to Dallas like you did last week?" Now imagine, there's a lot of context missing there which a human can make sense of, because I've worked with you before. It'd be like, "Okay, I know Seth's from Coca-Cola. He wants this thing moved from Chicago to Dallas.

It's from this warehouse to that warehouse. Here's the commodity. Here's my pricing strategy for Coca-Cola. This is how I respond." Right? The way we set gen AI up was: okay, you gotta parse that email and interpret it, and we were just returning quotes. Now with agentic AI, what we're saying is: effectively, you build the entire context that a human knows for the agent and give it access to all the tools that the human uses, right?

Which is: "Oh, I go look up order history to see what happened last week — which warehouse to which warehouse did the goods move, what the commodity is." So you give it the context and then give it access to the tools that the human would use. And now you start to see sort of exponential productivity. Dave talked about our NAS business and our global forwarding business, right? Our NAS business is a one-shot response: an email comes in, you interpret it, call our dynamic pricing engine, and you respond. Our global forwarding business is a little more complicated, because imagine a China-to-L.A. routing option, right? You might get 50 responses, kind of like the responses you get on Travelocity or Booking.com.

Now you gotta reason which one to pick, right? So think about all these scenarios where a human is able to reason because they have context. And what we've built is the appropriate context for the agent to take what gen AI's interpreted and then give it the reasoning and the context and the tools to go actually respond.

Damon Lee
CFO, C.H. Robinson

Yeah. And I'll just put a bow on what both Dave and Arun said. In the world of AI, it's hard to find anybody — person or company — that doesn't say today they're using AI, right? It's almost synonymous with their mission statement. What I would say is different about us: you mentioned the productivity statistics — greater than 40% productivity across the enterprise since the end of 2022. That's without any footnotes. That's without any exclusions, right? That's an enterprise number, point blank. So that's pure productivity. And what we like to say at C.H. Robinson is, look, the ultimate scorecard on whether you're getting value for your AI investment is your P&L. It is your earnings. And for us, productivity gets probably the most attention, but we're getting revenue growth benefits from AI use.

We're getting gross margin expansion from AI use, and we're getting operating margin expansion from AI use. And as Dave mentioned earlier, AI has been a big piece of our transformation, but our lean culture and our operating model are equally important, right? And so when we get asked how we're differentiating ourselves versus others in our industry and others in the industrial space: it's the combination of the lean principles, the operating model, and our cutting-edge technology — what we call lean AI. We think the power of those two together is exponential versus them being separate.

Seth Gilbert
Software Analyst, UBS

Got it. In the software world, we have a metric called RPO — Remaining Performance Obligations. Most, if not all, software companies report it, and it's a measure of backlog. So as some of these larger AI deals come in, they might not be hitting revenue quite yet, but we can see the backlog growth, and that anticipates future revenue growth. I'm wondering if for you guys it makes sense to define some sort of AI KPI, or is the business changing too much? Would investors just see it in the revenue growth and the OpEx savings?

Damon Lee
CFO, C.H. Robinson

Yeah. Well, what I would say is, externally, I think we just talked about it, right? The real key metrics on our usage of AI, and whether it's benefiting the company, really are the productivity metrics we share and the P&L metrics that we report every quarter. I always joke that the easy job for Dave and me is that I don't have to convince you AI is driving benefit in C.H. Robinson. All you have to do is look at our results for the last two years, right? And you can see bright-line results in revenue growth and margin expansion and earnings growth in what's almost a four-year freight recession.

So I think the true metrics for us are productivity, outgrowth of our end markets, and then ultimately our earnings performance as dictated by the P&L performance. What I would say is internally, certainly we have a catalog of every process that exists between quote and receiving cash from our customers, right? So that's thousands of processes, and Arun's team knows working with the businesses, what's the human intensity, what's the human touch for each one of those processes, right? And then what's the cost to convert that, you know, manual process today to an automated process tomorrow? So that's an internal tracking mechanism we do to understand the funnel of opportunities that still exist for C.H. Robinson. And look, we're pleased to say we talk about our transformation as being in the early innings, right?

Call that the third inning of a nine-inning baseball game. Certainly on the operating model, we call it the third inning. On the tech deployment side, it's probably the third inning for NAS, because that was the first application of our tech stack. Global forwarding is now starting to be indexed with that tech stack. But when you think about what I said earlier, there are thousands of processes that are available for automation. We've only automated a fraction of those processes today for NAS, and even less for global forwarding. So even though we've had great results the last two years, we truly believe the next two years and beyond are gonna be more exciting for C.H. Robinson than the last two.

Dave Bozeman
CEO, C.H. Robinson

I think, Seth, for this room, for investors, it's really important, coming off of what Damon said. I've been doing lean for 30 years, from working at Harley to Caterpillar to Ford and Amazon. But we're also building sustainability. We're builders at heart, and we're gonna continue to do that. We're gonna build things that last. We go into these solutions with a high P-level — you know, P90, P95 confidence — that they're gonna be sustainable. At our scale, when markets shift, we win at the bottom and we win at the top. It's important for investors to know just how sustainable these solutions are.

We're bucking the trend in an industry that said, "Hey, you guys shouldn't be able to grow and expand margins." We're doing that, and we will continue to do that, because of our operating model and our technology and our people.

Seth Gilbert
Software Analyst, UBS

Got it. Just a reminder: if you have questions, we'll take questions at the end — there should be instructions at your seat. So type it in and I'll read it up on stage. Maybe a question for you, Arun. This is a tech conference, so we'd love to dive into your tech stack a little bit. Do you own data centers? Do you purchase compute from the major hyperscalers — Amazon, Microsoft, Google, Oracle? And then as a follow-up: NVIDIA chips are all the rage right now, and a lot of customers are having trouble getting NVIDIA chips. So I'd be curious, whether or not you own data centers, do you need to run on the latest NVIDIA chips, or is that not necessary for your workloads?

Arun Rajan
Chief Strategy and Innovation Officer, C.H. Robinson

No, great questions. So first I'll start with: Microsoft's our partner, and we use Azure as our cloud partner. The enterprise-grade LLM that Microsoft offers, which is essentially a version of ChatGPT, is our core, primary provider. But that said, we have optionality, and we do use Gemini in some cases and Claude in other cases, right? So the way we think about it is that we are abstracted from the LLMs. We can choose whichever LLM is fit for purpose and gives us the best price-performance ratio, right? That's step one. But largely, a lot of our workloads do run on Azure as our primary platform.

Then, in terms of the question around chips: ultimately what we look at is token usage for any given problem that we're trying to solve or automate. And in that context, what the LLM provider does underneath — whether it's Azure or AWS that's hosting Claude — doesn't matter to us. The price-performance ratio we look at is token cost to get a certain workload done. So we're abstracted from the chips, because we don't really care what the LLMs use underneath, so long as we get the price-performance ratio from the LLM. And we'll switch between LLM providers, or between versions of an LLM, to get the price-performance ratio that we seek.

Seth Gilbert
Software Analyst, UBS

Got it. There was a follow-up I had. It's tough to keep track, I think for me and probably for investors, of all the new models coming out. We had Anthropic up on stage yesterday. It's really tough to keep track. I've probably got a few updates in my email that I need to get through after this week, so you know, how easy is it for you to switch between model providers when a new model comes out? Is it cost-prohibitive? Is it easy? Is it, you know, does it cause workflow disruption? Maybe you can comment a little bit on that.

Arun Rajan
Chief Strategy and Innovation Officer, C.H. Robinson

Yeah, it's a great question. Right from the beginning, we design and architect our systems with an abstraction layer, or gateway, that makes it very simple to switch between different LLM providers. And we have an R&D and innovation team whose whole job is to look at the performance of different LLMs. Because our core technology stack is abstracted from the LLMs, and we have a regression-testing framework, when we switch to a different model we can backtest and make sure we're getting the repeatability and predictability out of the model that we need. Because, you can imagine, we can't have it hallucinate — we need a certain amount of predictability based on the context that we've provided the LLM.

So we need that repeatability. A, we're abstracted from the LLMs, and B, we have a test harness that allows us to backtest when we switch models, without disrupting any of the upstream workflows.

Damon Lee
CFO, C.H. Robinson

The great thing about where we sit in the AI ecosystem is, as Arun just walked through, we can be agnostic to the LLM, right? We can pick the LLM that best fits the problem we're trying to solve. And we don't have to have the latest generation of the LLM to get the benefit we need. So we can use a generation-old or two-generation-old LLM, getting optimal cost while getting the same level of business performance that we desire. You know, we often say, look, we're in the sweet spot of the AI ecosystem, right? If you think about the hundreds of billions of dollars — the soon-to-be trillions of dollars — that's being spent to build capacity and processing capability, we're the beneficiary of that.

We don't have to spend any of that money to get the ultimate benefit of that cost curve at scale. You know, Arun shares a statistic a lot: our token usage is up 10X year over year, and our cost is down 25%, right? And so when the question is posed to the broader market — who is benefiting from the hundreds of billions of dollars being spent in the AI universe? — we raise our hand and say the answer is C.H. Robinson.
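A quick back-of-envelope on those two figures — usage up 10X while total spend fell 25% — shows how sharply the implied per-token cost has dropped. This is rough arithmetic from the quoted numbers, not a company-reported metric:

```python
# Normalize last year's token usage and total spend to 1.0 each.
usage_growth = 10.0   # token usage up 10X year over year
cost_change = 0.75    # total spend down 25%

per_token_then = 1.0 / 1.0
per_token_now = cost_change / usage_growth   # 0.75 / 10 = 0.075
drop = 1 - per_token_now / per_token_then    # 0.925

print(f"implied per-token cost fell {drop:.1%}")  # → 92.5%
```

In other words, each token of work costs roughly a thirteenth of what it did a year earlier, which is the "end of the bullwhip" benefit Bozeman describes next.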

Dave Bozeman
CEO, C.H. Robinson

That's been just in the last year — that 10X increase with the cost going down. We're essentially at the end of the bullwhip, and when that competition war happens, we are the beneficiary at the end of the day, Seth.

Arun Rajan
Chief Strategy and Innovation Officer, C.H. Robinson

It's worth adding some color to that.

Dave Bozeman
CEO, C.H. Robinson

Yeah.

Arun Rajan
Chief Strategy and Innovation Officer, C.H. Robinson

You know, there are two vectors driving that price performance — 10X usage with cost going down. Obviously, by picking the right LLM for the right job, we get benefit, and the LLMs competing amongst themselves gives us benefit. So that's great. But equally, you gotta engineer things right and architect and design things right. You remember when the cloud happened, and engineers were deploying things to the cloud because it was elastic compute — hey, compute's free — and you can have runaway costs when you don't have a disciplined way of using a new technology. That's what happened with the cloud.

The same thing happens here too: you need a disciplined architecture and design. You can't create a monolithic agent with such heavy prompting that it's super expensive to run — and it'll probably hallucinate, right? So this notion of engineering the agents correctly, to manage our costs, is equally important as the competitive environment.

Seth Gilbert
Software Analyst, UBS

Let's actually shift to the competitive environment. I would assume that you're not the only company in your space that's using AI. What sets you apart? Do you have a bigger engineering team, more historical data, a differentiated approach? Maybe you could talk a little bit about the competitive position.

Damon Lee
CFO, C.H. Robinson

All the above. Let's go right there.

Dave Bozeman
CEO, C.H. Robinson

Yeah. Just to start it — and it is all of the above — but to put it in context: what separates us is, number one, that domain expertise, which Arun has been talking about. These are engineers that grew up in the business, that actually know freight. I can't tell you just how important that is. Building our own internal systems, which Arun talked about — that is huge when it comes to ideation, discovery, experimentation, and velocity and speed. And our operating model, how we operate, really drives a cadence. It allows us to discover and go fast — I think faster than the competition — and discover a number of different things that set us apart.

And then our logisticians, we think, are the best logisticians in the world. So we're not just a freight broker, we're a solutions provider for customers, and we think that separates us out. And then that scale and that speed — there are a number of different moats that Damon always talks about. We have deeper, wider moats, and it's not just one. Competition really has to cross three to five moats to try to keep up with Robinson. And I can tell you, as we sit here today, 12 months from now we'll have created something that hasn't been created today, because we're always trying to build and ideate. So you really have to keep up with us on where we're going.

Arun Rajan
Chief Strategy and Innovation Officer, C.H. Robinson

Clicking into the build part of what Dave said: I think the industry is dominated by people who buy and cobble things together. If you look at our approach, C.H. Robinson has always built its own software. So it's seen generations of: hey, I built this monolithic thing two decades ago, I've evolved it, it's evolved to a service-oriented architecture and then a microservices architecture. All that stuff has been built. So you've got the fundamentals of infrastructure, security, and privacy, and we've got an engineering team that builds fit-for-purpose things.

So those engineering teams effectively give us a builder culture versus a buy-and-integrate culture. In my mind — you know, I grew up in tech companies: Amazon, Travelocity, Zappos, companies like that — that's a distinctly different culture than a traditional IT culture, where you buy and integrate software, right? And there's a lot of competition that does it that way. So you can imagine: you build the software, and once it's built, once our fixed costs are covered, the marginal cost of serving any additional volume is near zero, right? To Dave's point, the scalability of our model is super important.

Owning the technology, and building it such that we have a scalable model, is super important versus a buy approach, where you'd have to, A, cobble together multiple solutions, and B, pay by the drink because you've got multiple SaaS providers charging you — which is not a scalable model.

Damon Lee
CFO, C.H. Robinson

And the last thing I'll just double-click on is the speed, right? If you're using third-party providers, you're using multiple third-party providers; your ERP is a third-party-provided ERP. Ours is custom-built. The ability to adopt a customized, very fit-for-purpose solution at pace is very difficult that way, even leaving the cost aside for a second. Because we control our system of record — it's Navisphere, we built it, we control the code — and, as Arun mentioned, we own our application layer, we build all of our own agents, and we integrate those agents into Navisphere, we control the speed, right? And I think that speed is a clear differentiator, because once we ideate an opportunity, we operationalize it, we scale it, we control that entire timeline, and we do it really quick.

If you're depending on third-party providers, you're not going to get near the timeline and the pace at which C.H. Robinson is able to deliver these solutions that are driving revenue growth and productivity. Then on top of that, as Arun mentioned, because we develop our own tech, our marginal cost of ownership once an agent is built is very close to zero — it's just token cost, right? — versus having to pay by the drink perpetually, like a lot of our competitors that use an outsourced model. So as Dave mentioned earlier, when we talk about the competitive landscape, it's not just crossing one moat. We feel like the competition has to cross four, five, six, seven moats just to get to where we're at today.

And then two years from now, Robinson will be in a completely different position than we are today, right? And so our operating model drives continuous improvement, right? Arun doesn't get to plateau on his technology evolution, right? The expectation is every day, every week, every month, the tech's getting better, the company's getting better. We think that mindset really differentiates us from the competition.

Seth Gilbert
Software Analyst, UBS

Got it. Arun, you're in the hot seat. We've got a few questions. First question — I think we answered it a little bit, but maybe you can touch on it a bit more. They'd love to hear about what foundational models you're using. Do you switch between models? Do you use any open-source models? We touched on it a little bit, but maybe you could expand.

Arun Rajan
Chief Strategy and Innovation Officer, C.H. Robinson

Yeah. We're not using open-source models. We use enterprise-grade models — ChatGPT via Microsoft, Google, and Anthropic. Those are the three that we use. What was the other question?

Seth Gilbert
Software Analyst, UBS

Do you switch between models? We touched on it a little bit, but maybe you could give a tangible example.

Arun Rajan
Chief Strategy and Innovation Officer, C.H. Robinson

Yeah, we switch between models all the time. So let's take that quoting example that Dave gave, right? Say a new model comes out, and our R&D team says, "Hey, we think you might be able to get a better price-performance ratio out of this new model." They go to the quoting engineering team and say, "Okay, use this model." It's pretty easy because of the way we're abstracted. There's a gateway pattern, which is a well-known pattern in software engineering, where you always go through a single layer to talk to any LLM provider, right?

So let's say I'm talking to ChatGPT today, and Claude came out with a new model, and the R&D team says, "Hey, you ought to check out Claude to see if you get better price performance." They go to the engineering team, and the engineering team can pretty quickly run a test: okay, I'm going to switch over, backtest the last six months' worth of requests that I processed through the old LLM, run them through Claude, and see if I get a better price-performance ratio. Because of this gateway pattern, and because of all the testing harnesses built in, we can switch models and backtest them pretty seamlessly and see if it's worth switching to a different model. So we switch all the time.
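The backtest Rajan sketches could look roughly like this: replay historical requests through the candidate model and compare answer agreement and token cost against the incumbent before cutting over. The stub models, per-token rates, and request strings below are all hypothetical, included only to show the shape of the comparison:

```python
def backtest(requests, incumbent, candidate, rate_incumbent, rate_candidate):
    """Replay requests through both models; return agreement rate and cost.

    Each model is a callable returning (answer, tokens_used).
    Rates are dollars per 1,000 tokens (hypothetical figures).
    """
    agree = 0
    cost_inc = cost_cand = 0.0
    for req in requests:
        ans_a, tok_a = incumbent(req)
        ans_b, tok_b = candidate(req)
        agree += ans_a == ans_b
        cost_inc += tok_a / 1000 * rate_incumbent
        cost_cand += tok_b / 1000 * rate_candidate
    return {"agreement": agree / len(requests),
            "incumbent_cost": cost_inc,
            "candidate_cost": cost_cand}

# Stub models: identical answers, candidate uses fewer tokens at a lower rate.
incumbent = lambda req: (req.upper(), 500)
candidate = lambda req: (req.upper(), 400)

report = backtest(["quote chi to dal", "quote lax to shanghai"],
                  incumbent, candidate,
                  rate_incumbent=0.03, rate_candidate=0.02)
print(report)
```

If agreement stays high enough for the workload's predictability requirements and the candidate's cost is lower, the gateway configuration is flipped; otherwise the switch is skipped — which is also how falling back to an older, cheaper model version works.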

And as Dave or Damon might have said, frequently we'll actually go back to an older version of a model, because we're finding the latest is costing us way more than we need and we don't actually need it — we get a better price-performance ratio from the older version. So absolutely, we do that all the time, and we're intentionally architected and designed to enable the optionality to switch between models.

Seth Gilbert
Software Analyst, UBS

Got it. Maybe the next one we have — I think I know the genesis of this one — is a question about which database vendors you use. The questioner is probably trying to figure out if you're increasing your usage, maybe which players to be more bullish on. So maybe you could spend a minute on that.

Arun Rajan
Chief Strategy and Innovation Officer, C.H. Robinson

Yeah. I mean, we use Microsoft SQL Server, and we use Snowflake as our underlying data platform and data warehouse. But again, the way we work, a lot of the heavy lifting is already built. Take those two vendors: if we're serving high-volume traffic, it's mostly cached, so we're not going to keep increasing our licenses on SQL Server. Equally, we might use Snowflake, but a lot of the compute and number crunching is happening in a machine learning model or somewhere outside of Snowflake, right? The way we engineer and architect ourselves, there's not going to be runaway cost with the underlying database vendors and data platforms.

Seth Gilbert
Software Analyst, UBS

Got it. We're just about out of time, but I want to give you the chance to close it out. You have an asset-light model, a marketplace model. You're using AI. You're showing tangible benefits. It's probably not a name that was on a lot of tech-focused radars, but maybe you could just give a few parting words as to why it should be.

Dave Bozeman
CEO, C.H. Robinson

Yeah. A few parting words would be this. Number one, Damon and Arun and I would say the next two years at Robinson are going to be more exciting than the last two. And why do we feel that? Because we know who we are. We are in the super early innings of this journey. We love the results — they've been demonstrable results — and that's only going to continue because of the things we talked about today. Our operating model is a differentiator. Our technology is a differentiator. Our people are a differentiator. We have solved for the operational layer, and we continue to solve; there's more to do. But when we look at the customer-facing side, there's just so much more ideation and discovery this company is going to do that brings even more bottom-line results.

So to me, this is an exciting play, and that's why I call it an undervalued AI industrial play from where we sit.

Seth Gilbert
Software Analyst, UBS

Well said. I think we'll end it there. Thanks for joining us.

Dave Bozeman
CEO, C.H. Robinson

All right. Seth, thanks for having me. Thank you.
