And then we'll open up to audience questions. And if you have a question, please raise your hand, and the mic will come to you, and you can ask your question. Welcome, Sanjay.
Thank you. Thanks for having me.
Yeah. Sanjay, if I were to rewind, maybe back to January, how were you guys thinking about the SoC test market? You were being kind of more conservative and more guarded on that, but then you started to improve your outlook for this year. Can you walk us through what changed from January up until now?
Sure. From a test market perspective, overall ATE, we break it up into SoC and memory. You asked about SoC, so I'll just get memory out of the way: that's $1.2-$1.3 billion. Our TAM assumption for the SoC market, and I'll go through the segments in a bit, is $3.6-$4.2 billion, and it's made up of several different segments we track. So in the $3.6-$4.2 billion, a midpoint of $3.9 billion in the SoC market, I'll decompose our current estimate, and I'll talk a little bit about how it's changed through the year.
In the compute segment, at the midpoint, we see that at about $1.6 billion. In mobility, about $0.8 billion. In automotive, $0.5 billion. In industrial, $0.3 billion, and then service at $0.7 billion. So that sums to $3.9 billion. And what we've seen throughout the year, again at the midpoints, is compute getting a little bit stronger, and mobility, auto, and industrial getting a little bit weaker. You know, up a couple hundred million in compute and down maybe one to two hundred million across the others; there's some rounding in there.
And so we, you know, obviously, AI is a big tailwind in our compute estimate, as well as, and you know, geographically, so there's a strong China component as well.
Great. And then, how do you see the second half?
So my estimates were for the full year, but we see continued momentum in the compute market, really tailwinds through data centers, driven by AI. In mobility, we continue to see weakness, or softness. We've talked a little bit about utilization of testers at the OSATs, and what we're seeing in the marketplace is that, given the softness in the mobile market, testers are being upgraded or altered to enable high-performance compute testing. In automotive, in the first half, we saw strength from what we call VIPs: not very important persons, but vertically integrated producers. We used to call them hyperscalers.
So those companies that are designing custom chips, either by themselves or through a design house like Broadcom, Marvell, or Samsung, we're seeing strength there in the first half. But in the back half of the year in automotive, we see softness really tied to the end market, and similar in the back half for industrial. And what we are seeing strength in is memory: continued capacity rollout at the Chinese fabs, significant HBM tailwinds, and, from a market perspective, DRAM wafer sort coming back at one of the large Korean providers.
Right. Let's peel the onion on some of these end markets, starting with the compute market. You've increased your compute TAM this year. Makes sense. AI has been somewhat of a tailwind. But for next year, I believe you guys are expecting a similar level to this year. First of all, correct me if I'm wrong, and then, what are the puts and takes of assuming compute is more flat? Is that a reflection of a certain end market like China or otherwise?
Yeah, I think this question came up on our earnings call, and the context is that, you know, we're still in the early stages of analyzing next year. We go through our strategic planning process that'll culminate in November, and we'll then use that as the basis of our earnings model update in January. But to the further point of why we see it that way: our view back in July was a little bit more balanced. Of course, there's tremendous tailwinds with AI drivers, you know, AI accelerators, network switching in data centers, and so we do see that as a tailwind. We see that tailwind continuing into 2026 and driving the market size up.
Where we were a little bit cautious was the China component of the compute market, where our models are trying to analyze, hey, the wafer outs, what is really happening in the marketplace tied to the investment of test equipment going into the market. And it was hard for us to rationalize the significance of that market versus the wafer outs. So as you calibrate the market, there are some real signs of tailwinds driving it up, but we treated China as a headwind. We were unsure about why the accelerated purchases in China were happening, and if that market continues, then that's great.
And I'll just remind the audience, and, you know, we shared this on the call: our exposure to that specific customer and segment is, you know, tied to the regulations. We're not participating from a compute standpoint. And our overall revenue in China year to date is about 10% of our revenue. If you double-click on that, about half of that is from indigenous customers, and half is from multinationals. So our exposure to that market is fairly low.
Yeah. I'm gonna come back to China. But just staying on the compute topic, it's no secret your biggest competitor is the incumbent on GPUs and has benefited the last few years. But you talked about, you know, vertically integrated producers, or VIPs; people call them titans or hyperscalers, whatever you wanna call them. But you are very well positioned in some of those custom ASIC programs. And again, to everyone's knowledge, most of these custom ASIC chips are being manufactured by one large foundry in Taiwan, where you're very well positioned as well. So can you help us understand, as we look at the proliferation of these custom chips, you know, where are you? Do you have 50% market share?
What is your positioning, and are you also winning the networking piece associated with this compute?
Yeah. I'll start with the last question. We have a very strong position in networking, and we expect to continue that. And as the network switches within the data centers go from one to many and proliferate significantly, we are gonna enjoy that tailwind 'cause we have the large designs, and as the market expands significantly, we'll be participating in that very well. But from a market dynamic perspective, I'd just like to provide a little bit of context for the room. You know, back in 2019, the market size for compute in the SoC market was about $600 million, give or take. This year, I talked about a market size of $1.6 billion. So significant growth, obviously.
But, you know, our incumbency wasn't very large, as you noted, in key merchant silicon providers. And so as that market grew, our share overall in SoC declined. Several years ago, we started a strategy, 'cause we recognized that the market dynamic was changing. Organizations were starting to develop their own internal chip teams, building chips themselves or designing chips through chip design houses like Samsung, Broadcom, or Marvell. And they were doing that because inherently they could create their own IP and inject it into a chip, so that they would have differentiation, performance differentiation, in their market. And so several years ago, we pivoted to a strategy where we focused on hyperscalers; now we call them vertically integrated producers.
That market size in the compute segment, we believe, is a couple hundred million this year out of the $1.6 billion in compute. We view that in 2026, that number is gonna be $500-$600 million-ish. Why is that important? In compute and mobility, incumbency is very critical, and you typically don't see share shifts between us and our number one competitor until there's a discontinuity of technology or a new customer. With new customers designing their own custom ASICs, we are now competing against our competitor for these new design wins, and we are winning our fair share. When I say our fair share, think 50% plus of these design wins.
We're not only just winning the design win. We're winning the designs that actually make a difference and are driving volumes. And so we track not only the design wins, but the number of testers testing VIP chips in the marketplace. And by our counts, we see the chips being tested in the marketplace at north of 50% on our testers. And so while this market is growing, there's a dynamic of, arguably, 30% or a third of the market going to custom silicon. So we're coming from a situation of lower market share in this segment, through competitive shootouts, to a position of higher market share in this growing market. So we feel pretty good about the strategic pivot.
We feel good about our execution and our opportunity that's ahead of us in compute. There's, just for clarification, you know, there's VIPs in compute, there's VIPs in auto as well, or other segments, but, and when I talked about kind of $200 million to $500-$600 million, that was just tied to the compute segment.
All right, so you're sort of the Broadcom of the compute world. On the compute side, the question I get asked a lot, and if you were to wear your fabless customer hat, you know, you were a fabless customer in the past: what prevents fabless companies from switching equipment suppliers at the next generation platform, like Blackwell or, you know, MI350X? What is preventing them from qualifying you guys?
... Yeah, so, in SoC there's a very strong incumbency, as compared to memory, where at every speed interface change there's a competitive shootout over who gets the initial designs. And in memory, both us and our competitor can test the same product lines at a customer. So there's this fungibility, the ability for the customer to use our testers or our competitor's. In SoC, it's incredibly sticky, and the barriers to change are very, very high. You'd need, like, a 30%-40% economic rationale. Like, recall that at a merchant silicon provider, you know, their cost of test is less than 10% of the silicon, their variable cost. And, you know, they've got design protocols, they've got engineering teams, and they're on a cadence.
Every half year, every year, the train's leaving the station with the next design, and they're under immense pressure to hit schedule and execute on performance. And the less they deviate from that chip design, the better it is. The more they reuse, the more efficient it is from a design perspective. So these are engineering teams that are working sixty, seventy hours a week under tremendous duress to innovate and execute and tape out a new chip and create revenue. And so in that environment, the test protocols and the test specification are really made up front in the design stage. And these cadences, it's like a waterfall, boom, boom, they just keep happening. It's very challenging to inflect and change unless there is a new technology.
Millimeter wave or different technologies that say, "Hey, there's a jump ball." And when there's a jump ball or an opportunity, that's when we really compete and make inroads. And what we found is, because of our execution, once we enter a customer, we try to delight them with the execution, and they tend to come back for more and more.
Let's switch to the mobility market. You talked about an $800 million mobility TAM this year. It's probably at a trough level, given you were at two billion a few years back. There is quite a bit of optimism around, you know, AI driving an upgrade cycle in phones next year. Can you talk about how you're looking at the mobility market recovery, and particularly in terms of the content growth for you guys, if there are any, you know, major changes?
Sure. So let me set some context. Back in 2020 and 2021, I call it the sugar rush, everybody upgraded their phone, and I'm generalizing, but there was this huge upgrade cycle with smartphones and PCs, and that had an impact on the consumer. Three or four years later, we're now coming up to another opportunity for a key upgrade cycle. In that time, as supply chains were at 52 weeks plus, there was a lot of inventory in chips. There was a lot of inventory in smartphones that actually helped prolong this downturn in the mobile market. And now, you know, looking into 2025, that's gonna be behind us. The second thing, from a supply chain perspective, is that there was underutilized capacity of testers in the mobile market.
OSATs who test both mobile and compute are repurposing mobile testers into compute, to test high-performance compute chips with some slight upgrades, what have you, so there are testers being taken off the market for mobile and put to testing compute. And so that's kind of the supply environment. Now, on the demand side, as you alluded to from an AI perspective, edge AI, you know, I'm very curious to see the applications and the uniqueness of these applications to drive an upgrade cycle. And what I would say is that those applications require certain functionality: an NPU as well as higher compute, GPU, a lot of different functionalities that are on these smartphone chips. And, you know, there is a limit to what merchant silicon providers can provide before their chips become too costly, and it's under 100 square millimeters.
But think about it, like, as a rule of thumb, a hundred square millimeters, maybe it's a little bit lower. But then, when they have new functionality, new NPUs or more things they want to throw on it, they really would like to get more transistors per square millimeter. So you're seeing gate-all-around designs that are densifying the transistors per square millimeter. You're seeing the encouragement of going to three nanometer as well as two nanometer, 'cause it allows the merchant silicon providers to add more content on the device while maintaining their margins. So this end-market drive of edge AI is requiring more stuff to be in the hardware, which makes the chips more complex. And when I say complexity, it's tied to the number, or the percentage, of incremental transistors per square millimeter.
And so I hope this year, by the way, is the trough of the mobile test market. But, you know, from a supply chain perspective, with higher utilization and testers being moved to test high-performance compute, incremental demand is going to drive an incremental test market for us. So we're well positioned to see growth in 2025, really driven by end demand and because the supply side has inherently been pruned.
... All right. And just staying on this topic, if your large mobility customer that has historically been more than 10% of sales decides to build in-house inference chips for their in-house servers for AI in the future, hypothetically, will that opportunity go towards the compute market? And will that change your view on compute being flattish, or will that go towards the mobility side because of the cross-play you talked about?
Yeah, it depends kind of where it would reside, but I'd suspect it would go to compute. The good news is it will be potentially incremental. But by the way, we don't really talk about our individual customers, but generically, if there is you know a customer that goes and develops a compute-based chip that we test, we would put it in compute, and it would be a tailwind.
Moving to the memory market, I think there's still a bit of confusion in terms of, you know, where Teradyne plays versus where your Japanese competitor plays. You guys both have, you know, I assume, 50/50 kind of market share on HBM. But can you help us understand where your testers are being used to qualify HBM versus your competitor's?
Sure. Just some market setting first, and I'll get into the specifics. So in 2023, we viewed the market as just over $100 million, where we participated with the leading HBM supplier at wafer sort. As that market continues to grow, we view 2024's market size as just over $500 million. So a 5-to-1, you know, kind of growth. And other key memory suppliers are engaging. We launched with our initial design, but we weren't qualified from, you know, a stacked-die performance perspective; we didn't have the capability until this year. So I would say that our share this year, because of our lack of qualification, is a little bit lower than the 50/50 you commented on.
However, we are very well positioned to continue to grow in the HBM market, which we see as being quite significant. We haven't done all of our estimates, but assume it's large, and we do see share gains coming into next year as the buying continues. Okay.
And then just on the automation, the robotics side, I still get a lot of questions from investors: "Why are you still in this market?" You know, breaking even on profitability, what is the strategic rationale? You know, according to our sum of the parts, the stock is assigning very little value to this business. So help us understand why, you know, you're still in this market.
Sure. I think we need to do a much better job because I do the sum of parts analysis, and I get a different number than common thinking. So first of all, why are we in robotics? The opportunity. The opportunity is hundreds of billions of dollars in bathtub TAM, not in annual, but in bathtub TAM. We are sub 5% penetrated, and we have a market-leading position with our collaborative robots, which, as a rule of thumb, is about three-quarters of our robotics revenue. And we're one of five or six players in AMRs, Autonomous Mobile Robots, with about 5% market share. So first and foremost, the opportunity is there. And when you talk about break-even profitability, in 2023, we lost money. Under the covers, UR, or collaborative robots, is profitable.
MiR, or autonomous mobile robots, is still an engineering investment play right now, where it's losing money. And we believe, and we have strong conviction, that we'll be profitable next year, but also, you know, as we continue to have strength and penetrate different applications and address different SAMs in UR, the MiR investment will convert into a profitable, going-concern entity. The competitors there are much smaller companies that aren't tied to a multi-billion-dollar company. We know what quality looks like. We know how to scale operationally. We have kind of a big business that is helping these entrepreneurial businesses significantly harden their product. We're going through a go-to-market change, where we're focused on large accounts and OEMs. It's taking us a little bit longer.
I would say that in large accounts with our MiR business, we are doing very well, where large customers with automation teams are specifically requesting to work with us to help solve their problems because of our reliability and our capability. And so not only are we sub-5% penetrated and growing in double digits over the midterm, we believe that it will start to show through. And by the way, we have above-average gross margins, above our corporate average gross margins, in robotics. This can be a profitable company, but we're choosing to invest because we believe in this huge market to address. And what you should expect to see over the midterm is clear profitability and free cash flows.
If you really study our company, we have a very disciplined approach to where we put our investment dollars to create very good gross margins and very good operating profit. It's true we're investing in this, but we're investing because there is going to be continued very strong growth. Like, right now, the large competitors in robotics and automation are down double digits year over year. We are up double digits year to date in a very challenging market. And when the market returns and we continue to address incremental SAMs, there will be more tailwinds on the top line, and you should see improvement on the bottom line as MiR grows to scale.
So just staying on the topic, you know, every time Jensen gets on his earnings call or some GTC event, he mentions Teradyne and the opportunity in physical AI and robotics. Can you just talk about your partnership with NVIDIA? Have you seen an impact from those announcements?
Yeah. So we have kind of a twofold relationship. First, we have a product platform that is a global market leader in collaborative robots. There we have AI applications and trainings that are incremental to our platform. Because we have a global footprint, there are many people kind of using that to help design and address various applications. At MiR, in our AMRs, we actually utilize the NVIDIA chip, and we really have enabled it to train. So let me give you an example. Our pallet jack that was announced in Q1 comes out in Q4. I had the fortune of working at a warehouse in high school and university; I didn't always do this job back in the day.
And so what I noticed in these warehouses is that the pallets are sometimes broken, they're not where they're supposed to be, there's plastic covering things. And our pallet jack has had over a million images in training, so that it knows, like, "Oh, there's a piece of wood that's broken. I've got to adjust my forklift. Oh, there's a piece of plastic blocking the wood in the pallet. I've got to adjust my position. Hey, this thing isn't at these geo coordinates, it's slightly off, it's turned." And think about that last bit. In telecom, I used to call it the last mile, but now think about the last couple of centimeters, and the ability for it to be trained and for AI to help guide it in and through.
So you're gonna see that in both the implementation and the ease of implementation of an application, but also the use cases.
Great. Let me pause here and see if any questions in the audience. Yes, on the back.
Hi, Sanjay. I was wondering if you could talk a little more about the competitive dynamics in HBM, and maybe the upgrade cycle dynamics, given that it seems like your tester might work for future HBM generations when your largest competitor's might not.
Yeah, great question. What's interesting about our latest HBM solution, our Magnum 7, is that it has two very strong competitive differentiation points versus the competition. First, very strong performance on throughput, which enables a lower cost of test for end customers. And secondly, our new tester enables HBM3E as well as forward compatibility to HBM4. And why that's quite unique: in SoC, testers kind of never obsolete themselves. They're around for 10, 20 years. In memory, when you have a new speed interface, when you go from LPDDR3 to LPDDR4, you need a new tester to test LPDDR4.
And so we believe, with this competitive differentiation on performance, enabling lower cost of test, and the forward compatibility, it puts us in a very unique position. You know, just think about it: a customer's making a choice, and if we have the ability to test HBM3E and HBM4, and our competitor can't, that means we have the ability to test what they have today and tomorrow. It makes the tester more fungible, depending on which HBM flavor they're building, 3E or 4. And that's important during transition time periods. It makes it a much more efficient deployment of capital for the customer.
Questions? Yep.
We could talk about the upcoming basketball season, if you'd like, or hockey season.
You can do that at the Goldman Sachs conference next week.
Are there more hockey fans there, or Celtics fans?
So just a quick question, double-clicking on the ASIC, I guess the VIP, segment of your business. We have seen on the merchant side, NVIDIA particularly, accelerating the cadence. So the question is, I guess, how do the VIPs or the hyperscalers respond? The first part of the question is: are you seeing an increased cadence on that side of the business, and what would that mean? I think earlier you discussed that on the merchant side it's more sticky because you have pretty much a one-year cadence. If that's happening on the ASIC side as well, what would that mean for Teradyne?
Yeah, I think when you see very fast innovation in a market, you're going to see the pace of innovation accelerate. And you can think about, you know, the PC industry or the smartphone industry as it's evolved. And when innovation occurs, the faster you are with the next best thing, the greater the market share gain and the greater the profitability. So it doesn't surprise me, 'cause that's a pattern that's occurred over the years. I would say that from a VIP perspective, you go back to their first principle of why are they doing this? They believe that they can invest to get a competitive differentiation in their performance, whatever that may be, versus going down the merchant silicon route and adapting software or other chips to have a merchant silicon solution.
And so, as long as that differentiation in their end solution is still there, and it could be reliability, it could be performance, it could be functionality, whatever the differentiation is, that will continue. Now, the pace of it is an interesting question because a lot of these companies that built very large teams, trust me, hiring a bunch of engineers, technical engineers to go build ASICs is not an inexpensive venture. And a lot of these companies went, and there were some that were very successful. Some struggled, and they had to regroup, and they started to work with design houses like Marvell, Broadcom, or Samsung to help take their specific IP blocks and have those companies complete the chip that would then go into their data center or do whatever it is they wanted to do with the chip.
And so, how I see it unfolding is that these companies are getting better. Where they may have struggled and went the outsourced route with specific IP, I think you're going to start to see them mature, and then they'll have their teams build some, maybe continue with the design houses, and then they'll make a choice: do I work with a design house? Do I work internally? But I do believe that eventually, that pace, depending on what their roadmaps are, will probably start to accelerate. And what does that mean for test?
You know, when you have new chips, if it's a new process node or they introduce a new architecture like Gate-All-Around or what have you, new chips typically have a yield-learning curve, and that means incremental test seconds that will help drive the test market. Excuse me.
How fast do you think the SoC test market is growing, and how do you see your share shifting? Because if compute is bigger, and your share could be increasing dramatically there, and memory appears to be a little bigger, do you think SoC test could be growing a little faster than what we've seen in the past-
Yeah
... and your share shifts more in your favor? Thank you.
Yeah. So back in 2019, when the compute market was about $600 million, plus or minus, and now it's $1.6 billion, we didn't have a strong share. So what happened is our share diluted. Several years ago, we made a strategic pivot, and from the current environment, we do see an incremental share shift up as we continue to win in the VIPs. I would say on the memory side, you know, funny, when I first came here in 2019, our memory share had been about 4% going back 15 years. We're now kind of neck and neck with our competitor in the way of dollar-value share, and I'd encourage people to look under the covers at what that profitability pool is.
'Cause we specifically don't compete in certain areas where the profitability pool or the complexity of the market is much lower and has been commoditized. Historically, we had not invested significantly in DRAM wafer sort until HBM kicked in, where the complexity was much higher and, you know, we could engineer solutions to drive profitability. But overall, from today, I would say we see a share tailwind really tied to winning more in the VIPs, our position in network switching, and then continued momentum in memory. And for the benefit of the room, the question was: do you see the SoC test market growing, I'm assuming over the midterm? Yes, we do.
We see continued tailwinds going to, you know, the new Gate-All-Around architecture, advancing to three nanometer and then to two nanometer. You know, edge AI is really driving that. And there have been multiple discussions on AI and data center growth, but really edge AI, once the core is built out, should be a tailwind.
Yeah, hi, two quick questions. You had mentioned the forward compatibility on the testing side for HBM. Does that cannibalize your future projection in terms of sales growth? And then also, with the AI component on the robotics, you had mentioned that it eases deployment. Presumably, it increases functionality. Are you envisioning the AI component to have an uplifting effect on demand?
Sure. So, on the cannibalization, great point. You know, historically, if you sell a tester, the customer then needs a new tester for the next interface change. If we were at 100% market share, we may not take that approach. But we are at less than half market share, and we see that differentiation as a way to enhance our market share and deliver value. So we see it as a net-net positive, and we made that choice. Excuse me. From an automation perspective, sorry, this happened last year too. From an automation perspective, we do see the implementation becoming much easier, which will help drive penetration and revenue.
Right, we're almost out of time. Thank you, Sanjay, for coming to Citi Conference.
All right. Thank you.