My name is CJ Muse, semiconductor equipment analyst with Cantor Fitzgerald. Very pleased to have Teradyne, and with us is Greg Smith, President and CEO. Welcome.
Thank you.
Always good to see you in New York. Near term, I wanted to start with how we ended the last quarter: very strong visibility into the first half, some uncertainty around the second half. I'm curious how that visibility, or the dynamics there, have evolved over the last four, five, six weeks.
Yeah. One of the things we've been trying to describe is that we feel like we're in a new era in the way our market works. For a very long time we were working against a calendar sequence: Q2 and Q3 were always strong, driven primarily by mobile and consumer. The new pattern we're in doesn't really obey year boundaries. What we're in right now is a very strong period whose leading edge was in the third quarter of last year, and we have visibility that it's going to continue at a really good pace through the second quarter of this year.
We're cautious about the year after that, not because we think there's going to be any kind of pullback or a market turn, but because many of our largest customers are heavily investing during this four-quarter boom, and we think there's going to be a brief period of digestion before they get into next-generation programs and ramps. Part of it is that we can't see all of the short-cycle business that will occur in the second half of the year, and there certainly could be strengthening in some parts of the market. The place I'd be looking most for a second-half tailwind would be the memory market.
A lot of the bullets that were in the gun are getting fired in the first half of the year, so we don't have as many opportunities to overperform in the second half.
Makes sense. Maybe moving to AI compute, I want to start with the VIPs. You've talked historically around 50% market share for Teradyne, and a while ago you talked about an $800 million plus framework for revenues by 2028. I think since then things have obviously improved. I'm not asking for an update on that revenue number, but how should we be thinking about the relative growth contributions, and is that share still attainable?
Yeah. I think the "plus" is doing a fair amount of work in that sentence; we definitely think there's upside to the $800 million by 2028. In 2025, we think we held that roughly 50% share in this space. There are only a very small number of hyperscalers driving a lot of volume. Depending on the timing of program ramps between those two hyperscalers, we'll see the share swing. We've got majority share at one, our competitor has majority share at the other. As those programs take turns ramping, we'll see share move around.
I think over the midterm we're going to continue to see additional hyperscalers ramp, and we think we have a reasonable shot at capturing share in those accounts as they ramp. We feel pretty good that the TAM is probably trending above the $800 million mark, and we think that over time we're likely to maintain that 50% share. It's going to be pretty erratic, though, because the business is very lumpy.
I think historically the business has been dominated by one large customer.
Mm-hmm.
I think in our conversations in the past you've talked about, you know, good breadth of XPU attach-
Mm-hmm.
As well in there. Curious, is there potential to secure other hyperscalers that are currently using your competitor's tester, or are there other potential large players entering this market?
Yeah. There are a couple of things going on. One, we think hyperscalers are going to continue to show a limited amount of loyalty to the design partners they're working with. Just because a hyperscaler is working with a particular ASIC company in this generation doesn't mean they'll continue to work in that pattern. I wouldn't call those jump balls, but they are a much better opportunity for us to gain share than if it's just the next generation of exactly the same supply chain. The next thing that's going on is that as these programs ramp to scale, the amount of tester capacity they need tends to go up by a lot, and that makes supply chain assurance more important to them.
The idea that if they're struggling to get capacity from one supplier they can get capacity from a different supplier gives them multiple shots on goal. That's both a positive and a potential negative for Teradyne: when a customer is sole-sourced on our competitor, we think dual sourcing is a great idea; where we're the sole source, we don't think it's such a great idea. Overall, because of our share position in compute, we think this trend toward dual sourcing is a net positive. We also feel pretty confident about our customer satisfaction and the value proposition of our testers, so we're not really worried about significant share erosion due to dual sourcing.
You know, are you seeing dual sourcing on the ASIC side or is that something you see in the future?
Oh, no, it's going on right now. I mean, our strongest hyperscaler account is dual sourced. They leave a fair amount of the decision to the OSATs and the foundries, and we do really well in that environment.
Gotcha. Maybe moving to the GPU side of things. You've talked at a very high level around an initial win this year.
Mm-hmm.
Then the hope for incremental share, you know, next year and beyond. I guess, you know, what can you share with us today in terms of kind of customer feedback and-
Mm-hmm.
You know, what is the kind of roadmap for success?
We're in a no-news-is-good-news kind of mode. Nothing has gone particularly wrong in the process of achieving qualification. We still expect to achieve qualification in the first half of this year and be in production in the second half. We believe that will start a long process of share gain in the account. In 2026 we're talking about low single digits of share of the GPU at this merchant account. The other thing that happens once qualification is achieved is you get follow-on projects. You begin to start on new parts that you're going to bring up on the tester, and we'll be in what I refer to as a fast-follower mode.
The engineering to introduce a new part to production would happen on the incumbent platform, but we would be able to work on a fast-follower strategy from a much earlier point in that product's life cycle. That starts as soon as we have qualification, and those parts, when we're finished with our solution, would still be in a much earlier part of their volume ramp. That's the story: in 2027 we'll have more parts earlier in their life cycle, which will drive our share higher.
It's probably going to take us three or more years to get to a mature, competitive dual-sourcing environment where, based on what we've seen in other accounts, two competitors duke it out and the share is managed in a 30%-70% range. The company that gets the 70% is the one that delivers the right solutions for leading-edge technologies, the best reliability, the best time to market, the best customer satisfaction. When I say 30%-70%, I want to make sure people understand that we don't see 70% as the lowest the incumbent goes and 30% as the highest we go as a challenger. The share would get managed in that range on the basis of customer satisfaction.
Makes sense. If I look back at mobility, where you had typical dual sourcing, it was not for the same part but for different parts. Are you expecting the same here, or, to make up an example, for a logic dielet, would there be an opportunity for two different testers on that same part?
For sure. One of the big differences between mobile and the AI space is the capital intensity against one specific SKU. Also, since we're building toward competitive dual sourcing with this fast-follower approach, we won't have any solutions on our tester that don't already exist on the incumbent tester. By definition, those are going to be one part hosted on two different platforms. Do we expect that there will be some parts where we have much higher share than others? Yeah, we do. But it's not like mobile, where it splits cleanly into RF, digital, and power management. It's going to be much more intertwined than that.
It may end up being, you know, like this dielet goes to competitor A, this dielet goes to competitor B. The final test is a competitive decision between the two platforms. That kind of thing could certainly happen.
Gotcha. You know, if I think about the future roadmap from front-end manufacturing, you know, high-performance compute is like catching up to mobility, and it certainly-
Mm-hmm.
Feels like with backside power delivery they're going to leapfrog mobility. Curious, as we see that transition, what impact does that have on test intensity and test insertions? How are you thinking about the performance-compute impact on the growth rate for semi test?
Yeah. It's interesting, because for a decade mobile was the place where the latest and greatest process technology was used in SOC: the first into 5 nm, the first into 3 nm, and still the first into 2 nm. With the hunger for compute, anything you can do to increase the tokens per watt is a very strong proposition. We think that despite the fact that these advanced nodes typically have lower yields, and the die size in compute is huge, these customers are willing to accept that trade in order to leverage that technology in the data center.
That's great news for tester companies, because if you need to test four parts to get one good part, you need a lot of wafer sort capacity to pull that off. So that's a positive for us. The other thing is, in order to achieve the levels of compute per watt they're looking for, a lot of it comes down to die-to-die connectivity. It's a scale-up: the most efficient scale-up you can do is co-mounting as many compute die into the same package as you can before you even get to network-based scale-up. That means these packages are going to go from having two accelerator dies to four accelerator dies.
That puts an amazing amount of pressure on die quality. Not only do you have lower yield, but you're testing out to a higher level of coverage, which also expands the test time. This trend toward using the latest node for compute is a really good thing for the overall TAM.
Perfect. You mentioned scale up. A good segue to AI networking. Historically, kind of the bread and butter for Teradyne. Are you worried about this market dual sourcing or, you know, how are you thinking competitively?
I think it's pretty clear that dual sourcing is going to be a thing for most important high-volume sockets. We have to look at the networking business as something that needs to be defended with product differentiation and customer satisfaction. I don't think we have a monopoly there, as if we were the sole beneficiary of dual sourcing. We have a very good product for this space. The other thing that goes on in networking is that if you look at the ratio of big networking die to accelerator die in a rack, it's like 10 to 1. One of the key things that enables dual sourcing is having high volume, right?
The volume mechanics are different in the networking space than in the accelerator space. That's one of the levers around when a company says, "This part needs so much capacity that I need the supply chain assurance of dual sourcing." With networking, there's roughly an order of magnitude difference in the volume you need to get there.
I think your networking growth rate was muted a bit by excess mobile capacity and reallocation, and I believe that's now behind us. I'm curious, with that no longer a headwind, how are you thinking about the relative growth rate here? As part of that, as networking becomes more complex, how are you seeing the attach rate of networking dollars relative to compute dollars?
We came into 2024 with a belief that networking was roughly 15% of the space. What we saw in 2024 and 2025 was a really rapid expansion of the TAM for the accelerators, for the GPU itself. Later in 2025, we started seeing networking catch up again. Moving forward, I think we're at a new normal in terms of the size of all of these markets. Our assumption is that they're going to grow relatively proportionally in terms of the core technologies today. The X factor is how rapidly CPO and optical technology will expand the networking TAM; that's on top of that 15-ish% ratio.
Makes sense. You acquired Quantifi Photonics, building out your portfolio for silicon photonics. I guess, how do you see that kind of ramping as they focus on scale up?
Yeah. I feel like I'm Mr. Cold Water at this conference when it comes to silicon photonics, even though it's something I'm incredibly excited about. If you look at silicon photonics right now, there's really no large-scale deployment of CPO in any way in 2025 or at the beginning of 2026. By the end of 2026, I think we'll see the beginning of commercial-scale deployment of CPO for scale-out applications. I think it could grow geometrically from there just in scale-out, from 2026 to 2027; scale-out alone could be an order of magnitude difference in the number of CPO ports delivered. Then scale-up would be another multiple beyond that.
That's just one factor in the size of the test market. If you look at how CPO devices are tested today, it's very early days. All of the technologies around optical alignment, and the tests our customers are performing, are very characterization-oriented; the instrumentation required is laboratory-focused rather than production-focused. When we've seen new technologies like this introduced into the ecosystem, what follows is a really rapid improvement in test efficiency. Between more efficient test equipment, reduced test lists, improved alignment efficiency, and eventually testing more than one device at a time, the test intensity for these devices could go down by a factor of 10. Both Teradyne and Advantest are leaning into developing production-oriented optical test equipment.
That's why we did Quantifi. That's really about what testers are going to look like in 2027. For both Teradyne and Advantest, those testers' ASPs are going to be lower. We're going to be competing with each other, but they will be all or mostly native content. Right now we're buying a lot of instruments from other people, putting them in a tester, and selling them again. That's a lower-margin proposition than natively developed testers.
To directly answer your question, what I think is going to happen is that this initial deployment of scale-out is going to drive a rapid increase in the size of the TAM at the end of 2026 and into 2027. That TAM is probably going to reach somewhere in the mid hundreds of millions of dollars. As the volume in the end market scales, we're going to simultaneously see this reduction in test intensity and in the capital required. So even though we'll ship 10 times as many CPO ports in 2028 as in 2027, the TAM is only going to be incrementally larger, not multiplicatively larger.
Makes sense. Maybe moving to memory. Teradyne has done a great job securing wins at two of the leading DRAM players for HBM, and obviously the third does internal testing. Maybe update us on where you've had success. Perhaps more importantly, as we start thinking about HBM4 and moving to higher layer counts, what does that mean for test intensity, and perhaps incrementally more test insertions?
First, let's talk about the insertions involved in building an HBM. An HBM has multiple layers of DRAM die and then a base die. Each of those DRAM layers needs to be tested at the wafer level, and the base die needs to be tested at the wafer level. Then you stack the DRAM die on the base wafer and test it again for performance as a stacked assembly. Then you singulate it, and right now some of that volume is tested again in singulated form before it ships off and gets put onto a substrate to be used with an accelerator. So there are three major test insertions.
Our biggest impact is in the stacked-wafer performance test and the singulated-stack test. In wafer sort, we've always had a strong presence at some manufacturers and not at others; there isn't a lot of share movement there, whereas we have been gaining share in the post-stack tests. The question is, what happens to that strength as you go from HBM3 to HBM3E to HBM4, and as you go from 8-high to 12-high to 16-high? As the layer count increases, that primarily drives test intensity at that first step: you need to build more wafers to build one HBM stack.
That wafer-sort piece is fairly stable by customer share, without a lot of movement; the technology change is what drives the later insertions. We have a tester with higher data rates and higher signal fidelity, so as we get to HBM3E we believe we'll be able to continue to increase our share in that space and differentiate with our customers by letting them get to higher yields.
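The insertion count described in that flow can be tallied directly: one wafer-level test per DRAM layer, one for the base die, then a stacked-assembly test and a singulated-stack test. The sketch below just encodes that description to show how layer count scales the wafer-level work; it is an illustration of the flow as described, not a manufacturer's actual test plan:

```python
# Hypothetical tally of test insertions per HBM stack, per the flow above.

def wafer_level_tests(dram_layers: int) -> int:
    """One wafer-level test per DRAM layer, plus one for the base die."""
    return dram_layers + 1

def total_test_insertions(dram_layers: int) -> int:
    """Wafer-level tests plus the stacked-assembly and singulated-stack tests."""
    return wafer_level_tests(dram_layers) + 2

for layers in (8, 12, 16):
    print(f"{layers}-high: {wafer_level_tests(layers)} wafer-level tests, "
          f"{total_test_insertions(layers)} insertions total")
```

Going from 8-high to 16-high nearly doubles the wafer-level test count per stack, which is the "more wafers per HBM stack" driver of test intensity at that first step.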
You have the one company that does in-house test. I think they've publicly stated that they're going to build their logic die at TSMC. Does that represent an opportunity for a merchant test player?
Yeah. Logic dies already get tested: sometimes on our platform, sometimes on an Advantest platform, and sometimes on memory testers. As we go to HBM4E, the complexity of that base die is increasing, and it's an incrementally larger part of the whole value chain, but it's really dwarfed by the memory ATE investment required to support the whole chain.
Got you. Makes sense. Maybe conventional DRAM, obviously a place of strength today amid the shortages we're seeing. Curious if you've seen any movement there?
2025 was a remarkable year for DRAM manufacturers because they were able to dramatically increase their revenue while shipping basically the same number of DRAMs. Not a lot of unit volume change in that space, but much higher ASPs. From a test perspective, it was normal capacity increases, not proportional to the kind of revenue increases you're seeing. The thing we're looking at is the big foundry investments going into memory for capacity that comes online in 2027. The memory market we're in right now is probably going to be stronger in 2026 than it was in 2025, but I think the breakout year for memory is really 2027, when there are a lot more wafers that need to get tested.
That's the way I'm looking at it: we've got a little while to wait before you see a huge bump. The one short-term thing I think is really interesting is that there's more reliance on TCAM, content-addressable memory, in the next generation of accelerators, and that's often being built out of LPDDR5. Because we have a really good position in DRAM final test for that kind of technology, that's a net positive for us. The more TCAM going into the market, the better it is for us.
You said foundry investment in memory. Is that your way of saying kind of new greenfield investment?
Yeah.
By the memory guys?
Yeah, new fabs.
Okay, perfect. Maybe on NAND, are you seeing any sign of life from the enterprise SSD side or elsewhere? Do you have any kind of view on KV cache and what that might do?
We believe we should be seeing those signs, and we're not. Right now the memory market is wafer-capacity constrained, and the allocation of wafers toward flash has not been increasing. The equation doesn't balance, though, because the amount of flash required per rack in next-generation data centers is multiples of what's required in the current generation. It's possible that flash becomes a significant enough bottleneck that you'll see some changes in behavior. Again, I think 2027 is when there will be enough incremental wafers to really make the flash market grow.
We have been seeing more strength in the HDD market, and I think some of that has to do with the inability to source enough flash to be able to do what they really want to do with flash.
Makes sense. It's surprising that we have two minutes left and are only now hitting your more cyclical businesses; I think it speaks to how big AI is today. In the last few minutes we have, would love to hear your latest thoughts around mobile, auto, and industrial.
One thing that's becoming clear to me, and I think it's important, is that the next generation of AI accelerators is going to need next-generation equipment from both us and our competitor. That's because you need more power, more pattern depth, and things like PCIe 6.0 capability to test those generations of parts. What that means is there are a lot of really good testers, testers introduced into the ecosystem even this year, that aren't going to be useful for leading-edge AI accelerators anymore. I think mobile's going to get the hand-me-downs. So I'm expecting the mobile TAM to be somewhat suppressed because of this really high buy rate for next-generation equipment in compute.
We've been pretty cautious in terms of a recovery in mobile, and that's another reason we're staying cautious about how much it could pop: there's going to be capacity coming available. When you look at the other segments, automotive and industrial, I think we're at the dawn of a cyclical recovery. Inventory levels are down, and we're hearing more optimism in the way those companies talk about their results. I think we'll see a modest recovery there.
Perfect. Well, I think we've run out of time. Thank you very much.
Awesome.
Appreciate it.
No, it's always great to talk to you. Thank you.