Aehr Test Systems, Inc. (AEHR)

28th Annual Needham Growth Conference Virtual

Jan 13, 2026

Gayn Erickson
CEO, Aehr Test Systems

So I'll just get through some slides. These will all be posted as well. So my intent is to just quickly go through this, assuming a bunch of new faces, which is actually what I see out there. This will be posted. You can take a look, and then just we'll have a few minutes at the end for questions and answers. Aehr Test Systems has been in the semiconductor test business for almost 50 years now.

We have a line of products that do what's called reliability or burn-in test. And right now we're being caught up, like a lot of semiconductor folks, in all the data center and AI processor activity, where reliability, security, and safety are critically important. We have what are called package-level burn-in systems, where we're testing packaged devices that look like this.

We're also testing devices in wafer form. Doing wafer-level burn-in is quite a trick; we're really the only ones doing it. If you read the news that's out there, the key activity driving our business is both package-level and wafer-level burn-in for these processors. We're in Fremont, California.

We have a very large facility there for doing final assembly and test. People are a little bit surprised by that. We do contract manufacturing all over the world; we just do the final assembly and quality test in the Bay Area. It's actually serving us well, because at least these days with data center and AI, most of the decisions are being made from Seattle down to San Diego on the West Coast anyhow.

We're able to rotate people through to see our facilities and touch and feel these pieces of equipment, which is really helpful. When people come and imagine the kind of capacity they would need, they can actually see these tools on the floor. We've described to people a production capacity well in excess of any of the forecasts we have given out there.

We can ship upwards of 20 wafer-level and 20 package-level burn-in systems per month, which is an order of magnitude or two higher than our current revenue levels. We have enormous capacity; there are some historical reasons why we have that, to be able to serve these very large markets. As for market drivers, a lot of the things driving the whole semiconductor industry are clearly driving us.

We tend to be a little bit at the forefront of that, because the places people use reliability burn-in tools like ours tend to be the latest processes, the latest applications, the highest-performance applications. We have a very large installed base around the world. It's candidly the who's who.

Pretty much every one of the AI processor companies, all the automotive guys, all of the data center people, silicon photonics. We're in the supply chain of one of the biggest, or the biggest, cell phone manufacturers as well. So we've got a very large installed base that is, in fact, growing this year. One thing for people to understand: what does this burn-in cycle actually mean?

If you look at a processor flow, for example, as the wafer moves through test, it has a wafer test before it gets packaged. Then it goes through a burn-in step, a final package test, and an SLT test. So you can see there are lots of test steps along the way.

We offer this solution with our Sonoma, but we also offer a very unique FOX product that moves that burn-in to wafer level, which allows people to screen out failing devices before they're put into these very advanced packages. Because when one fails here, you take out the package and all the other devices with it. It's a fairly simple value proposition. To do that, we use a lot of proprietary IP.

It's a full turnkey solution from us: the equipment and the software, the consumable aspects, all the automation, and also service and support. Doing this provides us with a long-term sustainable business, with the contactor business being about 50% of our revenue in a typical quarter. So as the installed base grows, our ability to continue to grow with these consumables kind of locks in.

A lot of patents. We talk about this all the time. We have them around the world, from China, Japan, Taiwan, Korea, Singapore, across Europe, and of course in the United States, and they lock up the IP related to this wafer-level burn-in. These are our systems; people can get a feel for their size and scale. When I talk about a single wafer-level burn-in system, it can test 20 wafers at a time.

You can walk up to this machine and be processing 20 different flavors of wafers at any given time. This is how you get the cost of test down, to be able to cost-effectively do the multi-hour burn-in, sometimes up to 24 hours, on these wafers. It includes full automation, a key thing we introduced a couple of years ago, and we're seeing that adoption now.

Our lead customer for AI wafer-level burn-in actually started without the full automation. We announced in earnings last week that they're shifting to full automation. What that means is they'll take these aligners, bolt them onto the front of the automated systems, and then it's a fully turnkey test: you walk up to it with 300-millimeter wafer FOUPs, hit a button, and go. They're all processed without any kind of handling.

This is what a test cell would look like. We've actually given people a heads-up: we're engaged in a wafer-level burn-in benchmark right now with one of the leading AI accelerator companies. It's for one of their multiple devices, a particular device for which they have forecasted a production capacity need that we estimate at about 20 systems.

That's what 20 systems looks like on a floor. The ASP of these is about $5-$6 million apiece, so it drives kind of a hockey-stick revenue implication as you enter into this AI application. And this thing right here in the picture, this thing called a WaferPak: it actually takes the wafer, something like this 300-millimeter wafer, out of its shell, of course, and we put it onto what we call a thin chuck, with effectively a probe card on the top.
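Taking the figures above at face value, the implied size of that 20-system deployment is simple arithmetic. This is illustrative only, not guidance:

```python
# Implied deal size for one AI production deployment, using the
# figures from the talk: ~20 FOX systems at a $5M-$6M ASP apiece.
systems = 20
asp_low, asp_high = 5_000_000, 6_000_000

deal_low = systems * asp_low     # $100M
deal_high = systems * asp_high   # $120M
print(f"Implied deployment: ${deal_low / 1e6:.0f}M-${deal_high / 1e6:.0f}M")
```

Against the roughly $20 million of first-half bookings mentioned later in the talk, one such deployment would explain the "hockey stick" framing.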

Those pieces are assembled together with a vacuum process. This WaferPak, this portable carrier, is what's patented, along with a lot of the technologies around it. And it's unique per design. So any time an AI processor, or any wafer design, is changed, you would need to buy a new one from us.

And then you would buy as many as needed to hit your production capacity needs. They don't really wear out, so calling them consumables sometimes implies the wrong thing: they have a lifetime of several hundred thousand insertions, and if each burn-in is 12 hours, that's a really long time. So they tend to last for years, and at the end of a device's life, customers would throw the WaferPaks away and buy new ones for the next design. Okay? Next slide.

So this is the slide I just talked about, that WaferPak contactor. You can insert them either by hand, manually, or with full automation. In reliability testing, one of the things you do early on in engineering is called a high-temp operating life test: you evaluate the failure rate of a semiconductor process and its devices over time. And it creates what we call a bathtub curve. The likelihood that a device fails when you first ship it is here.

Over time, the likelihood of failure drops and then levels out, and at the end, the device wears out. You use tools like ours to develop that curve. Then, if this infant mortality is too high for your application, you apply a production burn-in to screen those early failures out. The key is to weed out those devices without consuming useful life.
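The bathtub curve described here is a standard reliability model. A toy sketch of it, with made-up parameter values for illustration (not Aehr's data), looks like:

```python
import math

def bathtub_hazard(t, infant=0.05, tau=100.0, floor=0.001, wear=1e-9, k=2):
    """Toy bathtub hazard rate at time t (arbitrary units, e.g. hours):
    a decaying infant-mortality term, a constant random-failure floor,
    and a slowly growing wear-out term."""
    return infant * math.exp(-t / tau) + floor + wear * t ** k

# Production burn-in runs parts long enough that survivors start life
# past the infant-mortality hump, where the hazard is far lower.
print(bathtub_hazard(0.0))    # high: brand-new part
print(bathtub_hazard(500.0))  # low: past the early-failure region
```

The "without consuming useful life" point corresponds to choosing a burn-in duration that clears the decaying first term without running far enough right to matter for the wear-out term.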

You don't want the survivors to wear out sooner, and you don't want to increase their likelihood of failure; that is one of the tricks we do within our production burn-in systems. This is a Sonoma system. This was big news for us last year: we acquired a company called Incal. They were doing high-temp operating life testing with a bunch of the AI guys.

We approached them after getting into a conversation with one of the major hyperscalers, who said they would like to use this tool in production, but Incal had no capacity either to support them in Taiwan or to actually build the systems. They could build maybe two systems a month; Aehr said we could do up to 20 right now. So they chose us. Part of the deal was, "Aehr, if you buy them, we're very interested in moving to production."

We've actually gotten those orders. They've now placed multiple orders, and they've just recently given us a very large forecast that's going to drive our business in the second half. Our fiscal year is offset by about six months, so our second half ends in May. And we just announced that in that second half, we're going to go from about $20 million in first-half bookings to between $60 and $80 million. So that's our big news out there.

That's what's driving a lot of the discussion. Where's it coming from? A big chunk is actually coming from the Sonoma, but also the wafer-level burn-in systems for AI, silicon photonics, gallium nitride power semiconductors, silicon carbide, and also hard disk drives. We've then taken the Sonoma system and added full automation to it.

This is something we'll start to see revenue shipments on this year. It's a really big deal because it allows you to cycle a wafer or a device with roughly two minutes or less of overhead time. So if your burn-in time is four hours, we can cycle it in four hours and two minutes. By contrast, burn-in systems have historically used burn-in boards that are put into chambers.

The chambers heat up, the test starts, and then the boards are manually pulled out after they drop in temperature. So you can have multiple hours of overhead between burn-in cycles. This is a key contribution of our product lines. Okay?
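The throughput effect of that overhead difference is easy to sketch. The two-minute figure is from the talk; the two-hour chamber overhead is my assumption for illustration:

```python
# Burn-in cycles per week for an automated flow (~2 min overhead per
# cycle) vs. a traditional chamber flow (assume ~2 h of heat-up,
# cool-down, and manual handling per cycle), for a 4-hour burn-in.
HOURS_PER_WEEK = 168
burn_in_h = 4.0

auto_cycle_h = burn_in_h + 2 / 60
chamber_cycle_h = burn_in_h + 2.0

print(HOURS_PER_WEEK / auto_cycle_h)     # ~41.7 cycles/week
print(HOURS_PER_WEEK / chamber_cycle_h)  # 28.0 cycles/week
```

Under these assumptions the automated flow gets roughly 50% more burn-in cycles out of the same hardware each week.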

So we've not only got our first wafer-level burn-in customer for AI, but we've also announced a paid-for benchmark by one of the leaders to validate wafer-level production burn-in of their processors before they're actually inserted into these complex packages. This deal, as I said, could drive 20-30 systems in production capacity if we're able to win that business.

We've also announced in the earnings that two more of the leading suppliers of AI processors approached us and asked if we could evaluate doing wafer-level burn-in of their processors too. Lastly, we made an announcement with ASE, the largest OSAT in the world. ASE has been getting inbound requests to use wafer-level burn-in not only for production but also for high-temperature operating life qualifications. So we partnered with them.

They're physically about a mile away from us; we have tools on site there, and they can provide engineering services and capability to basically seed the market for high-temp operating life at wafer level, which would lead toward production volumes. This is actually a pretty big deal for us. I showed one up here.

This one happens to be an NVIDIA part, but this one is AMD. You can see in this case it actually has eight core processor chiplets in there in addition to high-bandwidth memory. The idea of burning in that processor at wafer level, rather than waiting until it's in this assembled form and failing at a 1% rate when it's all together, which is the way they're doing it today, is the intuitive reason people would want to consider wafer-level burn-in.

Another thing we talked about is that Optical I/O is coming. We have six customers in Optical I/O today. Our lead customer, the largest in the space, told us just recently that they're planning to add capacity. It was originally planned for the first half of this year; it's now going to be in the second half. That was something we also shared last year, and then we completed the initial phase of our benchmark.

One of the key market opportunities for us is flash memory. So we've been engaged with one of the leaders for the last 18 months or so where we actually brought up their wafers on a FOX test cell with our proprietary WaferPaks to actually show them the capability for doing whole wafer testing of 300-millimeter flash memories.

This was critical for them, as new technologies like hybrid bonded flash and high-bandwidth memory (interestingly, similar three-letter acronyms) are driving a technological change that doubles the parallelism and quadruples the power per wafer. One of the key things with us is that we can handle more power per wafer and more than one wafer at a time, so they approached us about that roadmap, and we successfully tested their parts.

They had the wafers in hand, and stay tuned for more information related to that. People always ask how big the markets are and what that looks like. We'll be putting out more detail; we can give you some specific examples, but here are the relative sizes.

If you look at the device markets and their dollar size, a pretty good rule of thumb is that somewhere between 2% and 5% of that is spent on overall test, between functional test and burn-in. So you can see that markets like silicon photonics, silicon carbide, and gallium nitride, which our revenue has historically come from, are dwarfed by these new markets in memories and AI processors. Two years ago, our revenue was over 90% silicon carbide.

This year, I think it's less than 10%, maybe 5%. We've completely shifted, on approximately the same revenue, entirely to new markets. If you're familiar with the silicon carbide excitement, it was all about electric vehicles and inverters, and for the same reasons we're being sought out now, people wanted to move to wafer-level burn-in, because the inverters had so many devices in one package.

They were driving to wafer-level burn-in. And customers like onsemi, our lead customer, as everybody knows, did a very good job of garnering market share by shifting all of their test to wafer-level burn-in on our platform. It sounds like a tagline, but it's really true: one of the things that differentiates us as a company, compared to some of the "ovens" you hear about in Taiwan, is that we put real testers into our machines.

They functionally test and validate, and every single time can guarantee a valid production burn-in. Why that matters is if you have a 1% failure rate of a silicon carbide device, for example, and you happen to think you burnt it in, but you didn't, it actually has a 1% chance that it will fail in your car.

My Tesla has 96 of those silicon carbide devices in it. If you go through the math, you're almost assured of seeing a failure. So we currently have OEMs, the EV manufacturers, and large-scale suppliers beyond EVs that specifically dictate wafer-level burn-in process conditions on our platform. So we were able to build preference for doing it for quality, reliability, and safety.
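The math he refers to is a quick independence calculation, taking the 1% unscreened failure rate quoted above at face value:

```python
# Probability that at least one of n devices fails, given a per-device
# failure probability p and assuming independent failures.
def p_any_failure(n, p):
    return 1 - (1 - p) ** n

# 96 silicon carbide devices at a 1% unscreened failure rate:
print(round(p_any_failure(96, 0.01), 3))  # 0.619 -> ~62% chance
```

So without burn-in, well over half of such vehicles would be expected to contain at least one device destined to fail, which is why OEMs dictate the screening step.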

And we think we can do the same thing in other markets as well. And with that, I'll open it up for questions.

Speaker 2

What percent did you say failed in the first year without your intervention?

Gayn Erickson
CEO, Aehr Test Systems

So on silicon carbide, it was north of 1%. Same with GaN power semiconductors going into data centers. What we've admitted to is less than 1% on AI; everybody's very sensitive to that number. But if you use 0.5% or something along those lines, we're catching that during the burn-in process, and all of those devices would otherwise fail during the first year. So you hear about Meta, and all these stories about what happens when one of these things goes down right in the middle of a training run: you have to start over. It's enormously painful for them. Yes.

Speaker 2

So what you're doing is obviously providing tremendous value to your customers. Can you talk to the scale part of the ability for you to push on pricing?

Gayn Erickson
CEO, Aehr Test Systems

Yeah. How come we're not making more on it? That's a really good point. So on the packaged-part side of things, we actually make very good margins on the systems. On the consumables, which are dominated by a socket like this, we make less.

We actually talk about material margins, meaning our incremental cost. Our material margins on the packaged-part consumables are only about 30%. On the systems, they're more like 60% plus. On our wafer-level systems, they're north of that, and the WaferPaks have the highest margins. So we are able to get the margins out of it; you just need the revenue for it to show up on the bottom line, and as you saw in our latest report, we had a pretty wimpy revenue quarter last quarter. But at the same time, we're able to project, and capture the customer's confidence, that we can supply a large number of systems. That's what people are always a little bit surprised by.

In fact, a big part of the sales process for us is bringing them to the factory; they're just not expecting to see the level of infrastructure and capacity we have to ship that many systems. Other questions?

Speaker 2

Yeah? Sorry, can I just expand? Obviously, there's a lot of concern about the expanding role of CapEx in AI. You're interacting with NVIDIA and all of the hyperscalers. What are the KPIs that you as a business are looking at from a macro perspective on AI spend, and what are the implications of that?

Gayn Erickson
CEO, Aehr Test Systems

So let me repeat that, because there's a recording: the question is about the macro spend going on. Are we watching it? Are there concerns, et cetera? So our opinion, my opinion, is that there are sort of two sides to this.

The whole question is, is AI in a bubble? What does that look like? When we look at it, the underpinning value proposition of AI inference and other applications is continuing to grow. If we look at forecasts across multiple market segments, the volumes are getting much, much larger. Now, as for segmentation, there are AI accelerators in, say, the data center. Training is smaller than inference. Security and reliability are very important there. But let's say they're in the range of five million devices a year; maybe it goes a little higher than that.

The next one would be automotive. Automotive is much higher. If you look at the AI processors going in to take ADAS to the next step, Tesla would be the obvious example, but look at what NVIDIA just introduced.

That's going to be way higher volumes, an order of magnitude. If you buy into robots, that takes it to the next step again. Robots and automotive are going to be 100% burned in; the quality and reliability implications of those things having a hiccup are a big problem. Beyond that, and we don't really focus on this as much, you would go to AI-enabled PCs and mobile phones.

So we focus on the obvious thing everybody's talking about right now, data center training and inference, but a lot of our energy is actually in automotive and these other markets, call it the industrial edge, without necessarily going after PCs. I think those are more commoditized. If your phone dies, is it the end of the world?

But if my Tesla, doing a nice gradual turn at 70 miles an hour down 280, decides it can't remember what it was doing, that's a problem. So there's an element of volume times importance, the reliability aspect of it. Do I think training is going to be the most exposed? It has a huge opportunity to drive our revenue.

But I think the bigger opportunities are going to be in those other markets, which I don't think people talk about as much as the training bubble, if you will. And I probably should never use the term bubble in public, but I get it. And by the way, some of that's margin. Do I think everybody's going to make 80% margin selling AI processors? Absolutely not. Do I think there are going to be a lot of processors made?

I do. And candidly, if NVIDIA is our customer, they're not giving us money because of their margin; they would grind us on cost. And when you go to a Google or Tesla or Meta, they're looking at their cost. Our value proposition is overwhelmingly positive on manufacturing cost, without even talking about very high margins. Okay? Yes?

Speaker 2

Do you ever quantify the benefit on cost? And how much more does it cost to do burn-in test than just regular testing, for PCs or whatever?

Gayn Erickson
CEO, Aehr Test Systems

Yeah. Okay. So we do quantify it, because we do the math: okay, you have a 0.5% failure rate. We know that piece of silicon, because it's a TSMC wafer, is a $40,000 wafer with 40 known good die on it. You can go through the math. And then it shows up on a CoWoS substrate.

The substrate costs more than the die, plus eight stacks of HBM that cost more than that, and that 1% failure looks like this. My cost to test is $10, and you're like, okay, this is easy. Now, you'll point out, well, then you should raise your prices. Well, let's get into the market first with what we have. We don't have to have dominant share.

I mean, we're the only ones doing it, but we could grow substantially with great margins, and that's our strategy right now. In terms of cost, the combined cost of all of those later test insertions is more than the burn-in cost. So if you can avoid testing things later, it's interesting: if I simply didn't force you to test it three times afterwards, because I've already failed out the bad die at wafer level, that alone might save the cost of the burn-in. Other questions?
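The value-at-risk arithmetic walked through above looks roughly like this. The wafer cost, die count, failure rate, and $10 test cost are the talk's rough figures; the $25,000 assembled-package value is my placeholder:

```python
# Expected loss avoided per die by screening at wafer level, vs. the
# cost of the burn-in insertion itself.
wafer_cost = 40_000
good_die_per_wafer = 40
die_cost = wafer_cost / good_die_per_wafer   # $1,000 of silicon per die

fail_rate = 0.005       # "0.5% or something along those lines"
burn_in_cost = 10       # "my cost to test is $10"
package_value = 25_000  # hypothetical: die + CoWoS substrate + 8 HBM stacks

# A bad die caught only after assembly scraps the whole package:
expected_loss_per_die = fail_rate * package_value
print(die_cost, expected_loss_per_die, burn_in_cost)  # 1000.0 125.0 10
```

Even with this conservative package value, the expected scrap cost per die is an order of magnitude above the $10 test cost, which is the "this is easy" value proposition.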

Speaker 2

Yes. With the two new AI accelerator wafer-level customers, do you pretty much have everybody in the pipeline?

Gayn Erickson
CEO, Aehr Test Systems

Not everybody. So the question is: you announced two more, is that everybody? It's not. We have one in production now, one where we're actually on wafer doing qualification, and the two more we announced. And we have one other we've talked about: our large packaged-part customer, the one that's ramping and has a big ramp. This is the hyperscaler ASIC that's ramping this summer. Do the math. Okay?

We're about to do the qual, the high-temp operating life qualification, in a couple of months on their new device, which starts sampling in the summer. They're going to do production on that too, and it's going to be higher volumes. So this first device that's on our systems is very high volume, the next one is higher, and the third is higher again. On that one, they said they want to consider wafer-level burn-in. I actually talked with the program team.

This is recorded, so I'll be careful. I asked, what would you do with the package burn-in systems? They said, doesn't matter: wafer level is so much cheaper that it displaces them. Now, that would be an interesting business model: we sell them lots of packaged-part burn-in systems, and they throw them away when wafer level comes along. I won't count on that. But that type of thing.

So it does make sense, and when you do the math, and the analysts who have gone through this start looking, it makes sense. More questions? Okay. So, what we did: we just announced earnings. To be fair, we didn't have guidance out before; we struggled as we introduced guidance. We pulled guidance on April 4th last year. I think we were the third public company to do so right after April 1st. We anticipated there would be problems.

We pulled guidance, and it was a brilliant move, because a bunch of our vendors wouldn't even ship into the U.S. So we actually did struggle; it took us a couple of quarters to work through that. Customers settled out, so we reinstated guidance. Of course, there's probably more uncertainty around tariffs; we'll find out tomorrow. But they won't hurt us.

No one's going to stop shipping to us if that thing goes away. For now, we've stabilized, and we have a better feel for it, so we reinstated guidance. We did $20-ish million in revenue in the first half; we're going to do $25 to $30 million in the second half. But we're going to go from $20 million in bookings to $60-$80 million in the second half, so it's sort of a mixed message.

The revenues weren't that impressive, but the bookings look like they're teeing up for next year. And with that, we'd be happy to take questions. If you have more later, Todd or Chris can collect business cards and we can set something up. I do see a lot of new faces here; in the meetings we're having upstairs, it's mostly existing investors, so we can spend more time with you if you'd like. Anything else? Yes?

Speaker 2

If I understood you correctly, you're not pursuing CPUs or mobile because they're large, commoditized markets. Is that correct?

Gayn Erickson
CEO, Aehr Test Systems

So CPUs in AI are definitely targets. But if you mean a standard CPU out of AMD or Intel, not so much. And on the mobile side, not yet. It's not as obvious: you're not putting multiple chips together into a cell phone, and the ASPs are lower. And if your cell phone dies, maybe that's a good thing if you're Apple, I don't know. Right? So realistically, I don't think that would be a target at this point. All right. Thank you.
