Good morning, everybody. Welcome to the second day of Needham's 28th Annual Growth Conference. My name is Quinn Bolton. I'm the Semiconductor Analyst for Needham. It's my pleasure to host this fireside chat with Credo Technology, founded in 2008 and headquartered in San Jose, California. Credo's mission is to redefine high-speed connectivity by delivering breakthrough system-level solutions that enable the ever-expanding demands of AI, cloud computing, and hyperscale networks. The company's innovations address system bandwidth bottlenecks while simultaneously improving on power, security, and reliability. Last week, we named Credo our top pick for 2026. Joining me from the company are Bill Brennan, President, CEO, and Chairman, and Dan Fleming, CFO. We also have Dan O'Neil, VP of Corporate Development, in the audience. Bill, Dan, thank you for joining us.
Thanks for having us. Well said, by the way.
I wanted to start with sort of a big-picture question. The company's seen tremendous growth in active electrical cables, but Credo is more than just an AEC company. As you look at the data center interconnect market, can you talk about the importance of reliability and system-level solutions and the types of products you're trying to develop for customers?
Sure. So let me start with the first part of the question, about reliability. Going on two years ago, I guess, the light kind of went on for our team from two customer conversations we were having. When we talk about AI clusters, we talk about tens of thousands, hundreds of thousands of GPUs all interconnected to form one big supercomputer. And when we look at network reliability, our customers were describing an effect of link flaps, an intermittent flapping of a given link among a huge number of connections, that could ultimately take down the cluster because they're all interconnected. And that became front and center from a conversation we had with xAI. They were building all of their scale-out connections with laser-based optics, and based on their form factor, they had 18 racks all tied together.
And they came to us and said, "Look, we're trying to build a ZeroFlap cluster. And the way we want to do that is move into our facility that's liquid cooled and compress the 18 racks down to six." And they felt like if they could make those connections with AECs, replacing all of the laser-based optical connections, they could achieve that ZeroFlap goal because of the fundamental underlying reliability of copper being maybe 1,000x better than laser-based optics. The second conversation we had was with Oracle, whose architecture was one where they couldn't go to short connections. Some of their connections were more than 50 m. So the approach with Oracle was: how do we build a ZeroFlap cluster, given the fact that we're going to have to use laser-based optical solutions? It was a different approach.
But they referenced the work we did with Microsoft way back when we first started talking about AECs, having a product that was able to sense and act in a cable form, which was really unique and groundbreaking. If we followed that same path and developed the ability to have real-time telemetry on every link in the cluster, would we be able to go up the stack and integrate with the network management software, so that the optical modules themselves could be set at a threshold of link health? When link health drops below a certain level, the idea is to identify the potentially failing link and mitigate. And so that's what we did. So, two different approaches to addressing network-level reliability.
The idea is shortening the time to stability for a cluster of hundreds of thousands of GPUs and keeping productivity at its highest so that the cluster's not crashing continuously. So thematically, reliability has been kind of our north star driving all of the roadmap development, including the new products we recently announced, one being the microLED ALC solution. That kind of gives you the background. So we view our role as a connectivity system solution partner to the industry as being much, much greater than developing chips and delivering those to connectivity partners that ultimately do that work.
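As a rough sketch of the threshold-based mitigation loop Bill describes, the toy Python snippet below polls per-link health scores and flags links for mitigation before they flap. Every name, score, and threshold here is an illustrative assumption, not Credo's actual telemetry interface.

```python
# Hypothetical sketch of threshold-based link-health mitigation.
# All names, scores, and thresholds are illustrative assumptions.

HEALTH_THRESHOLD = 0.80  # assumed normalized link-health floor


def degraded_links(telemetry: dict) -> list:
    """Return links whose health score has dropped below the threshold."""
    return [link for link, health in telemetry.items()
            if health < HEALTH_THRESHOLD]


def mitigate(link: str) -> None:
    # A real fabric manager might drain traffic, reroute, or schedule
    # a module swap here, before the link actually flaps.
    print(f"rerouting around degraded link {link}")


if __name__ == "__main__":
    sample = {"rack3/port12": 0.97, "rack7/port4": 0.62}  # made-up readings
    for link in degraded_links(sample):
        mitigate(link)
```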
Yeah, I would say that's a good segue into my next question, which is: what do you believe the company's core competencies are that enable it to deliver that reliability to customers? Is it SerDes? Is it chip? Is it software?
So I think it's a combination of things. I mean, we've been at this a long time. I've been with the company more than a dozen years now. And the core competence that we start with is always SerDes. We are strong believers that SerDes is the starting point. It really is the secret sauce. If you're doing things that are unique, application-specific, and optimized at a SerDes level, that gives you an advantage. The next is chip design. Of course, there are many things I could point to that make us differentiated and unique in the way that we design chips. But really, the breakthrough for the company was owning the product at a system level.
Having your SerDes engineers sit right next to your chip designers, right next to your system-level engineers, hardware engineers, and a firmware and software team that touches every level, that really is the starting point of the core competency. Ultimately, owning the product at a system level is what drives everything.
We'll get into the new products in more detail. But maybe just at a high level, talk about the three new products that you've introduced to the market or talked to investors about over the last six months: the active LED cables, the Weaver interconnect chiplets, and the ZeroFlap Optical Transceivers.
Yeah, so if we start with AECs and look out into the future, into 1.6T ports, our AEC product family will reach 5 m. And so when we think about the TAM expansion, we think about it in terms of length. Starting from AECs, the next product that will deliver, at its core, solid reliability equal to copper is active microLED cables, or ALCs. The reach will be extended from 5 m to 30 m, and the idea is that would cover any kind of application in a given row. And then ZF Optics, ultimately the product I described that we worked on with Oracle, will cover up to 2 km, or whatever the max distance is within a data center. The goal there was also reliability.
And so if we look at just the length of connection and the TAM expansion, we're really going from a relatively short connection of 5 m, obviously huge volume, a multi-billion-dollar market, and expanding to any length within the data center. And then from an OmniConnect standpoint, that's really going down the curve from a length standpoint. We're talking about solving die-to-die problems, and then system-level problems, moving from millimeters to 10 in. It's creating a super small SerDes that's very low power, maximizing beachfront density, and improving the reach to 10 in. It gives GPU designers an incredible amount of flexibility. And you can build it so it's future-enabled. So it's really a game changer. That's probably the most complex system sale of the things that we've announced, but potentially the largest market opportunity.
As you think about these new products as they come to market, do you expect to largely sell them to your existing customer base, hyperscalers, or does it open up new customer opportunities?
I think for sure we're going to sell these products to our existing customers. But to the point you're making about the dynamics in the industry, we're seeing lots of opportunities with customers outside of what I would consider the six hyperscalers. And in a sense, these products may appeal most to those companies because they're not going to be resource-rich. We're offering off-the-shelf solutions that enable them to stand up a cluster, have it come up in weeks instead of months, and then ultimately operate at the highest levels of productivity. So it's a perfect fit for companies that are going and finding the CapEx and want to enter the game. I see many opportunities, really around the globe, beyond the hyperscalers.
Great. Last big-picture question: your outlook for data center AI spending. There have been a lot of concerns in the investment community about circular investments in the AI ecosystem. A lot of those center around NVIDIA, OpenAI's ability to fund its 26 GW commitments, and then Oracle and debt financing being used to fund some of these investments. What are you hearing from customers? What's your outlook? Do you think we continue to see strong growth this year?
Yeah, I think the theme from my perspective during the last three months has been finding the signal through the noise, and you and I talked about that. I appreciate you putting that in your report coming out of CES. What we do as a business is find the signal through the noise. If you think about transmitting a signal over a copper wire, by the time it gets to the other end, it's got noise along with the signal, and the DSP basically removes the noise. If you look at the market-level question, the financing of the demand is going to happen one way or another. The fact is that the demand is there; that's really what we're focused on. I don't think there's any kind of circular house of cards being developed.
So again, I'm probably not the right guy to comment on it. We're responding to our customers. Really, over the last 12 months there has been a strengthening in demand, and my expectation is that continues going forward.
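To make the signal-through-noise analogy concrete, here is a toy sketch: an NRZ bit stream with channel noise, recovered by simple per-bit averaging and slicing. Real AEC DSPs use far more sophisticated equalization (FFE/DFE); everything below is illustrative only.

```python
# Toy "signal through the noise" demo: an NRZ bit stream plus channel
# noise, recovered by per-bit averaging and slicing at zero.
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 64)                       # transmitted bits
samples_per_bit = 8
tx = np.repeat(bits * 2.0 - 1.0, samples_per_bit)   # NRZ levels of +/-1
rx = tx + rng.normal(0.0, 0.5, tx.size)             # channel adds noise

# Average within each bit period, then slice at zero to recover the bits.
rx_bits = (rx.reshape(-1, samples_per_bit).mean(axis=1) > 0).astype(int)
print("bit errors:", int((rx_bits != bits).sum()))
```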
Excellent. Wanted to shift now to active electrical cables. You've obviously seen tremendous growth in this market over the past couple of years. But I'll ask a similar question: where do you think we are in industry adoption of active electrical cables? The classic what-inning-are-we-in question.
Oh, it's only early innings. I learned that a long time ago.
Yeah, that's true. I've never heard anyone say eighth or ninth inning.
If anybody says middle innings, there's going to be a sell-off, so we're definitely in the early innings, so I appreciate the question. Next. I'm kidding. I'm kidding. The bull case on this is that I can only point to one customer that is fully deploying AECs in every part of the network that they can. For every other customer, I see great growth opportunities from a swim lane standpoint: front end, scale-out, switch racks, routers, and ultimately scale-up, which is a network that's going to emerge as a large-volume opportunity, and AECs will definitely apply there as well.
And I can make the case that as the market goes from 50 Gbps lanes to 100 Gbps lanes to 200 Gbps lanes, there's going to be growth in the market for sure. I don't think there's any question. We see these new companies that are emerging, and we're starting to take material orders from those outside of the hyperscaler community.
Got it. So outside of perhaps one hyperscaler, still lots of room for further adoption.
Many hyperscalers.
The hyperscaler customer base. What are you seeing in terms of attach rates per XPU as we get to next-generation platforms?
It's a bit hard for us to lock that in. There are industry analysts and forecasters that focus on that type of summary. You can talk about different levels within the network and exactly how many of those levels are using AECs. But I think we can comfortably talk about 1.5x all the way up to 5x or more connections per GPU.
Okay. That's a good number.
Depending on how they're building out their networks.
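For a sense of scale, here is back-of-the-envelope math on those attach rates for a hypothetical 100,000-GPU cluster; the cluster size is an assumption for illustration, not company guidance.

```python
# Back-of-the-envelope attach-rate math for an illustrative
# 100,000-GPU cluster (assumed size, not company guidance).
gpus = 100_000
for attach_rate in (1.5, 3.0, 5.0):
    print(f"{attach_rate}x -> {int(gpus * attach_rate):,} AEC connections")
# 1.5x yields 150,000 cables; 5x yields 500,000, for a single cluster.
```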
And then two of your competitors, Marvell and Astera, have both sort of said that as the market transitions to 800 Gbps active electrical cables, they become more competitive, or that you'll see greater diversification of the supplier base. Any thoughts from Credo's perspective on the 800 Gbps transition and how you're positioned for 800 Gbps cables?
So we're talking about the transition to 800 Gbps.
To 800 Gbps, yeah.
Yeah. So I mean, if we look at our business right now, we haven't done the math exactly, but I would say half or more of our AEC connections are at that 800 Gbps level. At both xAI and Meta, we're connecting NVIDIA gear with 800 Gbps solutions. So for us, that transition is well along the way. If we think about the competitive landscape, I think the market is big enough at this point. We can point to different market forecasters talking about a multi-billion-dollar market, even over $5 billion as they see it today. The market's big enough to support multiple players. And I don't think there's any one generation I can point to where our competitors become more competitive.
And fundamental to our strategy, I think we're really the only company in the industry that's taking ownership at a system level. I can tell you, from a competitive perspective, what we're focused on is delivering to our customers: delivering first, qualifying first, ramping first, having the ability to develop 20 SKUs in parallel and qualify those 20 SKUs in parallel internally before we go into a customer call. And the supply chain management part of it is also a huge advantage. So if you're going to do the whole job, you've got an advantage over a company that designs a chip and basically sells that chip, and somewhat of a reference design, to another company that's got to own it fully.
So I think the way we're approaching the market, and ultimately the way that we're working with customers, has proven out to cause us to be preferred. So I can't point to any one competitive aspect, other than supplier diversity, that would drive any kind of market share shift.
Okay. How about thoughts on the AEC versus ACC positioning? I think one large hyperscaler is set to deploy ACCs this year on a next-generation platform. Do you see growing adoption of ACCs? Do you think it's a fairly niche product?
Yeah. So if we look at the underlying technology and compare AECs to ACCs, ACCs are using, from a semiconductor standpoint, a device called a redriver, or you could think about it as an amplifier. At each end of the cable, what's happening is an amplification of the signal and an amplification of the noise, and it's basically passed down the wire to the switch or the NIC that it's interfacing with. So fundamentally, from a signal integrity perspective, that's a challenged approach.
And so if a customer owned both ends of the connection, and they were willing to go through the gymnastics of having customized or optimized firmware for every single port on that switch, because the length of the copper is different depending on where you are on the board, if they're willing to go through those gymnastics and fight through it, it's possible. Each one of these hyperscalers is a different market. Some of them go extremely vertical and want to take that ownership. But I don't see this as a long-term solution that's going to threaten the AEC market at all, especially as you go faster. You also lose interoperability with ACCs. And from a signal integrity standpoint and a pure reliability standpoint, AECs are in a different class.
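A toy numerical contrast of the redriver-versus-retimer point: a redriver's gain applies to signal and noise alike, so SNR cannot improve, while a retimer regenerates the bits and drives out a clean signal. All amplitudes below are arbitrary illustrative values.

```python
# Toy contrast of an ACC redriver with an AEC retimer/DSP.
signal, noise = 1.0, 0.2       # amplitudes at the far end of the copper

gain = 4.0                     # redriver gain applies to both terms
redriver_snr = (signal * gain) / (noise * gain)   # unchanged: 5.0

retimer_residual_noise = 0.02  # assumed small noise from regeneration
retimer_snr = signal / retimer_residual_noise     # 50.0

print(f"redriver SNR: {redriver_snr:.0f}x, retimer SNR: {retimer_snr:.0f}x")
```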
Okay. You mentioned 1.6T optical modules are starting to ship this year. When do you think we see 200 Gbps per lane, or 1.6T, active electrical cables starting to ship?
I would say in that time frame. For those that have followed the market for many years, the transition between speeds is always slower than what the forecasters are predicting, or even what some of the companies themselves are predicting. If we were to go back two years and listen to some of the noise around 200 Gbps per lane, they would have had you thinking that by 2026, 100 Gbps would be over and the market dominated by 200 Gbps. There's talk about 400 Gbps now. But ultimately, our belief is that the market really takes off when the ecosystem is in place, and we're not quite there, so you've heard about delays consistently over the last 18 months.
The bottom line is it'll sort itself out, and we'll be ready for the market with super competitive solutions across the entire portfolio I just talked about. My view on the crossover between 100 Gbps and 200 Gbps lanes, meaning when there will be a higher volume of 200 Gbps lanes in a given year, is out in the 2028 time frame. And ultimately, there are very large hyperscalers that are still at 50 Gbps per lane now and are talking about a transition to 100 Gbps, so 200 Gbps is not even in any kind of near-term view for them. Again, each one of these hyperscalers is a different market, so the transition will be important. It'll be great for the connectivity industry, given that the more difficult things become, the better it becomes for the companies that can actually clear the hurdle.
Great. And then lastly, on the AEC side, you've historically received long-term forecasts from many of your major hyperscaler partners. Can you talk about the length or the duration of some of those forecasts? Obviously, they give you some pretty good visibility into the business.
Sure, sure. So it's very standard for us to get 12-month forecasts. So every month, you can imagine, a month gets added onto the 12-month forecast. We've got a couple of customers that are going out further, 24 months, and even one that goes beyond that. I think forecasts are great, especially the shorter-term ones, like 12 months. But I think there's a feeling in the industry that, from a supply chain standpoint, this is going to be a differentiator long term, and that the industry's growing so fast that we can point to different areas that might become a ceiling on overall deployments. We can talk about wafers. We can talk about memory. We can talk about lasers.
And so customers are going to great lengths to make sure that we understand what they need, and we're doing the planning and making commitments on our end so that we can fulfill their demand. So it's really a great dynamic. I mean, I can think back on my career when people didn't want to give any kind of forecast beyond 20 weeks. So it makes it easier from a planning standpoint.
Excellent. I wanted to shift over now to the ZeroFlap Optical Transceivers. You gave us in your opening comments a description of what the product is, the telemetry. Clearly, they're differentiated from a commodity 800 Gbps or 1.6T optical module, but maybe spend a minute on how that translates in terms of the ASP you might get for ZeroFlap Optics versus the more commodity modules, and any impact on your gross margin as you start to ramp ZeroFlap Optics.
Yeah. So all of these new product announcements that we made on our last earnings call, they all fit into that category of 63%-65% gross margin long term. These innovative solutions really bring that value as we've gone up the stack in terms of what we're providing to the customer. ZeroFlap Optics are certainly that. These are not commodity transceivers. We're not competing for commodity transceiver sockets. So there is an uplift in ASP, but most importantly, from a gross margin perspective, long term, 63%-65%, it fits right in that pocket.
Great. And then can you discuss sort of how customer engagements are going with ZeroFlap Optics? I think you'd mentioned Oracle was sort of the partner you developed it with. Are you expanding beyond that lead customer? And then thoughts on ramp timing for ZF Optics?
Yeah. So we've definitely been successful in seeing interest from several other customers. I would say that it's clear who the lead is, and I think there's a second one that's strengthening. But generally, the product solves a real problem for anybody building clusters where the connections require laser-based optics. From a ramp standpoint, nothing's changed since our most recent call. I expect fiscal 2027 is when we'll see material revenue, the first year of ramping, and then we'll take off from there. But I think the market size is quite big. I can easily make a case that it's a multi-billion-dollar market, especially if you think about just the size of the overall commodity module market for 800 Gbps.
And I don't want to be too cute, but when you say meaningful revenue, some people think meaningful has to be the SEC definition of 10% of revs. When you say meaningful, what do you mean? Is that tens of millions of dollars? Is there any way you can frame how you're thinking about it?
I think it's beyond that.
Beyond, okay.
Yeah, I think it's beyond that.
Great. Moving to active LED cables, maybe just spend a minute talking about the development of that microLED-based technology. What did Hyperlume bring to the company with that acquisition?
Yeah. So the reason that I'm very bullish on the product category is that when we bring this to market, it's going to have reliability equal to copper. It'll have power efficiency equal to AECs. It'll have equal cost efficiency. So the three main drivers of why the industry has adopted AECs so broadly will come with ALCs. What you get in addition is longer length and a thinner cable. The acquisition of Hyperlume was critical to marrying what we do from an electrical perspective with what they've developed from a microLED perspective. And from here, as I've mentioned before, it's really an execution play. It's bringing technologies that have been matured, merging them into this cable system, and bringing it to market. So my expectation is that'll happen over the next year.
Fiscal 2028 is when we'll see first volumes.
And just sort of as you think about the TAM expansion opportunity for ALCs, how does that TAM compare to your current AEC TAM? And is there any AEC business that you see being cannibalized as ALCs ramp?
Yeah. So we can talk about the areas of TAM within AI and the data center, the different parts of the network. In addition to front end, scale-out, switch racks, and routers, I would add that scale-up is going to be a big opportunity. The routing densities there are going to be much higher than what we're seeing on scale-out, and so AECs will definitely be preferred for short connections. ALCs expand the TAM significantly. And as we look at that TAM generally expanding, that's why I feel comfortable saying that the market opportunity is at least double what we see today for AECs.
Now, if I were to have a customer tell me that they really like the fact that you can make a thinner cable, they really like going to the optical form factor, and they want me to build 2 m cables, I'm agnostic. I'll build that. I'll build an ALC if that's what's preferred. So in a sense, you could make a case that the ALCs might cannibalize some of the market for AECs, depending on the customer.
Okay. We'll get your thoughts on CPO in a second, but can you deliver an active electrical, sorry, active LED cable in a non-pluggable form factor? I mean, does it have to be a pluggable? Could you bring it into sort of a chiplet, bring it closer to a switch or an XPU?
Yeah, great question. Yes, absolutely. In a sense, the other companies in the market are really focused on that opportunity. For me, that's out on the horizon; that's beyond this point A to point B execution. That market opportunity is going to come in several years, not next year, not the year after that. The great thing about ALCs is that doing the work and bringing the technology to market hardens the solution. And the idea that it has to be in a pluggable? Absolutely not. On our longer-term roadmap that we've talked about with OmniConnect is a gearbox that interfaces electrically with the GPU, and then the gearboxing happens to a bundle of fibers. And the difference with that kind of an NPO solution is, again, no lasers.
So when you compare it to CPO or NPO, when you talk about lasers, again, it doesn't clear the hurdle on reliability. And that's the promise of microLED long term. And that's where you can make a case that OmniConnect becomes the largest opportunity that we've got long term, meaning on the three- to five-year timeline. But we're more focused on the execution play in the next 18 months.
Got it. Bill, just on Hyperlume, what are the odds that we see a standards-based committee for microLEDs to disaggregated HBM? And are you guys going to be involved with that? HBM5, HBM6, would that be possible?
Yeah, absolutely. I mean, there's no bright lines on where we will be or where we will go. But if that opportunity emerges, you can count on the fact we'll be involved.
[audio distortion].
Yeah. Yeah, count that as additional opportunity.
All right. Moving to OmniConnect and Weaver, talk about the first instantiation of that product, the memory fan-out gearbox application, and I think you've announced your first partner with Weaver.
Right, right. So we've talked a lot about training. Everybody understands how massive that opportunity has been and will continue to be. But inference is just starting to happen, and happen in a big way. And the challenges in inference are different from the challenges in training. Many people talk about the memory wall. If we baseline on a product that's being talked about coming to market, this Rubin CPX, and we look at the size of memory being 128 GB, you can zoom in and ask, why is it 128 GB? Look at the beachfront area on the GPU and how much of the beachfront is taken up by the SerDes and the PHYs that connect that 128 GB. It's about 75% of the beachfront.
There's no path to 1 TB just from the standpoint of the die area, the beachfront area. And then if you look at the actual memory, it can only be about an inch away from the GPU based on the reach of those SerDes. So you're limited by two things physically: the beachfront area, because it's not very dense from the standpoint of throughput per millimeter on the die edge, and the reach, so you can't get the DRAM very far away. You're limited to packing it around the GPU. There are markets like real-time AI-generated video that require large models. And if you're building an inference machine and you're not able to fit the entire model into DRAM, your performance is going to be horrible because you're going to be paging in and paging out of DRAM.
And so if you can unlock those limitations by designing a SerDes that's got super high throughput per linear millimeter and a reach of 10 in, or 10x what's out there today, you could imagine what's possible. With our first partner, what they announced they're bringing to market is an inference engine with 2 TB, 15x the memory or something on that order compared to what is coming for smaller models. So we think that is really a breakthrough. Another market we can talk about is automotive, where there's no paging in and paging out. So if you're trimming your models because you don't have enough DRAM, that goes directly to the quality of full self-driving and safety.
The idea of designing an inference solution that has a future-enabled ability to add more memory as your model grows would be game-changing. And having the ability to move from, say, LPDDR5 to LPDDR6 when the market changes, all you'd have to do is change the gearbox. That would be a completely different way to think about the challenges of that application as the models grow, which, by the way, are growing exponentially if you look at the data that's publicly available.
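Working through the capacity arithmetic in that answer with the figures mentioned (a 128 GB baseline and a 2 TB partner system), the uplift lands near the 15x Bill cites:

```python
# Worked version of the memory-wall arithmetic above, using the figures
# mentioned in the discussion (128 GB baseline, 2 TB partner system).
baseline_gb = 128
partner_gb = 2 * 1024
print(f"capacity uplift: ~{partner_gb / baseline_gb:.0f}x")   # ~16x

# The other constraint discussed is reach: a ~10x longer SerDes reach
# (10 in instead of ~1 in) lets DRAM sit farther from the GPU, so
# capacity can scale without consuming scarce die-edge beachfront.
```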
Right, right. And as you think about these chiplets, what would the dollar content opportunity be if you attach to an inferencing processor or a high-end autonomous platform?
Yeah. So for this first gearbox, the Weaver device that's breaking through the memory wall, it really depends on how much memory is being deployed, right? With our first application, I would say it's well over $1,000 of content based on the system that they've defined.
From a design perspective, this requires the XPU or ASIC provider to design to your chiplet interface. So it's got to be a very tight co-design process. Maybe just talk about how long that design cycle takes before you'd be able to see the chiplets ramp.
For our first customer, it feels like a two-year type of timeline to production. That's why we signaled that fiscal 2028 is when we expect first revenue. It is a much more collaborative design activity compared to the system solutions, where literally I can take an order and you can put those into a deployment in weeks, not years. If you think about the concept of OmniConnect, everybody seems to be familiar with NVLink Fusion, where if you add NVIDIA SerDes to your GPU design, that becomes a gateway to connecting with the NVLink ecosystem. Same thing here: if you put our SerDes in your GPU design, that becomes a gateway to all the gearbox chips that we're going to make. We'll make gearbox chips for memory first.
We'll make gearbox chips for scale-out, for scale-up, and ultimately near-packaged optics. And so you can imagine just the game changer it would be if you could make a GPU design that's future-enabled, where you wouldn't have to make your decision on I/O until late in the process. And you can be really prescriptive with the gearboxes you want us to design.
So you could have an electrical version of the gearbox. You could have an NPO or a CPO version of the chiplet.
Yeah. So very simply, if we talk about scale-out, maybe the first gearbox we deliver is 200 Gbps per lane capable. You could run it at 100 or 200, but when 400 Gbps is a reality, I'll deliver a new gearbox that you can connect to the same XPU to enable that functionality, that speed increase. So in a sense, that's why I think about it as future-enabled. You don't have to spin the GPU, and that's game-changing.
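A conceptual sketch of that future-enabled pairing, a fixed host-side interface with swappable line-side gearboxes; all class names and rates below are hypothetical, not Credo's actual interface definitions.

```python
# Conceptual sketch of the "future-enabled" idea: the XPU's host-side
# interface stays fixed while the line-side gearbox is swapped to raise
# the per-lane rate. All names and rates here are hypothetical.
from dataclasses import dataclass


@dataclass
class Gearbox:
    name: str
    lane_gbps: int  # line-side per-lane rate this gearbox enables


class XPU:
    HOST_IF = "die-to-die SerDes"  # fixed; no GPU respin required

    def attach(self, gearbox: Gearbox) -> str:
        return f"{self.HOST_IF} -> {gearbox.name} @ {gearbox.lane_gbps}G/lane"


xpu = XPU()
print(xpu.attach(Gearbox("scale-out gen1", 200)))
print(xpu.attach(Gearbox("scale-out gen2", 400)))  # same XPU, faster lanes
```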
Last question for me, just your thoughts on capital. You did a financing recently, I think putting an extra $750 million or so of cash on the balance sheet. You guys are nicely profitable with, I think, 45%+ net margins, so very cash flow positive. Just thoughts on capital allocation and M&A. Obviously, you've done a couple of deals recently.
Yeah. So we closed, as you mentioned, a $750 million financing. And with Hyperlume, you saw us do our first acquisition as a company. There may be others of similar size in our future, more tuck-in in nature, certainly not what I would call transformative, but very adjacent to where we are today. And with the close of that ATM, we preserve a lot of strategic flexibility for any opportunities that may present themselves to accelerate some of the areas that we're going after.
Excellent. We've got a couple of minutes left. So I figure I'll open up to the audience for any questions. Any takers? All right. Well, we'll wrap it here then. Bill and Dan, thank you very much for joining us at the Needham conference. Really appreciate it.
Thanks so much.