Credo Technology Group Holding Ltd (CRDO)

26th Annual Needham Growth Virtual Conference

Jan 18, 2024

Quinn Bolton
Semiconductor Analyst, Needham

Welcome, everybody, to the third day of Needham's Annual Growth Conference. My name is Quinn Bolton. I'm the semiconductor analyst for Needham. It's my pleasure to host this fireside chat with Credo. The company is a leading provider of secure, energy-efficient, and high-speed interconnect solutions that target every connection in the data center. We see the company as a strong beneficiary of data center spending and global investments in generative AI, and we have named Credo our top pick for 2024. Joining me from the company are Bill Brennan, President and CEO, and Dan Fleming, CFO, on stage; Dan O'Neil, VP of Corporate Development, is here in the audience as well. Bill, Dan, Dan, thank you for joining us.

Bill Brennan
President and CEO, Credo

Thank you.

Dan Fleming
CFO, Credo

Thank you.

Quinn Bolton
Semiconductor Analyst, Needham

Since you're still fairly new to the public markets, to the extent there are a few folks in the audience who may not be familiar with the story, do you wanna spend just a minute or two giving us an overview of Credo and the IP business, as well as the four product lines you're in today?

Bill Brennan
President and CEO, Credo

Sure, absolutely. Yeah, so I appreciate the introduction. We're coming to the end of our sophomore year as a public company, so still relatively new. Credo is a pure-play, high-speed connectivity company. The data infrastructure market in general is going through a revolution, in the sense that there really is an exponential increase in bandwidth demanded by the data center as well as other places within the industry. Our company focuses primarily on the Ethernet market, and that's the protocol that is ubiquitous within the switching network, everything above the NIC within the data center. So our strategy was a little bit different in the sense that, being a connectivity company, we didn't just see the world as optical.

We didn't see the world as just certain copper connections. We really focused on all of the Ethernet connections in the data center market. And when we talk about our product lines today, they're based on the different connections that you'll see. So we're very well known for pioneering the active electrical cable market, which has become a new product category that's de facto for making short in-rack connections in the Ethernet space. We're also very well known for our optical DSPs. So we provide electrical components for optical modules: optical DSPs, TIAs, and laser drivers.

We build line card PHYs, which typically sit on switch line cards, making either backplane connections or front-panel connections where there's a need for encryption. We build chiplets as well; this is kind of a new emerging space. We've been working on it for more than five years, Tesla being one of our lead customers. That's gone into production, and now the world is really talking about chiplets as an important category long term. Those are our four product areas, and then we also sell our IP. Our SerDes IP is core to the company. I'll mention that a serializer/deserializer circuit, abbreviated as SerDes, is really the core technology. It's ubiquitous. Every connection over any kind of wire is going through a SerDes.

And so all of our products are built on top of this differentiated SerDes IP platform that we've built. But we're also selling our IP to companies that are building chips that we wouldn't build, a switch chip, for example, or a GPU device like Tesla is building. We've been in that IP business for more than 10 years now, and it's really served us well in the sense that it puts us at the forefront of new developments. So that's a good summary to start.

Quinn Bolton
Semiconductor Analyst, Needham

Yeah. No, that was great. I'm gonna jump right into the biggest part of the business, which is the active electrical cables that Credo invented.

Bill Brennan
President and CEO, Credo

Sure.

Quinn Bolton
Semiconductor Analyst, Needham

This solution was first to market. Your largest customer, Microsoft, has been a big buyer of active electrical cables, or AECs. They started off in the general compute network, but late last year, they introduced their Maia 100 accelerator racks, which brought AECs into the back-end networks of an AI rack. And so talk about the opportunity as you move from the general-purpose front-end network to the AI back-end network, and maybe use that rack that's publicly-

Bill Brennan
President and CEO, Credo

Sure

Quinn Bolton
Semiconductor Analyst, Needham

... out there from Microsoft as an example of the kind of content increase you'll see as you move into the back end.

Bill Brennan
President and CEO, Credo

Sure, sure. So I will say that the way we think about the networks: when we talk about front-end network connections, traditionally that has been the network connection between servers, which have typically been general compute. When we look at AI appliances, the front-end network is the same, so there's a front-end network connection, and it typically looks like a single port that connects from a NIC to a top-of-rack switch. The AI application has really emerged in the past year; it's, by the way, something we've been working on with customers for several years. So 2023 is the year we'll all remember. That's when AI happened, you know, from a production standpoint.

But if you look at an AI cluster and at the connectivity challenge, it's a much greater challenge than just connecting to the front-end network. Every appliance does connect to the front-end network, but in a cluster, the key to the application is creating a point-to-point network between all of the GPUs, basically forming a neural network with supercomputer functionality, with all of these GPUs acting as one compute unit. To do that, you need to network the GPUs, and this is what the industry refers to as the back-end network. It's a network that's dedicated to the cluster. If you have 10 or 20 racks in the cluster, it's a network that serves all of the GPUs in that cluster.

And so if we look at a typical general compute server, a single-port front-end network connection, and compare that to, say, an AI appliance, the one that Microsoft publicly introduced, those AI appliances have 12 back-end ports. So there's a single port for the front-end network connection, and there are 12 dedicated ports that serve the back-end network. And when we think about the sheer bandwidth, think about a 200 Gbps connection to the front-end network, and then 12 ports of 400 Gbps dedicated to the back-end network. In terms of bandwidth, that's 200 Gbps versus 4.8 Tbps.

That speaks to the opportunity for connectivity companies, but it also speaks to the challenge of building these networks. If you look at that one example, it's a single AEC connection for the front end and six AEC connections for the back end; two ports are served by one AEC on each appliance. So the opportunity is really one for one: for every GPU, there is an AEC that will connect to the top-of-rack switches that are dedicated to the back-end network.
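The per-appliance arithmetic walked through above can be sketched in a few lines of Python. The figures are the ones quoted in the conversation; the variable names are purely illustrative.

```python
# Per-appliance bandwidth comparison for the Microsoft AI appliance example
# quoted in the conversation (illustrative figures).

FRONT_END_PORTS = 1    # single NIC-to-ToR front-end connection
FRONT_END_GBPS = 200   # 200 Gbps front-end port

BACK_END_PORTS = 12    # 12 dedicated back-end ports per appliance
BACK_END_GBPS = 400    # 400 Gbps each

front_end_bw = FRONT_END_PORTS * FRONT_END_GBPS   # 200 Gbps
back_end_bw = BACK_END_PORTS * BACK_END_GBPS      # 4800 Gbps, i.e., 4.8 Tbps

# Two back-end ports are served by one AEC, so 12 ports -> 6 AECs per appliance.
aecs_per_appliance = BACK_END_PORTS // 2

print(front_end_bw, back_end_bw / 1000, aecs_per_appliance)  # 200 4.8 6
```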

Quinn Bolton
Semiconductor Analyst, Needham

Okay.

Bill Brennan
President and CEO, Credo

Tremendous opportunity, you know, from a connectivity standpoint.

Quinn Bolton
Semiconductor Analyst, Needham

As the speeds go up, I assume the ASPs go up as well.

Bill Brennan
President and CEO, Credo

Well, I would say that core to our business over the last 10 years that I've been involved, we've always looked at the bandwidth curve, moving from single-lane speeds of 10 Gbps to 25, to 50, to 100, and we're even working on 200 Gbps now. The AI application and the bandwidth required by this back-end network are really accelerating the need for faster speeds, and that's wind in our sails. And I know you like to talk about ASPs, and I will say that-

Quinn Bolton
Semiconductor Analyst, Needham

I am. Yeah.

Bill Brennan
President and CEO, Credo

... the technical hurdle of delivering the leading-edge solutions is a much higher hurdle, so that plays into our advantage. And of course, there's an opportunity to have ASPs that are favorable.

Quinn Bolton
Semiconductor Analyst, Needham

Yeah. Well, one of the things that we heard, or at least I learned at dinner last night, was that in addition to the AECs in the rack connecting the GPUs back to the switches, there may be an opportunity on the switch side if you look at, like, a super pod or a cluster of multiple racks. And so there may be additional opportunities that investors don't realize play in that AI network. Maybe spend a minute just talking about what you could see in the switches that connect all of, you know, a 16,000-GPU cluster.

Bill Brennan
President and CEO, Credo

Great, great point. So when we think about the connections that we're making in an AI appliance rack, let's say there are 48 GPUs, that's 48 AECs connecting to the switches, and they populate basically half the switch ports. And what are the other half of the switch ports connected to? There's a dedicated leaf-spine switch network that applies to a cluster. If you think about the connections, the other half of the ports in the appliance rack are all optical connections that get pulled over at row scale, something on the order of 10 to 20 meters, and they're connecting to leaf-spine switches.

In the case where somebody goes forward with a disaggregated switch rack, so they're basically building switch racks, there's a large AEC opportunity there as well. And if you look at cluster scale, it basically tracks to another one-to-one per GPU. So in the case of a 10-rack cluster, you would have 480 AECs in the appliance racks, 48 per rack in this example that I'm talking about. And then in the two dedicated switch racks, there would be 256 AECs on each rack. So roughly a one-to-one in addition to the one-to-one in rack.
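The 10-rack cluster tally described here can be checked with a short sketch; the counts are the ones quoted in the answer, and the names are illustrative.

```python
# Tally for the 10-rack cluster example: one AEC per GPU in each appliance
# rack, plus two disaggregated switch racks at 256 AECs each (figures as
# quoted in the conversation).

appliance_racks = 10
aecs_per_appliance_rack = 48   # one AEC per GPU, in-rack

switch_racks = 2
aecs_per_switch_rack = 256

in_rack_aecs = appliance_racks * aecs_per_appliance_rack   # 480
switch_rack_aecs = switch_racks * aecs_per_switch_rack     # 512

# Roughly another one-to-one per GPU on top of the in-rack one-to-one.
ratio = switch_rack_aecs / in_rack_aecs

print(in_rack_aecs, switch_rack_aecs, round(ratio, 2))  # 480 512 1.07
```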

Quinn Bolton
Semiconductor Analyst, Needham

So in that switch rack, the AECs are almost replacing the backplane for that local connectivity.

Bill Brennan
President and CEO, Credo

Yeah. Another way of deploying one of these leaf-spine switch racks would be in a chassis form, where you have the same number of switches all interconnected within the chassis. And you're right, the parallel is that the backplane connections made within a chassis become in-rack connections in a switch rack.

Quinn Bolton
Semiconductor Analyst, Needham

Got it. I wanted to move to your second hyperscaler that's announced they'll be adopting AECs, which is Amazon. I think you have started to ship in their general compute with a NIC-to-ToR application. But it sounds like at re:Invent recently, they also introduced an AI appliance that may have similar content, a very similar structure to what we just discussed with Microsoft. But maybe spend a minute. When do you see... You know, Amazon revenue's been single-digit millions, kind of $3 million to $5 million. When does that really start to take off?

Bill Brennan
President and CEO, Credo

Yes. Yeah, you're right. We've been working with Amazon for quite a while now in building out their next-generation platforms and providing that in-rack interconnect. On the question of revenue timing, what we've signaled in the past is that in our fiscal 2024, which ends in April, we expect the revenue that you would expect to see during early production and qualification pilots. Material in a sense, but we don't expect to hit the linear ramp until our fiscal 2025.

We've also signaled beyond that that we see a real inflection point in the second half of our fiscal 2025.

Quinn Bolton
Semiconductor Analyst, Needham

Okay.

Bill Brennan
President and CEO, Credo

I think we're quite well positioned.

Quinn Bolton
Semiconductor Analyst, Needham

Okay, good. On the recent earnings call, you mentioned a third hyperscaler looking at 400G AECs, which is a nearer-term opportunity, and a fourth hyperscaler looking at 800G AECs, maybe a little bit further out. But to sort of conclude our discussion of AECs, maybe address those couple of opportunities as well.

Bill Brennan
President and CEO, Credo

Yeah. I'm not in a position to give too much specific information, but I'll say for the third hyperscaler, the application is a switch rack application, so a disaggregated chassis. For the fourth customer we're working with, it's really a connection that's made within servers.

Servers to top-of-rack.

Quinn Bolton
Semiconductor Analyst, Needham

Okay.

Bill Brennan
President and CEO, Credo

And-

Quinn Bolton
Semiconductor Analyst, Needham

A NIC-to-ToR kind of application?

Bill Brennan
President and CEO, Credo

NIC-to-ToR application.

Quinn Bolton
Semiconductor Analyst, Needham

NIC-to-ToR. Okay. The last question for me on AECs: describe the competitive landscape. You came in as the first player in this market, and you probably have effectively a 100% share. We've seen with Marvell in the optical DSP space that the incumbent tends to have pretty high share for a pretty extended period of time. How are you looking at competition as we move into AI? I assume at some point you'll see some competition, but how should folks think about the competitive-

Bill Brennan
President and CEO, Credo

Sure

Quinn Bolton
Semiconductor Analyst, Needham

... landscape and, and when competitors may start to... Yeah.

Bill Brennan
President and CEO, Credo

Yeah, we fully expect to have competition long term. And that's really driven by the desires of our customers.

Quinn Bolton
Semiconductor Analyst, Needham

Yeah.

Bill Brennan
President and CEO, Credo

Right? So there's no hyperscaler, you know, that exists that says, "Hey, I really want a single source on everything." And so-

Quinn Bolton
Semiconductor Analyst, Needham

Let's do that.

Bill Brennan
President and CEO, Credo

... you know, for sure, the desire is to have more than one source. I would say that we've established competitive moats, and our goal is to continue to be able to deliver better solutions to the customers. Time to market is really critical. With the first solution to be qualified, it's been proven many times that there's a big advantage for the first mover. And so the way that we've organized is that we take ownership of the entire system. That's proven to be, I think, the key factor in establishing the product category. Very simply, you could say, "Hey, you're a chip company.

Why don't you sell a chip to a company that assembles copper cables?" But the big gap there is the resources required to bring this system to a point where it can be qualified in a hyperscaler application. The engineering depth has got to exist somewhere within the supply chain, and we chose to invest in that organization. It's proven to be a big advantage in the sense that we're running a tight iterative loop in development with the hyperscalers directly. That means we typically deliver first, it means we're first to be qualified, and that gives us a big advantage.

I think it'll play out that way in the future as well, because the customer relationship is so tight, with one clear owner of the system. And my expectation is it will continue to play out that way, especially as we open the door for innovation, for innovative ideas from the end customer on how to make their rack design more valuable. The other competitive moat that we're building is that typically, if you compare us apples to apples with our competition, we're gonna deliver solutions that are fundamentally lower power. And when you speak to the total cost of ownership of a solution, OpEx is a huge factor.

And we can deliver a solution that's significantly lower power; that's really driven by our core technology platform that I alluded to earlier, which is our SerDes IP portfolio. So that drives a competitive moat as well.

Quinn Bolton
Semiconductor Analyst, Needham

I wanted to move from the AEC market, where you are the incumbent, to optical DSPs, where you're positioned more as a disruptor.

Bill Brennan
President and CEO, Credo

Sure.

Quinn Bolton
Semiconductor Analyst, Needham

You've started shipping optical DSPs to a large U.S. hyperscaler at the 400G speed. Maybe talk about that deployment. Where are we in the ramp? And I don't know if you can give us a sense, at steady state, how large that might be, but is this a significant win, or is this just the first big win with more to come?

Bill Brennan
President and CEO, Credo

Yes, I think that any win at any of the U.S. hyperscalers is significant. This is our first U.S. hyperscaler to ramp, so we're really happy with the progress that we're making. I've mentioned before that the application is really a 400G application, and it's for an AI back-end network. So we feel good about that. I would say that the last quarter we reported was really the first significant quarter of the ramp, and for those that track it closely, I think they crossed the threshold of being a 10% customer for the quarter that we reported. So it can be significant.

I'd say that we're in the early innings of the ramp with that customer, and the key is to follow that with many next-generation solutions, to basically become the supplier of choice at that hyperscaler. I think we're doing quite well, in the sense that we've lined up another customer that we're in hot pursuit of, and we expect to crack the seal there in our fiscal 2025. And from there, we can talk about the blocking and tackling that happens on a daily basis to put ourselves in position at every one of the U.S. hyperscalers to become an optical DSP supplier through their module partners.

Quinn Bolton
Semiconductor Analyst, Needham

Maybe spend a minute talking about how you, not go to market, but how those designs are awarded. Hyperscalers, I think, usually green-light a couple of module vendors for a project. So are you selling to the module guy? Are you kind of pitching the module guy?

Are you ultimately engaging with a hyperscaler to be named as one of the DSP providers to the greenlit module vendors? How does it all work?

Bill Brennan
President and CEO, Credo

Yeah. So generally speaking, there's no question that the module suppliers want disruption in the supply chain. So there's an unlimited number of opportunities if we wanna pursue module vendors; they'll build modules with our DSPs. But the key link here is including the hyperscaler. It's gotta be a very tightly coupled decision that really involves the hyperscaler saying they will prioritize qualification and that they want the solution. That's really where we've been over the last couple of years, and that's very much the focus: any kind of development we're involved in has gotta have some sponsorship from the hyperscaler as the end customer. But our direct customer is the optical supply chain.

We're supplying the device to them, and they're supplying the module to the hyperscalers.

Quinn Bolton
Semiconductor Analyst, Needham

Got it. With your initial wins in optical DSPs at the 200G level with some of the Chinese hyperscalers, you got some awards, and it seemed like it went cold, but it seems like you've recently started to see some green shoots there. Does that look like it could be a meaningful revenue stream, or do you see most of the optical DSP business, especially the higher-speed stuff, being U.S.-based?

Bill Brennan
President and CEO, Credo

I think we're very focused on the higher speeds, and we're focused on the U.S. hyperscalers. I would say that the 200G port speed, and 400G with 50G lanes, will be popular for a long time in China. So I kinda view that as a possibility, but it's not something we're really baking in in a big way as we talk about the upcoming quarters.

Quinn Bolton
Semiconductor Analyst, Needham

Okay

Bill Brennan
President and CEO, Credo

... or years.

Quinn Bolton
Semiconductor Analyst, Needham

So, back in March at the OFC show, Andy at Arista made a pretty big splash pushing the idea of linear optics and removing the DSP entirely from the module. I think it's sort of come out that that could be a pretty challenging solution. But I think it was in December or November that Credo introduced the idea of a linear receive solution. So tell us, what is linear receive, what are the advantages, and how might it be able to save power in these networks?

Bill Brennan
President and CEO, Credo

Sure. Yeah, so it was kind of a big deal at OFC last year when Andy Bechtolsheim, really a visionary in the industry, talked about the 1.6T generation, 1.6T ports. A 1.6T port is 8 lanes of 200G. He was talking about the power challenge, and how the path that we're on to build these 1.6T modules is really unsustainable from a power perspective.

If you look at building an 800G optical DSP, you're looking at something in the 7 to 8 W range. If you're able to navigate the technical challenge of moving from 100G lanes to 200G lanes while keeping the power per bit the same, that means your 1.6T optical DSP will be in the 14 to 16 W range. The ceiling on an industry-standard connector, like a QSFP-DD or an OSFP, is 20 W. So the case was being made that you're not gonna be able to build the module, from a power perspective, unless something changes. And I think what he threw out to the industry was that the DSP is the highest power-consuming device in the module.

He said there were five companies on the show floor showing modules without a DSP for the 800G generation, and he kinda just left it at that. So it created a lot of activity during this year. Marvell did a great job in explaining to the industry why moving away from doing clock and data recovery, or DSP, at each end of the module connection is really not feasible for any kind of large percentage of the market.

I think the industry analysts that go deep on optical have agreed that maybe it's 10% of the market, at best, at the 800G level, if somebody owns each end of the connection and is willing to put the work in to make it even feasible from a signal integrity standpoint. But you lose interoperability, you lose IEEE compliance, and the signal integrity is questionable at best. I think they did a good job, and the analysts have agreed that it's a small part of the market. And you can make an argument at 1.6T, but technically, it's not feasible, right?

At 800G, that's a small percentage, and doubling the speed is not just a doubling of the technical challenge; it's multiple times harder at 200G lanes. Our approach was to come away thinking that, at a high level, the challenge is power, right? So if we could focus on an innovation that would enable us to maintain IEEE compliance, interoperability, much better signal integrity or raw bit error rates, and the same test points that are very well established in building an optical module, but lower the power by half, that would potentially be extremely meaningful to the industry.

If you think about an optical module, there's direction: there's traffic traveling into the module and out of the module. On the transmit path, there's a connection from the switch into the optical module, and we keep doing clock and data recovery there in the module before transmitting. But on the receive side, we're eliminating the DSP, eliminating the clock and data recovery, and passing the signal through to the switch on the other end.

And the argument is, if that switch SerDes is strong enough to do clock and data recovery and equalize the signal, then you're looking at a solution that reduces the DSP power by 50%, which changes the whole conversation about 1.6T. And it becomes very meaningful at 800G: it's huge power savings, and theoretically there's some cost benefit if you optimize.
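The power budget discussed here can be laid out as a back-of-the-envelope sketch. It uses only the figures quoted in the conversation (7 to 8 W for an 800G DSP, a 20 W connector ceiling, and a 50% DSP power reduction from linear receive); the variable names are illustrative.

```python
# Back-of-the-envelope module power budget from the discussion
# (figures as quoted; names are illustrative).

DSP_800G_W = (7, 8)      # ~7-8 W for an 800G optical DSP
MODULE_CEILING_W = 20    # QSFP-DD / OSFP module power ceiling cited above

# Doubling bandwidth at constant power per bit doubles DSP power.
dsp_1p6t_w = tuple(2 * w for w in DSP_800G_W)         # (14, 16) W

# Linear receive drops clock/data recovery on the receive path,
# cutting DSP power roughly in half.
linear_rx_1p6t_w = tuple(w / 2 for w in dsp_1p6t_w)   # (7.0, 8.0) W

print(dsp_1p6t_w, linear_rx_1p6t_w, MODULE_CEILING_W)
```

Even at the high end, the conventional 1.6T DSP alone consumes most of the 20 W budget before the optics are counted, which is the "unsustainable path" argument.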

Quinn Bolton
Semiconductor Analyst, Needham

Okay. How has the reception been? I know it was just introduced late last year, but what kind of reception are you hearing from either hyperscalers or module vendors as you go out and talk about this new sort of architecture?

Bill Brennan
President and CEO, Credo

So I think across the board, the feedback from module suppliers as well as hyperscalers is that the idea is actually a game changer. People believe that technically this type of thing is feasible, and potentially an answer for the 1.6T generation, and also potentially very meaningful for the 800G generation. We do have customers at the module level that are building with our solution, and we've got hyperscaler customers that are looking at eval boards and awaiting delivery of these modules. So we've seen an extremely good response to it. And for the 1.6T generation, the light's gone on for everybody that this is potentially an answer to the industry challenge.

Quinn Bolton
Semiconductor Analyst, Needham

Do you expect that we may see modules or at least demo modules at OFC this year?

Bill Brennan
President and CEO, Credo

That's what we're shooting for.

Quinn Bolton
Semiconductor Analyst, Needham

Okay.

Bill Brennan
President and CEO, Credo

It's a heavy lift, but that's what we're shooting for.

Quinn Bolton
Semiconductor Analyst, Needham

Perfect. I wanna move quickly to the chiplet business. You said this business was near 10% in your last quarter, so, like optical DSPs, it's starting to become meaningful as well. You mentioned Tesla as one customer, but maybe talk about the outlook for that business and your work with others to develop chiplets. Are these more custom, or is it more standard product? How do you see that business developing?

Bill Brennan
President and CEO, Credo

Yeah, so our approach on chiplets is to build based on customer need, but have it be a solution that we can sell to the industry. It's been really interesting to watch the creative developments that are ongoing as people develop system solutions in a package, going far beyond maximum reticle size. We can talk about CoWoS and the other packaging technologies that have been game changers in the industry. So the idea of building a connectivity chiplet is becoming a pretty popular thought, where it was kinda crazy when we started talking to Tesla five years ago about taking this approach.

It seemed like we were on our own in our belief that this could be a meaningful product category that would enable really creative solutions. But we are seeing others headed down that path, really driven by what's happening in the AI space.

Quinn Bolton
Semiconductor Analyst, Needham

I'm gonna stop here and just see if we have any questions from the audience. I have a number more I can ask, but let's see if anyone in the room has a question for management. No? Okay. I wanna move on to the line card ICs. You know, part of the discussion, I think, around the linear optics was the stronger SerDes in Broadcom's Tomahawk 5 chipset. To the extent that the chipset has a stronger SerDes, how does that potentially affect your line card IC business? Does it potentially remove the need for some of your retimers, maybe not gearboxes? You know, how do you view the impact of Tomahawk 5 on the line card business?

Bill Brennan
President and CEO, Credo

Let me get it right. The Tomahawk 5 has long reach SerDes?

Quinn Bolton
Semiconductor Analyst, Needham

Apparently, they can drive a signal over 4 m of AEC cable.

Bill Brennan
President and CEO, Credo

Okay

Quinn Bolton
Semiconductor Analyst, Needham

... in a lab, in a perfect setting.

Bill Brennan
President and CEO, Credo

It's possible. So, Broadcom has always had good SerDes, so I'm not exactly sure what to make of it, 'cause every generation, they've said they've got the longest-reach SerDes in the market. So I would expect them to do the same. I don't think it really changes the opportunity for us from a line card PHY perspective. There are a couple of applications I'll talk about.

One is when you've got a switch chassis, with connections that are made over the backplane. I don't think the PHYs that have traditionally sat on the backplane and helped make that connection are gonna disappear. And then, of course, the other big part of our business is encryption. When we talk about those switches that require security, where all data leaving is encrypted, that's not gonna be replaced either by a-

Quinn Bolton
Semiconductor Analyst, Needham

Okay

Bill Brennan
President and CEO, Credo

... by a strong SerDes.

Quinn Bolton
Semiconductor Analyst, Needham

So the SerDes in the Tomahawk switch, or any switch, doesn't have MACsec built in; you would need a discrete retimer to add that MACsec?

Bill Brennan
President and CEO, Credo

That, that's what we're seeing.

Quinn Bolton
Semiconductor Analyst, Needham

Okay.

Bill Brennan
President and CEO, Credo

Yeah.

Quinn Bolton
Semiconductor Analyst, Needham

Okay.

Bill Brennan
President and CEO, Credo

With our customer base.

Quinn Bolton
Semiconductor Analyst, Needham

... Can you talk about just the benefits? Obviously, we talked about how AI would drive the AEC business. Is there an uplift in the signal integrity, retimer, or gearbox business driven by AI as well?

Bill Brennan
President and CEO, Credo

Yeah, I think really the move to 100G-per-lane speeds feeds every part of our business, including line card PHYs. And I think encryption generally is becoming more important. Of course, in and out of the data center, everything is always encrypted, but as more edge deployments happen, I think there's a real need for security that maybe you wouldn't need within a data center. So I think we see that business as a growth opportunity.

Quinn Bolton
Semiconductor Analyst, Needham

Perfect. On the IP business, you've said long term it's 10%-15%-ish of sales. I think some time ago, you signed a larger license on the consumer side. I assume at some point that tails out of the revenue stream as you collect those milestone payments. How do you feel about being able to replace that revenue with new licenses for other SerDes IP?

Bill Brennan
President and CEO, Credo

Yeah, so there's a huge amount of activity that we're seeing on the IP licensing front, really driven, again, by all of the creative solutions being brought to market by so many people right now. SerDes is a key enabler, and we're very well known for IP that's more optimized, whether that's in a given process node or in cases where power is really important. So our funnel looks quite strong, really driven by all of these new applications. As it relates to our long-term model, I think the challenge there is that our product business is growing extremely fast, and we expect that to continue.

And so when you say 10%-15%, it means we've got to grow our IP business at the same rate to maintain that level. So if you project out in time, I think we'll probably fall to the lower end of that spectrum. But it's always gonna remain a very strategic part of our business. Being involved in developments that are really creative, innovative, and futuristic, it's important for us to have that visibility through our IP business so that our product business can be in line as those systems come to fruition.

Quinn Bolton
Semiconductor Analyst, Needham

One of the things we've heard is that Broadcom and Marvell, which I think both have good SerDes, will not license at the individual SerDes level because they're trying to win the entire SoC or ASIC platform. Does that open up opportunities for you, where you're willing to license the SerDes and don't feel like you have to take on the entire ASIC development? I assume that that's something that's-

Bill Brennan
President and CEO, Credo

Yeah. I mean, for sure. We're ROI driven, right?

If we can enable somebody that's trying to build their own IC, versus them being bundled in through an ASIC type of capability, we're not gonna go down the ASIC path ourselves. That's clear. But we will support those companies that wanna invest themselves, and doing a license there is absolutely a green light.

Quinn Bolton
Semiconductor Analyst, Needham

Okay. One of the things you showed at Open Compute was an AEC for the CXL standard. I know we're still in very, very early days with CXL.

Bill Brennan
President and CEO, Credo

Mm-hmm.

Quinn Bolton
Semiconductor Analyst, Needham

Talk about the opportunity to bring AECs to a different part of the market and the demand you may see building for CXL within the rack.

Bill Brennan
President and CEO, Credo

Sure. I would say that although the first business that we've ramped is Ethernet, we're really protocol agnostic, and so we do have a big investment in PCIe and CXL. We're customer driven, so we listen closely to the messages we're given. And there's an argument to be made that CXL as a protocol gives you advantages over other protocols in the market, specifically related to latency as well as power-down modes. If you add power-down modes to addressing some of the latency issues that have been talked about in the industry, it becomes a pretty compelling solution.

You can save up to 85% of the power of these links if you power down when the link is inactive. And so we presented kind of a vision piece at OCP, which was pretty compelling. We showed a back-end network connected with CXL, and we showed a front-end network for, say, a traditional general compute rack that would be able to eliminate the stranding of compute and memory resources. So we think that solutions with that standard can be compelling.

Quinn Bolton
Semiconductor Analyst, Needham

I'll see if there are any more questions; otherwise, I've got a couple more. Anyone? No. All right, I wanted to ask about the next-generation 224G SerDes. I think some of the IP or EDA companies, Synopsys and Cadence, have announced their 224G SerDes-

Bill Brennan
President and CEO, Credo

Mm-hmm

Quinn Bolton
Semiconductor Analyst, Needham

... as have folks like Alphawave. Where are you in your 224G development? How are you feeling about that standard? We're probably still talking some number of years out, but when do you think 224G electrical SerDes really starts to ramp?

Bill Brennan
President and CEO, Credo

Yes, I think one of the key takeaways from 2023 was that, clearly, the driver from an application perspective for an ecosystem at 224G is AI. If you look at the AI cluster, the big challenge there is a bandwidth bottleneck. If you can go faster, there's benefit to doing that. So this is different than any of the other speed increases that we've seen over time: there's a very, very clear demand driver. And so I think the ecosystem is working on a complete end-to-end solution for 224G, both electrical and optical.

We're in fab in 3 nm now, and what we see is that the game here is really power. I alluded to the conversation that Andy started, but it carries over to every aspect of the 1.6T port generation. And so for a lot of the solutions that have been announced, I'm not sure how power efficient-

Quinn Bolton
Semiconductor Analyst, Needham

Efficient

Bill Brennan
President and CEO, Credo

They are. And so we think the key is energy per bit, power per bit. You can't double or triple the energy per bit and expect that to be something you can take to production.

And so managing an equal or lower energy per bit is really the path that we're on.

Quinn Bolton
Semiconductor Analyst, Needham

Okay.

Bill Brennan
President and CEO, Credo

And so we think that we're well positioned from a timing standpoint.

Quinn Bolton
Semiconductor Analyst, Needham

Yeah, I mean, the bits are doubling every generation, right? So you've probably gotta reduce the picojoules per bit.

Bill Brennan
President and CEO, Credo

Or we're in a situation where the amount of power you're spending on the SerDes becomes unsustainable.

Quinn Bolton
Semiconductor Analyst, Needham

Yeah. Dan, I wanna close with two quick questions for you, not to put you on the spot. The analysts, myself included, have a pretty healthy revenue growth forecast for the company. I won't ask you to comment on that specifically, but what gives you confidence in the growth? Do you receive customer forecasts? Do you have backlog? What kind of visibility do you have into future growth?

Dan Fleming
CFO, Credo

Yeah, certainly. From our largest customers, we do receive 12-month forecasts. Those forecasts are not always right, we know, but they do give us increasing visibility into what to expect over the upcoming year. So we feel very confident in where we are today. As Bill mentioned, our IP pipeline is as strong as it's ever been, so we're very comfortable with where we are right now.

Quinn Bolton
Semiconductor Analyst, Needham

Got it. And gross margins are hovering around 60%. You have a target of 63%-65%. What's a reasonable timeline for investors to think about for when you might hit that 63%-65% level?

Dan Fleming
CFO, Credo

Yeah, generally speaking, just over two years ago is when we started communicating our long-term model, and by long-term model we really meant four to five years out. So this goes back to late 2021, early 2022, and we're about halfway there. That long-term model is really framed around $500 million in revenue. This fiscal year, which Bill mentioned ends in April, so just three and a half months from now, we expect 200 basis points of expansion versus last year, which is what we have communicated. That leaves another 300-500 basis points of expansion to attain the target, which is really going to be driven by scale and additional product mix changes over the upcoming years. So I'd call it FY 2026 when we should be in that neighborhood, in that range,

Quinn Bolton
Semiconductor Analyst, Needham

Okay.

Dan Fleming
CFO, Credo

... from a margin perspective.

Quinn Bolton
Semiconductor Analyst, Needham

Perfect. Well, it looks like we're at the end of our session time. So Bill, Dan, thank you very much for joining us at the Needham Growth Conference. We really appreciate your participation.

Bill Brennan
President and CEO, Credo

Yeah.

Dan Fleming
CFO, Credo

Thank you.

Bill Brennan
President and CEO, Credo

Thanks for the great questions.

Quinn Bolton
Semiconductor Analyst, Needham

Thank you.

Bill Brennan
President and CEO, Credo

Thanks to everybody for the interest. Yeah.
