All right, good afternoon, I guess, everybody. Thank you for joining us at the Roth Conference. My name is Suji Desilva. I'm the semiconductor and intelligent systems analyst here, and it's our pleasure to have Credo Semiconductor here. Bill Brennan, the CEO, is here, and Dan Fleming, the CFO. With that, Bill and Dan, thank you for coming. Love to get right into the story here and talk about the trends here. We just had a Bitcoin session in here, and there was a lot of talk about data centers and the GPU demand and all that. Maybe you kind of frame this for us 'cause you guys play in that supply chain.
The drivers of the AI infrastructure demand, and why there may be a multiplier effect going on for components like yours, which hasn't, I think, happened historically at this pace.
Sure. I think if we kind of start with historical perspective, how were things done prior to AI? Really, data centers were assembled with racks of general compute servers all hooked to a network, you know, a single connection from a server, NIC to ToR, ToR to leaf to spine. That's how things worked. So there was one, you know, dedicated network for everything that was related to those general compute servers, and that was sufficient for that type of network. With AI, we're talking about GPUs that are all connected together with a separate back-end network. It's absolutely required to get the bandwidth that's needed to have all of these GPUs, thousands of GPUs in a cluster, acting in concert as a supercomputer or almost like a neural network.
So from a connectivity perspective, if you look at a typical AI appliance now, there is that single connection to the front-end network, that traditional network, but there are 5, 6, 7 connections to the back-end network.
Right.
And if you look at the speed of those connections, single-lane speeds, the front-end network can be served with relatively low speed, still very high speed, but 25 Gbps is common today per lane. And you look at these AI deployments, and they're going straight to 100 Gbps per lane, which is very much leading-edge technology. And for Credo, our business is connectivity, and the wind in our sails is really when the world goes faster. So moving from 25 to 50 to 100 Gbps per lane, and actually the multiplier effect of assembling these back-end networks, is really quite good for our business.
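The multiplier Bill describes can be sketched as back-of-the-envelope arithmetic. All of the counts below are illustrative assumptions, not company-disclosed figures: one front-end link per traditional server versus several back-end links per AI appliance, with back-end lanes at 100 Gbps instead of 25 Gbps.

```python
# Back-of-the-envelope sketch of the AI connectivity multiplier.
# Link counts, lanes per port, and lane speeds are assumptions for
# illustration only, not Credo-disclosed figures.

LANES_PER_PORT = 8  # assumption: e.g. an 800G port built from 8 lanes


def appliance_bandwidth_gbps(num_links: int, lane_speed_gbps: int,
                             lanes_per_port: int = LANES_PER_PORT) -> int:
    """Aggregate network-attachment bandwidth of one box, in Gbps."""
    return num_links * lanes_per_port * lane_speed_gbps


# Traditional general-compute server: a single front-end connection.
front_end = appliance_bandwidth_gbps(num_links=1, lane_speed_gbps=25)

# AI appliance: say 6 back-end connections at 100 Gbps per lane.
back_end = appliance_bandwidth_gbps(num_links=6, lane_speed_gbps=100)

print(front_end)             # 200 Gbps
print(back_end)              # 4800 Gbps
print(back_end // front_end) # 24x the connectivity bandwidth per box
```

With these assumed numbers, one AI appliance pulls roughly 24 times the interconnect bandwidth of a general-compute server, which is the "wind in our sails" effect for a connectivity supplier.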
Right. I mean, in the past, these upgrade cycles from 100 to 200 to 400 have taken time, and the ecosystem had to get ready. It sounds like the 800 gig upgrade is gonna happen very fast. Was the supply chain always able to do it this fast, or is it being accelerated? It's pretty impressive.
Yeah, I think that, you know, from a supply chain standpoint, I think the suppliers can come together quickly-
Yeah
... and transition technology. But if you think about the data center market, we think in terms of the top five U.S. hyperscalers really as our main focus. We look at each one of them almost as a different market. They all have their own architectures. It's not like they're building and shipping a product. You know, basically, they're deploying equipment, and they're all service businesses. So that's where the equipment stops. It's shipped to them, and it never ships outside. So if you think about it architecturally, there's so many different ways to skin the cat. And so, you know, if we look back historically on the 400 gig generation, you know, I'm now with Credo for 10 years.
Yeah.
Thank you. When I joined, the big buzz was, "Man, 400 gig is coming.
It's gonna be huge.
It's gonna be huge. Everybody's gonna convert." In reality, Amazon was the one that drove it, and it wasn't really a bandwidth thing. It was a form factor thing 'cause they were trying to build a switch rack, and they needed to get 4 times the bandwidth compared to a 100G port. The rest of the industry saw that the 400G, you know, port was more expensive than buying four 100G ports.
Right.
And then Google chose 200, and then, most of the industry followed them with the 200 gig. So there was a real fraying of port speeds and interconnect, you know, in that kind of timeframe. And now that we're looking at 800 gig, AI applications are driving the need, right? This new application is driving the need, and it's, you know, the need is immediate. You know, the faster that you can connect the GPUs, the more high performance your cluster is going to be. And as we talk about the future, that's why we can all see in the industry that there's this, you know, extremely strong demand for the next-generation 200 gig lanes.
And so it's great to kind of be sitting here after 10 years and knowing that there's a clear application that's gonna drive all of these hyperscalers...
Yeah
... to the next generation, the next-generation speeds.
It's amazing. It's a single app this time, which is amazing. I think we're all looking from the outside and watching AI, maybe some people have deeper understanding than others, but it's sort of a nebulous set of demands coming across, a hyperscaler trying to do something internally, a hyperscaler doing something that's customer-facing, a hyperscaler dealing with a third-party cloud service, an AI company that's new and forming, and then enterprises, right? So can you just give us kind of a walkthrough of that landscape and where the support for the demand for AI is coming from? Is it from all of them, and they're being kind of lined up, or what's going on?
We're at the very early stages, and it's hard to say. This is one of those things: "Build it, and they will come."
Yeah.
You know, and I think we'll be shocked, you know, two years or three years from now, the applications that are emerging that are utilizing this technology. If we look at any inflection point over time, the inflection point happened when all of the technology came together at the same time. If you know, you look at the Newton-
Yeah.
Right? The difference between the Newton and the iPhone-
Small
... is the technology didn't exist to kind of support the ecosystem. It was way ahead of the curve. So now that AI is real, it's here, the applications are gonna be unleashed.
I think the cost per query is a big factor here in how quickly it rolls out, right? So you may see interest in it, but a concern about how expensive these racks are and waiting for those maybe to come down. Is there gonna be an effort in place that does help bring that cost per query, the cost of the hardware, down? Are you part of that equation? Can it be helped at all, or-
I, I think you're baiting me.
... That's what you do for a living, yeah.
Let's just say NVIDIA, NVIDIA has done a tremendous job. You know, the acquisition of Mellanox might be the best acquisition in the history of business, period.
They didn't even know. Yeah.
Right? And so what it did for them, it took a capability that they had with GPUs, and it added all of the connectivity around it. And so, you know, they've created this system that's bundled, and, you know, when you talk about cost per query, I think if you check their gross margins, you might get an idea of why it's high.
They did okay.
So you're talking about how do we, you know, I guess, democratize this technology-
That's-
at a price
The implicit question, I guess. Right.
It's more affordable.
Yeah.
You know, and I think that's key to their cluster is the InfiniBand and NVLink protocols. We think in terms of Ethernet and PCIe. The bottom line is there's ways to skin the cat. There's ways to solve the problem, and I think you'll see most of the top five hyperscalers move towards that open protocol, where you have many suppliers, and then you'll get an effect of driving the cost per query down.
Okay. Two questions along that. One is, what would cause us, 2-3 years from now, to look back and see the proprietary NVIDIA stack dominating while the open ones didn't get in? And second of all, are you participating equally in both of those, or give us a sense of the relative participation for you.
Yeah, well, I think that, in terms of Ethernet versus-
Yes.
InfiniBand.
The open stack versus the NVIDIA stack.
Yeah, so, you know, I surely look at NVIDIA today as an opportunity for us.
Okay.
We, you know, have 500, more than 500 people that come to work every day, you know, working on solving connectivity problems. And so-
Yeah
... you know, knowing now that high-speed connectivity is center stage, under the spotlight, this is a key enabler for AI. They're a leader. I look at it as, "Hey, how can I help them solve problems?" For sure. For example, one of our largest product lines is our active electrical cables.
Sure.
One would think, "Hey, Bill, could you build an InfiniBand cable?" So I thought about that. I asked my engineers to do it.
Yeah.
It took them about 30 days to change the firmware. It's not a hardware change. So we're talking about fast pipes that are communicating between different systems, and there's a handshake, there's a protocol, figured out in 30 days. But to become part of that ecosystem, we've got to get the green light to do that. But we're protocol agnostic. I mean, we can talk about Ethernet, we can talk about InfiniBand, but also, I think PCIe is an important conversation.
Sure.
Because you look at the trade-off between Ethernet and InfiniBand, and it's, you know, open versus closed. You can make an argument that InfiniBand is better because it's lower latency. You can make an argument that Ethernet, that whole, you know, that whole group of people that moves that forward, they're working on Ultra Ethernet. So there's a latency argument, and is it open? But if you look at PCIe, it's open, there's no latency.
Mm-hmm.
And yet there are also power-down modes. So if you think about potentially lowering the power of each one of these links when it's not active, you can get up to 85% power savings by implementing that protocol. So we actually showed a vision piece at the Open Compute Project Summit in October, where we envisioned PCIe being the back-end protocol.
Mm.
So again, we're agnostic, and we're ultimately trying to solve connectivity problems.
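The power-savings claim above comes down to duty cycle: a link that can drop into a low-power state when idle burns far less on average. A toy model makes the arithmetic concrete; the active power, idle power, and utilization figures below are illustrative assumptions, not numbers from the conversation (only the "up to 85%" claim is Bill's).

```python
# Toy duty-cycle model of average link power with an idle power-down state.
# The wattages and utilization are assumed placeholders for illustration.

def avg_link_power(active_w: float, idle_w: float, utilization: float) -> float:
    """Time-averaged power of a link active `utilization` fraction of the time."""
    return utilization * active_w + (1.0 - utilization) * idle_w


# A link that never powers down burns full power regardless of traffic.
always_on = avg_link_power(active_w=10.0, idle_w=10.0, utilization=0.15)

# The same link dropping to a low-power state when idle.
with_pdown = avg_link_power(active_w=10.0, idle_w=0.5, utilization=0.15)

savings = 1.0 - with_pdown / always_on
print(f"{savings:.0%}")  # roughly 81% with these assumed numbers
```

At low utilization, almost all the savings comes from the idle term, which is why a protocol-level power-down mode can approach the kind of figure Bill cites.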
Absolutely. So I guess, what would it take for you to be included in an NVIDIA-dominated stack that's InfiniBand? Is that-- Would the customer have to press for that, or...?
I think a combination of things would need to happen, but I would have to bring something compelling to the game.
I see. Right, and the customer would say, "This is a better combination than-
Right.
-what NVIDIA offers." Got it. You're competing, essentially.
Right.
Got it. Okay, that's fair. Can you talk about the customers you have and the programs you have? You have one large customer in a program, and then maybe you can rewind to, I guess, a year ago, where there was probably the biggest pivot I've ever seen in data center spending in my career.
Sure
... where the brakes were put on very quickly, and everybody felt AI had to happen yesterday. Just what was the impetus for that? You know, maybe it wasn't as dramatic as I thought, as I perceived from the outside.
Well, it was sure dramatic for us, and the pivot that you're talking about was away from general compute spending-
Mm-hmm
... and, you know, the decision was made to deploy AI clusters. And so we weren't part of, you know, that, so we had a big air pocket with our revenue. You know, and I think, you know, you think back a year ago plus to December of 2022, you were one of the guys probably in the first week that played with ChatGPT and thought, "Man, this is-
Yeah.
This is amazing.
Really scary.
Right. We had no idea at that point that Microsoft was collaborating with NVIDIA and OpenAI. It was a long time in the works, and I think everybody else might have been caught flat-footed. It was like, "Right now, you know, put the accelerator down. We got to bring it out."
Oh, so they've come in stealth and-
I think so, very much.
Oh, wow.
Yeah, so... And I think they see that as a huge opportunity, which it's played out, for sure.
Can you talk about the rest of the hyperscaler customers kind of waterfalling into AEC from here? What happens between here and them ramping? Is it a certain platform going in place? Is it their...
Yeah, sure. You know, there's always a first program that you engage on. There are three basic opportunities in every single one of these data centers from an AEC standpoint, active electrical cable. There's the front-end network connection, there are the back-end network connections, and there are also applications within the switching layers. For folks like Amazon that do disaggregated chassis, basically buying a lot of white box switches and stacking them vertically in a rack, there's also a large-volume in-rack cabling opportunity there. And so our focus as a partner to the data centers is really, let's cover the roadmap. Let's get out front, let's solve the problems early as they think about their next-generation architectures and deployments. But it always starts with a single program.
For our first two customers, I think we're well-positioned for both front-end and back-end networks, as they, you know, as they look towards next-generation programs. We've got a third hyperscaler customer that's qualified our AEC. It's a first program. It might not be, you know, the largest volume program, but there's a second program that we're already working on. So it's, you know, how do you, you know, basically show the benefits of moving from passive copper cables to active electrical cables and then, you know, kind of expand with them as they architect future generations? I will comment, we are engaged with the fourth and fifth hyperscalers here in the U.S., and we're a little bit further, you know, back in the process.
We're in the kind of the early development stages where they've defined specifications, we've delivered samples, and, you know, we're probably a year behind on that. But, you know, we kind of stack these up. But ultimately, the goal is to cover the roadmap. Similar to optical, I would say.
Sure. Now, I'm gonna talk about AEC, you know, maybe a little more than I should this session, but there's a competitive landscape forming out there, and there's a different approach from the competitors. There's some new entrants in the capital markets that are not doing the entire cable, like Mellanox and you did. There's folks doing just the chips. Can you talk about the benefits, time to market, integration-
Sure.
cost, whatever, why you chose this path to go on?
Sure. I mean, I can talk a lot about why we chose this path. And, you know, we could speculate as to why you might wanna just sell a chip only, or you might wanna-
Fair enough.
- You know, split the baby and sell just a module.
Mm-hmm.
But I can tell you why we made the decision, and it's because this is not just integrating a chip into a connector. It is much, much more than that. Our first customer is Dell, and I thought, "Hey, I'll find a cable partner, and I'll sell my chip. It should be easy." And so it went. We all agreed to three-way meetings every week, and for the first five weeks, the three of us came, and the cable company was silent. They weren't bringing any value. They're an assembler of-
The way
... you know, copper and connectors and paddle cards. The next five weeks, they didn't even come, and I realized this will never come to market if we don't take full ownership of it.
Mm.
As we did that and as we went deeper, we realized that it's not just designing the paddle card. It's not just designing the firmware that connects over the copper and to the switch. It's not just designing the paddle card for high signal integrity. It's much, much more than that.
Mm.
And so how do you test these? And when you test them, how do you optimize for every single pair of wires? So we've gone deep. We've decided to own these products start to finish. We're a single throat to choke. From a customer's perspective, the iterative loop is so tight with our team. When I envisioned getting into this market, I was thinking about standard products only, but for the bulk of the products we're shipping, we've implemented custom firmware, custom hardware features, and it's the first time, I think, any of the engineers have ever thought about innovating as it relates to any kind of cabled connection.
And so especially when you think about that, that's a widening of the moat if you're gonna compare us to a company that's selling a chip to a cable company or designing a paddle card for a standard product. We've gone much, much deeper. You know, in fact, we've got more than 100 people working on this system, not developing the chips, but everything AEC.
Totally.
I need more people. That's how many SKUs we're executing on now.
That's nice. That helps us get a picture of what's going on behind the scenes. It's a lot of work-
Oh, tremendous
... obviously, to satisfy these customers.
So look, the work has to be done. No matter who's bringing a product to market, the work has to be done, and I can tell you, we think it's much more efficient to do all of the work ourselves-
Sure
... versus relying on another company that may or may not feel the urgency that I'm feeling about delivering to a customer.
Not operating at your cadence.
Yeah. So there's no shortcuts here.
Got it. Let me ask maybe two more questions before we go to audience Q&A. The optical DSP market: I presume you compete there with Inphi, who Marvell acquired, and they and others have had pretty dominant positions. How are you breaking into that market? You seem like you have a big hyperscale opportunity there. Is that a major program? Just how should we think of you coming into this market, perhaps?
Yeah, it's a good topic because we've got OFC next week.
Correct, yeah.
We'll have a big presence there.
I'll be there.
So in this market, we're not the pioneer. I think they want you to stop saying Inphi, by the way.
I, I, I-
Say Marvell.
That-
That's what they want.
That's how I think about it.
They told me to tell you that. And anyway-
Marvell, Marvell makes hard drives.
We are the disruptor. We are the disruptive force, and we've been the disruptive force, and the seal has been cracked. We're in the early stages of ramping our first U.S. hyperscaler. We're in the throes of competition on the second. And when we talk about 800 gig, if we go back to OFC last year, Andy Bechtolsheim, kind of a visionary that everybody recognizes in our industry, basically introduced this concept of LPO, which means pull the optical DSP out of the optical module. Basically, do everything with linear direct drive. Kind of a big shock to folks that build DSPs, like us and like Marvell. And, you know, it was pretty shocking, and there was a big buzz, and for a year, people have been working on it.
Our response to that was to think about what catalyst Andy Bechtolsheim was talking about. The catalyst, the call to action, was really power. It was really about energy efficiency, and specifically for the 1.6T generation, that we were on a path that was unsustainable, and so we have to pull power out. Our response, after we picked ourselves up off the floor, was, "Let's listen to the message." And if you think about any link from switch to switch, there are actually three DSPs going on, typically. When you launch from the switch and you hit the transceiver, you do a DSP. It's called retiming, also called clock and data recovery, but that's that function.
Mm-hmm.
Launch it over the optical fiber, you do a DSP on the other end. You launch it to the switch, and the switch has got a SerDes that's doing the DSP as well. So three DSPs in-
Right
A typical link. Andy was talking about making it one long link, so a single DSP from switch to switch. So we thought about it, and there are so many pitfalls with that approach: signal integrity, and you're now off of industry-standard compliance, so there's no interop. You've got to hand-hold everything if you want to make that solution work with no DSP. So we thought, "Let's look at that link and then eliminate one of those DSPs instead of two," and that's what we call Linear Receive Optics. We do a DSP on the transmit side, and on the receive side, you know, it's passed through to the switch. What do you get with that? You address all of the negatives, the pitfalls of the LPO.
So interoperable, it doesn't look like a different module. You know, you've got better signal integrity, for sure, half the power of the typical DSP, and potential for lower cost. And so we, you know, decided right after OFC to develop a chip that was optimized for this LRO.
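Bill's accounting of DSP stages per link can be written out as a small sketch. The per-stage wattage below is an assumed placeholder, not a figure from the conversation; the stage counts simply follow his description of full-DSP, LRO, and LPO module architectures.

```python
# Toy accounting of DSP/retimer stages inside an optical module for
# different architectures, following the description in the conversation.
# The watts-per-stage figure is an illustrative assumption only.

DSP_STAGE_W = 4.0  # assumption: watts per DSP retiming stage in the module

# DSP stages performed inside the module for each architecture:
STAGES_PER_MODULE = {
    "full_dsp": 2,  # retime on both the electrical-in and optical-in paths
    "lro": 1,       # Linear Receive Optics: DSP on the transmit side only
    "lpo": 0,       # linear direct drive: no DSP in the module at all
}


def module_dsp_power(mode: str) -> float:
    """DSP power burned inside the optical module, in watts."""
    return STAGES_PER_MODULE[mode] * DSP_STAGE_W


print(module_dsp_power("full_dsp"))  # 8.0 W
print(module_dsp_power("lro"))       # 4.0 W, half the module DSP power
print(module_dsp_power("lpo"))       # 0.0 W, but no retiming or interop
```

The sketch shows the trade Bill describes: LRO halves the module's DSP power relative to a full-DSP module while keeping a retimed, interoperable transmit path, whereas LPO removes the DSP power entirely at the cost of the interop and signal-integrity pitfalls he lists.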
Mm.
We delivered the chip in November, and I've got three partners next week-
Okay
... that are going to be demonstrating-
Okay
... you know, with this LRO concept, or half of the DSP that's typically done in a module. And basically, after one year, Andy is now talking very positively about LRO.
Mm
... especially as we look towards 1.6T. You know, the goal is to deliver a solution that can fit within the power envelope of the given connector at those lane speeds. We're going to deliver both full DSP and LRO at the same time. You know, we'll be taping out later this year in 3 nm.
Are the new wins LRO, or you haven't disclosed that yet, or?
The wins that we've had are not LRO.
Okay. It'll be going forward.
It will be going forward. I mean, these modules will just be sampling to hyperscalers in the upcoming quarter.
Any, questions from the audience? I can tell the front rows are empty, so people are intimidated, but there's some brave people back there.
Dan needs to talk.
Oh, of course, Dan. Ask the CFO-
Can we get a question?
I have a question for you, Bill. Any questions from the audience? Yeah, I have another question for Bill.
Okay.
I gotta pivot now. Dan, maybe you can talk about visibility for the company. You have programs, obviously, order, you know, orders in hand can fluctuate. How do you try to kind of put the guidance out there where you can feel comfortable, as best you can?
Yeah, in the short term or near term, of course, we have what I would call very good visibility, and that's how we come up with our quarterly guidance. The other thing that we stated in our last earnings call was, really, we haven't given specific guidance about our fiscal '25 yet. We're an April fiscal-year-end company, so our fiscal '25 begins this coming May. But we did set the expectation that there's a second-half inflection point on our product ramps, really driven by AECs and by the two large customers that have been large customers of ours in the past, based on the programs that we've had, that have been in flight for quite some time. So good visibility.
On the IP side, our pipeline is as strong as it's ever been, due to just general market conditions. AI, interest in AI, ASICs are becoming a much more talked about type of chip today, specifically for AI programs. SerDes is a requirement. High-speed SerDes is a requirement, of course, for that as well.
Any questions from the audience? Yeah.
Um-
Hi, Dre.
I think traditional data center spend has been put on hold for a bit. Can you guys just talk about your outlook for when traditional data center spend will come back?
Yeah, it's a great question. I think we're seeing first signs, at the customers we're engaged with, that there are the early stages of rebalancing happening. I can't be real specific about that, but for the past year, I couldn't say that. And this probably goes out in the 6-12-month timeframe, but near term, we see everything is all about AI.
Yeah, that's a very important question. Thanks, Dre. Any other questions? I guess I'll leave you with a whopper of a question: What kind of odds would you handicap that people can compete with NVIDIA very well in the next year or two, and is that relevant to your revenue opportunity, or are you neutral to it?
How would I handicap it?
Yeah.
I would give strong-
Okay.
I would give strong odds that, you know, the world will be normal. There will be competition.
Okay. Does that create a tailwind for your demand or-
Sure, absolutely.
I wanted to just clarify that. With that, I think we're out of time. Any last quick questions? All right, with that, I'll thank Bill and Dan.
Thank you.
Thanks for your time.
Thanks so much.
Great conversation.
Yeah, thanks. Thanks for the interest.