Credo Technology Group Holding Ltd (CRDO)

Barclays 23rd Annual Global Technology Conference

Dec 10, 2025

Tom O'Malley
Analyst, Barclays

All right. Welcome back to the Barclays Global Tech Conference. I'm Tom O'Malley, Semi and Semi-Cap Equipment Analyst. We have Bill Brennan and Dan Fleming from Credo. Thank you for joining.

Bill Brennan
President and CEO, Credo

Thanks for having us.

Tom O'Malley
Analyst, Barclays

It's always an interesting conversation because I feel like every year we're up here talking about different things, which is really good, right? So new innovation, the market looking different. But maybe a step back to start: you're seeing $3 trillion-plus of announced spend. Are we in the early innings of this AI investment cycle? You've been very clear in the past about how you guys are going to enable that, but I'd love for you guys to start by sharing that. Where are the bottlenecks in these deployments that you guys kind of enable, and what sort of products are you guys talking about most recently?

Bill Brennan
President and CEO, Credo

You know the answer.

Dan Fleming
CFO, Credo

I know the answer. It's not for me.

Bill Brennan
President and CEO, Credo

Too early, where are we innings-wise? It's always early innings. [crosstalk] If you say middle innings, you're in trouble.

Dan Fleming
CFO, Credo

Oh, no.

Bill Brennan
President and CEO, Credo

That's a general statement. All kidding aside, yeah, it's tremendous to see the continued energy in the build-out of AI infrastructure. Personally, I think this is one of those pivotal points we'll be able to point back to over time. This is going to be unlike any that we've seen, so I think we're looking at a decade-plus megatrend that will come into clarity. Of course, there are going to be ups and downs, but I think for sure the world five years from now is going to look completely different. It'll be reshaped completely, to the tune of more infrastructure, more applications, more benefit, more productivity. For us, the big change over the last 12-18 months in our focus at a product level is really focusing on reliability.

And specifically related to the AI cluster, the nature of the cluster is 100,000 GPUs, a million GPUs, all connected together, all acting in unison as one big supercomputer. On the key link from the GPU to the first switch, unlike other networks within traditional data centers, Tier 1 or Tier 2, there is no redundancy. And so if you lose the link between one GPU and that switch, there's the probability that the entire cluster comes down, that you'll lose the training run you're on, that you'll have to go back to the last checkpoint. And so poor reliability ultimately equates to a loss of productivity and a loss of revenue or profitability.
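The arithmetic behind this point is worth making concrete. A back-of-envelope sketch, with all numbers hypothetical, of why even very reliable individual links become a problem at cluster scale:

```python
# Why per-link reliability dominates at cluster scale: with no redundancy
# on the GPU-to-first-switch link, a single flap anywhere interrupts the
# whole training run and forces a rollback to the last checkpoint.

def cluster_flap_free_hour_prob(num_links: int, p_flap_per_hour: float) -> float:
    """Probability that an hour passes with zero flaps across all links."""
    return (1.0 - p_flap_per_hour) ** num_links

# Hypothetical: 100,000 GPU-to-switch links, each flapping once per 10 years.
p = 1.0 / (10 * 365 * 24)  # ~1.14e-5 flaps per link-hour
print(cluster_flap_free_hour_prob(100_000, p))  # ~0.32: most hours see a flap
```

Even at one flap per link per decade, a cluster of that size sees an interruption roughly every hour, which is why the emphasis lands on per-link reliability rather than on averages.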

And so we learned that through the conversations we had with xAI going back 18, 20 months ago. They came to us very frustrated because they were building an 18-rack, not-very-dense row that was connected with laser-based optics. And they said, "Look, we're going to a liquid-cooled facility where we're going to compress that row from 18 racks to 6. Now, if you can build seven-meter cables" (and it's very well known that copper solutions are rock solid from a reliability standpoint) "we can build a ZeroFlap cluster." And the term ZeroFlap really resonated. At that point, our team really started focusing on reliability, especially for that first host-to-T0 link.

And all of the developments, all of the products that we've come out with recently, they're differentiated along the lines of reliability. So that's really the big movement for us as a team and the barriers that we're overcoming.

Tom O'Malley
Analyst, Barclays

So I want to go through all the verticals that you talked about most recently on your earnings call, outside of AECs eventually, but I do want to start with the bread and butter of the business, which is these AECs. So you started with one large customer that became two, and then the second customer became the largest, and now you've added a host of customers across the board. As you progress here, it seems like every time you add customer diversification, you get increasing questions on the sustainability of one customer or another. And people tear open boxes, look at certain deployments, and say, "Look, Credo's already seen its day in the sun. We're seeing more competition here." I'd love you to spend some time talking about your roadmaps with these customers, why you feel like you're well positioned with your largest several, and then why you guys would be able to maintain the position that you have today.

Bill Brennan
President and CEO, Credo

Sure. Yeah, I think we're unique in a sense that we pioneered the market, and the approach that we took initially, it didn't work because we weren't thinking about taking ownership of the complete system, and so at one point, seven or eight years ago, we realized that if we didn't take complete ownership of the system, maybe the product category doesn't exist. You have to go all the way up the stack, be accountable for every aspect of the product, from the design of the product to the qual of the product to the supply chain management for the product, and ultimately, when it's in production, be accountable for any issues that come up, and so that's been the real differentiator.

And my belief is that at a competitive level, what we're focused on is being first to deliver, first to qualify, first to ramp, and doing a flawless job, being incredibly flexible on upsides. And I think over the past year, we've shown that as we've seen many of our customers absolutely accelerate, having the ability as one company to respond to that. And we've proven that it's possible. I can't imagine if I was a chip company and I was hoping that a cable company could respond if I wasn't in control of it. There's moats along the way there.

Having all of the engineers under one roof, including the engineers that design the actual physical cable, having an internal qualification where we're taking ownership of the link, not just our cable, but our customer's switches, our customer's GPUs, and hardening, adding orders of magnitude better bit error rate prior to going into qual with our data center partners. And yeah, I feel great about our competitive position. And maybe it's just, I think, the world kind of underthinks the system-level difficulty of what we're doing. We even have competitors completely changing their business model to try to respond to what we already do well.

Tom O'Malley
Analyst, Barclays

Yeah, I was going to comment. We had a competitor in the space talk about golden cables this week, and is that them evangelizing the ecosystem to kind of work together? Like, "Oh, this is a chip solution. We need others to kind of hop on board with us to have more success." Is this kind of going after the same approach as you guys? How did you read that from a competitor?

Bill Brennan
President and CEO, Credo

So we're going from copper to gold? I mean, it's expensive.

Dan Fleming
CFO, Credo

I think it's a color. [crosstalk]

Bill Brennan
President and CEO, Credo

I've had enough color nonsense in the past.

Dan Fleming
CFO, Credo

No, I understand.

Bill Brennan
President and CEO, Credo

So I didn't really read too much about it, but I understand it is a shift to try to enable maybe a more predictable supply chain or ecosystem, something we've already achieved. So yeah, again, very comfortable with where we are competitively. And ultimately, for me, it's about focusing on our customers, being intensely focused on delivering differentiated solutions. This is not a commodity space that we're in. This is not like designing IEEE-standard 1.6T ports. This is going above and beyond: special features, special hardware SKUs with two, three, four, five, eight different connectors. It's a really dynamic space if you allow your customers to innovate.

Tom O'Malley
Analyst, Barclays

Got it. So if I rewind the clock: Microsoft in the early days was front-end applications; with Amazon, you moved to more back-end applications. You talked about xAI; you just spoke about an instance where you worked with them on a row-scale type of architecture. When you're adding these customers, can you talk about some of the solutions, or some of the problems that you're solving with your solutions? Is it always kind of the same AEC cable in the back end? Is it just always straight cables? I think people struggle with, "Oh, this is the same thing over and over again. That's why you can't scale." Can you maybe give people some perspective on what problems you're solving with these different customers and why it's not just the same thing over and over again?

Bill Brennan
President and CEO, Credo

When you say scale, do you mean revenue?

Dan Fleming
CFO, Credo

Revenue. I think you don't.

Bill Brennan
President and CEO, Credo

I think if you look at our revenue scale, we've kind of proven that we're able to scale, and we've guided to the fact that we should be continuing that scale, so I'm not sure how that's easy to say. But generally, there are different places in the data center where AECs apply, and I can tell you that in addition to having many different ways to solve these connectivity problems, whether it's in the back-end network, scale-up, scale-out, switch racks, or front-end connections, there's a lot of opportunity to pursue innovation that goes beyond what you would consider standard. So I think you can expect that you'll see a lot of innovation in the AEC space, especially from our team.

Tom O'Malley
Analyst, Barclays

When you think about future generations with customers and early designs, and I hate to go down this road, but it is a reason why the stock has reacted the way it has in recent days.

Bill Brennan
President and CEO, Credo

I haven't checked.

Tom O'Malley
Analyst, Barclays

Oh, yes. So essentially, you open boxes, and you say, "I can count the number of cables in there that are Credo's because of a certain color." Is it fair to always assume that the cables you see in certain boxes, if they're purple, are Credo's? And if you move to higher speeds, is the opening of the first box always going to be what all those boxes look like at that customer? I'm asking this on stage because I've heard this answer from you before, but I'd love for you to clarify it.

Bill Brennan
President and CEO, Credo

So I'll break a little news. We don't build only purple cables. So some customers really feel like it's an advantage to have the folks in the data center know exactly what the cable is, and color is a good differentiator. So if I had four generations of purple cables for that particular customer, there might be confusion as to the difference. And so early on, they asked us to put different color jackets on the cables. And of course, that's within what we think is reasonable to ask. So I know that every time one of our customers has a show where they kind of show next-generation technology, that there's a lot of assumptions that are made that go along with what's shown. And there's a lot of math that's done and a lot of conclusions that are drawn.

And I can just say that the first time that I went to a show, I looked at something that was pretty exciting. It was purple. It took a couple of years for that to actually come to fruition, and it actually came to fruition in a different configuration than was shown as future technology. But the bottom line is it's about your customer relationships and making sure that you're right there with them solving problems. I have very strong confidence with all four of our 10% customers that we're doing that work. And we get 12 months of visibility that's, in a lot of cases, binding. I've got one customer that's going out two years and another customer going out three years to make sure that I understand exactly what they need. So that's reassuring. And so I think from our perspective, no concerns.

Tom O'Malley
Analyst, Barclays

Another area just on the AEC front that I get a lot of questions on is just co-packaged optics, both from a scale-out ecosystem and then a scale-up ecosystem. I think that generally the industry knows that we're headed in that direction, but the timing of when it gets here is, I think, very debated. Maybe talk about, does your solution bridge that gap ultimately to getting to co-packaged optics? Do you think that with the Hyperloom acquisition, which you've talked about, like 20-30 meters, you will have some capability to do some scale-up optics eventually? What's your view on timing and then your ability to address CPO?

Bill Brennan
President and CEO, Credo

Yeah, it's an all-encompassing question.

Dan Fleming
CFO, Credo

Go with it as you will.

Bill Brennan
President and CEO, Credo

The first part of it was CPO. Exactly zero times have I heard, "Now that CPO's in production, how do you feel?" Nobody's ever asked me that question. For many, many years, I've been asked about CPO. And there are things that you can point to that may be promising if you hit a wall with the way the current ecosystem works today. So for me, I don't necessarily think that people will say, "I need CPO," because of an inability to do it the same way we've been doing it for 10 years. There's nothing that stands out. You can't point to cost, for sure; it's going to be more expensive. You can't say reliability, because it's going to be far less reliable given the fact that it's laser-based.

You're not going to be able to say it's the lowest power of anything, of any of the technologies being discussed. So I think that the pitfalls have kept it from going to production. I think the energy around it, it's fine. It's great. Innovation is great. But I think the ecosystem has strongly stated over the last five to 10 years, as it's been discussed, that we'll find a path with kind of the existing ecosystem that's being deployed today. And so as it relates to when the industry does hit a wall, maybe it's speed where the wall is hit. And that's in particular for copper distances of, say, more than two meters or something. The micro-LED effort that we're putting into ALCs, there's a direct application for near-package optics. That's one-third the power of CPO.

It doesn't come with the kind of exotic switch design that you see at conferences today. We think we've got a path that's much better when the industry finally gets there. Let me answer the question about scale-up; that was a different part of the question. Scale-up is a network that started within the appliance, networking the GPUs within a box. It was an inside-the-box network. What we're seeing is that NVIDIA and others are now doing rack scale-up architectures. There is a big, big performance benefit to going to row scale. When we talk about row scale-up, we're talking about a huge number of new connections. This is a new TAM that is going to be available in the market. You can make those connections with copper. You can make those connections with other mediums as well.

Reliability is really the key thing still for scale-up. This scale-up network is a different kind of network than exists elsewhere. It's going to be a more dense network, so routing density is a challenge, but if we look at the systems being deployed today, they're pretty dense, and the next generation is pretty dense. You can easily make those connections with copper. When we get to the point where routing densities are so intense that you can't achieve it physically with copper, that's where ALCs come in for us. And I will mention, reliability being one of the key points, that ALCs have reliability equal to copper at a fundamental core-technology level. Power efficiency is another thing: half the power of laser-based optics. And they're equal from a cost-profile perspective.

And so we think that is the right technology. It looks and feels the same; the connectors are all the same. Basically, it's an extension of the existing ecosystem. So we feel great about that. I know there are a lot of people talking about competing technologies, and that's why we have so much fun competing.

Tom O'Malley
Analyst, Barclays

Okay. So that's two verticals: we've got AECs, we've got ALCs. Now ZeroFlap Optics. At recent conferences, you were shown with a lead customer on stage, or at least a partner for today. And I think it's interesting that you speak about reliability so frequently. The way that you've ensured reliability in the past is to move from a chip-based solution to a cable solution, right? Handling the entirety of the solution. This seems to be a move from an optical DSP to an entire module. What economics or what benefits can you get from a performance perspective moving to a full module in optics that you could get in electrical as well? And why would you choose to do this full solution, and why does the customer like it?

Bill Brennan
President and CEO, Credo

Yeah. So ZeroFlap Optics. The ZeroFlap theme ultimately speaks to reliability. We had Oracle approach us about 20 months ago, and they were also struggling with link flaps, same as xAI. And we said, "Hey, use AECs." And they said, "Our links are too long, far longer than 7 meters, so we're going to have to deal with laser-based optics by definition." And so they said, "We'd love to work with you on a solution where we go up the stack together." And the idea is: could you design a system solution where you could provide visibility and telemetry on every single link in the cluster, to the point where you could determine the link health on all of the links? So you've got a million-GPU cluster.

Could you have a system where all of that telemetry data is being fed in real time into the network management software, so that you could set a threshold? When you saw a link start declining from a signal integrity standpoint and cross that threshold, could you actively take that link down? Could you take that GPU out of the cluster to avoid a link flap? So the whole goal was to design a system-level solution that recognizes potential link flaps before they flap and mitigates by taking the link down and taking the GPU out of the cluster. And so it started with us having to redesign a custom optical DSP with the ability to talk between the DSPs in band. While we're transferring high-speed traffic, could we talk back and forth between DSPs and enable this telemetry data?

The next thing was taking our Pilot software and tightly coupling it so that we could take this raw data and turn it into telemetry data. Then it was integrating through a switch SDK to be able to work within the customer's network. And so now we're able to give them real information, in real time, continuously: eye height, SNR, pre-FEC bit error rates, post-FEC histograms. We're able to recognize if there's ESD damage. This is a super common source of failure: while they're installing the racks, if you mishandle a transceiver, you can slightly damage it with ESD. We can recognize that, and they can replace the transceiver proactively as they're turning on the rack for the first time. We can recognize that there's dust on the fiber.

So you'll get multipath interference, because if you've got dust, the light will bounce the other direction, which causes havoc from a signal integrity standpoint. This goes far beyond any kind of existing system solution for a laser-based optics module. And so we recognized that we would have to go up the stack, that we'd have to own the entire solution and be accountable for the entire solution. It started to feel just like AECs in that sense. And so that's the path that we're on. We're not competing for kind of standard modules. This is being designed from the ground up. I don't expect that we're going to be the only supplier in this space. My expectation is that this delivers such value that we'll have others come and join with ZeroFlap Optics solutions.
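As a rough illustration of the drain-before-flap idea described here, the sketch below flags links whose telemetry has crossed a health threshold so their GPUs can be pulled from the cluster preemptively. The field names, thresholds, and API are hypothetical, not Credo's actual software:

```python
# Minimal sketch of threshold-based link draining: monitor per-link
# telemetry and flag degrading links before they actually flap.

from dataclasses import dataclass

@dataclass
class LinkTelemetry:
    link_id: str
    snr_db: float          # signal-to-noise ratio reported by the DSP
    pre_fec_ber: float     # bit error rate before forward error correction

SNR_FLOOR_DB = 14.0        # illustrative thresholds, not real product values
BER_CEILING = 1e-5

def links_to_drain(samples: list[LinkTelemetry]) -> list[str]:
    """Return links whose GPUs should be pulled from the cluster
    before the link flaps and takes the training run down with it."""
    return [s.link_id for s in samples
            if s.snr_db < SNR_FLOOR_DB or s.pre_fec_ber > BER_CEILING]

samples = [
    LinkTelemetry("gpu-0017", snr_db=18.2, pre_fec_ber=3e-8),   # healthy
    LinkTelemetry("gpu-0042", snr_db=13.1, pre_fec_ber=2e-6),   # degrading
]
print(links_to_drain(samples))  # ['gpu-0042']
```

In a real deployment this decision loop would sit in the network management software, fed continuously by the in-band DSP telemetry the discussion describes.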

Tom O'Malley
Analyst, Barclays

Another vertical that you're saying impacts financials in the near future is PCIe. I was curious: I know you're intersecting the market at next-gen PCIe, but where do you see your first value prop to the industry? Is it PCIe onboard retimers? Is it PCIe cables? Obviously, the heritage of the company suggests that you may want to go in the direction of the cables first, but today the market for PCIe onboard retimers is bigger. How do you see your go-to-market approach?

Bill Brennan
President and CEO, Credo

The go-to-market's parallel. It's very straightforward. As we're bringing our retimer to market, we're developing the AEC in parallel. I can say that I'm really, really pleased with the results that we're seeing with our silicon, and that'll translate into advantages at a retimer level as well as an AEC level. I can't emphasize enough that owning the SerDes gives us an advantage. On the results that we're seeing from a reach standpoint, we are definitely the longest-reach solution in the market for Gen 6. On latency, we're the lowest-latency solution. Normally, these two things fight each other: you either give up reach for low latency or vice versa. We've achieved both, and that delivers real value to the customers that are using us, in the sense that it's easier to design systems and they can design higher-performance systems. So we're feeling great about our competitive position.

Tom O'Malley
Analyst, Barclays

And then moving to the last kind of vertical that we haven't touched on yet is just the OmniConnect portfolio. The last time we spoke to one another, you said, "Tom, go watch the video. We'll explain it all to you." I went and watched the video. I'm feeling a little bit better, but I'm not all the way there yet. I've heard a variety of different memory enhancement strategies between CXL, whether it's a pluggable, a chip on board, a card. Maybe just for the audience more broadly, the problem you're trying to solve is bandwidth, right? From either a CPU or accelerator to memory that's sitting somewhat adjacent. The way in which you do that has varied across multiple different companies and solutions. How are you guys going about doing this?

Bill Brennan
President and CEO, Credo

Yeah. So let me first describe that. OmniConnect is meant to be a family of gearboxes that we develop over time. The first gearbox in the OmniConnect family is a device we're calling Weaver. This is a device to overcome the memory wall, specifically related to inference. So compare the Rubin CPX that was announced; I think it'll go to production next year with 128 gigabytes of DDR memory. If you look at a die-level analysis of that, about 75% of the beachfront is occupied by the DDR5s, or SerDes. And if you look at how the 128 gigabytes of memory is laid out around it, you can see that there's a physical limitation, because you can only be about 25 millimeters, or an inch, away from the GPU.

And that kind of defines the memory wall as it relates to inference, because in inference, if you can load the entire model into DRAM, you're going to get much higher performance than if you're unable to do that and you're having to page in and page out of DRAM. And if we look at emerging applications like AI-generated video, these models are enormous, right? Far greater than 128 gigabytes. So how do you design a solution to overcome that beachfront limitation as well as that space limitation, so that you can deploy terabytes of capacity? For us, it started with the concept of getting the memory further away from the GPU. Instead of 25 millimeters, how do you get 250 millimeters, or 10 inches, away? Well, you have to address the beachfront challenge first.

With Tesla, we designed a SerDes for their Dojo that was very tiny and very low power, the first XSR SerDes before the XSR standard was even ratified. We were able to design a SerDes that maintained that size and power but, instead of the 50 millimeters we achieved for Tesla, extended the reach to 250 millimeters, so 10 inches. From a beachfront density perspective, we completely resolved that issue: it's more than 10 times more dense. The first partner we're working with is in the AI-generated video market. Today, a 15-second clip can take 15 minutes to generate. The goal is real time. You can start looking at achieving that if you can increase the capacity and also the bandwidth.

And so with our first partner, they introduced an inference platform that has two terabytes of memory, compared to 128 gigabytes, and we could go up to 6.4 terabytes. The feedback they're getting in the market is that they've lifted the lid on this. And the bottom line from a content perspective: the gearboxes that we'll sell to interface with all that DDR represent content of $1,000 or higher per GPU for us. So it's really a breakthrough solution. And there will be other gearboxes that we build that will be equally game-changing on other I/O that you typically see in GPUs.
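A quick sanity check of the capacity argument, using the figures mentioned in the discussion (128 GB baseline, 2 TB platform, 6.4 TB ceiling); the model size used here is a hypothetical stand-in for a large video-generation model:

```python
# Does the whole model fit in attached DRAM? If not, inference has to page
# model weights in and out of memory, which is the performance cliff the
# "memory wall" framing refers to.

GB = 10**9

def fits_in_dram(model_bytes: int, dram_bytes: int) -> bool:
    """True if the entire model can be resident in DRAM at once."""
    return model_bytes <= dram_bytes

model = 800 * GB  # hypothetical AI-video model, far larger than 128 GB
for dram in (128 * GB, 2_000 * GB, 6_400 * GB):
    print(f"{dram // GB:>5} GB DRAM -> fits: {fits_in_dram(model, dram)}")
```

The point of the exercise: once capacity crosses the model size, inference stops paging entirely, which is why moving from 128 GB to multiple terabytes is framed as a step change rather than an incremental one.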

Tom O'Malley
Analyst, Barclays

I got it. So the handoff between needing to use higher bandwidth memory that sits closer to the accelerator gets essentially taken in by you guys.

Bill Brennan
President and CEO, Credo

Yeah. For training, if you wanted to do a training chip, you eliminate the need for HBM. You can buy DDR that's one-fifth the cost, and you eliminate the need for CoWoS. So it's super game-changing related to training. There is a bit of a trade-off: a slight power increase compared to the HBM interface.

Tom O'Malley
Analyst, Barclays

Got it. Got it. Something on the gross margin side. Dan, I know this question gets asked every earnings call, but you continue to come in ahead of the midpoint of your long-term gross margin model. Bill, I think you've done a good job explaining what customer dynamics look like when your gross margins go up, what they have to pay you to make that happen, and how that continues through the supply chain. I know we're early days with a lot of customers, but you're also introducing a lot of new products that are very innovative as well. When you look at the portfolio of verticals that you've talked about, how do you see gross margins trending in the long term with these new additions?

Dan Fleming
CFO, Credo

Yeah. Just a couple of quick comments on that. All of these different pillars of growth that Bill has described last week and today, we expect all of them to be, from a gross margin perspective, right in the sweet spot of our long-term gross margin model, which is 63%-65%. We've not changed that since we've gone public, and that's what we expect long term. If I look historically at what we've done over the last year, for instance, our story has been one where increasing scale has really driven margin expansion: up over 400 basis points year-over-year from Q2 of last year to our recent earnings announcement. And right now, we're clearly in a pocket of time where we're a bit above that long-term margin model.

We guided at the high end, but long-term, we certainly expect all of these new exciting opportunities that we're pursuing to be right within that 63%-65% range.

Tom O'Malley
Analyst, Barclays

With that, we're out of time. Thank you so much, Bill and Dan, for joining us. I really appreciate it. Exciting times.

Dan Fleming
CFO, Credo

Thank you.

Bill Brennan
President and CEO, Credo

Appreciate all the good questions.

Tom O'Malley
Analyst, Barclays

Thanks, guys.

Bill Brennan
President and CEO, Credo

Thanks.

Tom O'Malley
Analyst, Barclays

Thank you.

Dan Fleming
CFO, Credo

Yeah. Thank you.
