Good morning, everyone. Thank you so much for joining. My name is Toshiya Hari. I cover the semiconductor space at Goldman Sachs. Very excited, very honored to have the team from Credo Technology with us today. We have Bill Brennan, President and CEO, and we have Dan Fleming, CFO. I do have a list of questions, but we'll try to keep it interactive.
Sure.
So we'll open it up to Q&A in a little bit. Bill, it's been a pretty crazy year if we go back to January. I was hoping we could kind of take a step back and you know, maybe share with us your view of the year at the beginning of the year, what happened in Feb, and then how things have evolved since then.
Sure. Yeah, it has been, you know, quite a, quite a fun year, kind of a rollercoaster.
Yeah.
So if we, if we, you know, go back to February, I think we found ourselves in a position where we had customer concentration with a customer that really has been leading in this pivot towards spending on AI. And so naturally, we were, you know, really involved with their general compute side of the business, and, you know, so there was a reset that we did. I didn't get as much color as I'd wanted in February and March, but I think looking back, 20/20 hindsight, it's pretty clear that we're gonna be talking about 2023 as the year that AI deployment really started.
Mm-hmm.
You know, I believe that it's gonna be more revolutionary than evolutionary. I think this is gonna impact the spend for a long time. I do think there's gonna be a balance. But I think for our business, you know, I think that, you know, if we look at our business outside of our largest customer, what we're seeing is more than 100% growth year-over-year. And so with our largest customer, we might be taking, you know, an adjustment of more than 50% if I compare year-over-year. But the hidden, really positive news is that we're seeing growth in many different areas across the board, and I do think we're benefiting generally, short term as well as long term, from the move to AI cluster deployments.
Got it. That, that's a great place to start. So obviously, AI is extremely topical. Every single session, AI comes up as, as sort of a driver, enabler, what have you. You participate in both sort of traditional spend, if you will, as well as AI compute. When you think about the net impact, and I guess you sort of answered my question, but is the net impact positive? Is it neutral? I think investors are trying to understand, given the dynamics around the largest customer, it's less clear from the outside, but how should we think about the net impact as both AI compute grows, but also general compute catches up at some point?
Yeah, I think that, you know, we look at it a bit differently, and of course, you know, long term, you'll see us participating with general compute as well as AI. You know, my expectation is that we'll have a good footprint in both applications. And so longer term, I think the balance between the workloads or the spend, you know, might be something that's one layer, you know, deeper than we go.
Mm-hmm
Since we're gonna participate in both.
Mm-hmm.
You know, so I think that the world we see is really, you know, we see bandwidth and the need for more bandwidth and lane speeds in particular. And so I think the best thing that comes from AI deployments is there's an acceleration in the bandwidth that needs to be deployed with these systems. So it means an acceleration from 50 gig lanes to 100 gig lanes to 200 gig lanes over the next few years. And with that application, you know, moving towards faster speeds and more connections, in AI deployments there's a significantly higher density of connectivity.
Right.
And if you really look at the point-to-point connections that are being made in these RDMA backend networks, just the sheer density and the sheer bandwidth of these applications is a great driver for our business. And so I think for the first time, you know, we're talking about interconnects and connectivity being the bottleneck.
Right
You know, for, you know, for the entire, you know, system deployment. That's a good thing for a company that is a pure play on connectivity.
Got it. That makes sense.
I think it's definitely gonna be a catalyst for our business. As it relates to the near term, sure, I can point to many different specific things, but I think longer term is where it will really play itself out.
Got it. Maybe a couple of questions on your AEC business, and I know you don't want to get too specific with specific customers.
Sure.
But with the largest customer, I think when you made the revision back in February, we were getting questions: Oh, did the customer drop AECs? Did someone else take their business? Is it permanent share loss? Is it fair to say, in hindsight, given the information that you have today, it was a push-out, and your positioning is intact, and none of that is really-
Yeah, there's no question: our relationship hasn't changed at all.
Yeah.
I think that it was merely a shift in spend, and, you know, I think long term, I think we'll be well positioned for both general compute and when that recovers-
Mm-hmm
and AI, when that ramps. I, I feel great about the relationship.
Got it. Got it. And your visibility on the general compute side in terms of potential timing of recovery, I know, I know it's not easy, but what are your thoughts on that?
You know, so it's, it's a bit tough for me to talk about, but I think it's really a calendar year 2024-
Right
Type of time frame. And we can, it's hard to get specific whether that's midyear.
Mm-hmm
Or, after.
Got it. And then the second hyperscale customer in AEC that you often talk about on your calls, and I feel like your tone around that has gotten more positive and constructive. As you sort of compare and contrast how, you know, significant of a customer they could become relative to your first customer, like, as outsiders, how should we think about the opportunity set as you ramp into late 2023 and calendar 2024?
You know, I think generally the opportunity can be bigger. And, you know, I would say that, you know, the different areas of business that we can participate in might be broader in the sense that, yes, you know, we've got a, you know, position that will be growing in the general compute area. We've got a position that will be growing, you know, for AI deployments. But a third area is in switch racks, so in the kind of the front end, switching and routing network. Their strategy is to build their own switch racks, and, for sure, there's gonna be an AEC opportunity that's significant. So I would say collectively, it can be a larger relationship than, than our first customer.
Got it. Got it. And as we think about broader AEC adoption, as your customers transition to 800 gig, is it a must to transition to AECs, or can they still sort of, you know, stay on their conventional technologies, DACs or AOCs, or what have you?
Yeah. So we're seeing that, as you know, applications move from 25 gig per lane to 50 gig per lane, we're seeing that the bulk of our customers are transitioning to AECs versus, you know, trying to stick with passive copper technology.
Mm-hmm.
As we talk about moving from 50 gig lanes to 100 gig lanes, we really see it becoming, you know, a de facto standard where our architectures have short in-rack cabled connections. It's really driven by signal integrity. It's driven by form factor. It's more and more, you know, for the designs we're involved with, we see it driven by special functionality that we can build into the cabled solution.
Got it.
So for many, many reasons, we think, at 100 gig per lane, it's really going to become, you know, the de facto choice.
Mm-hmm
For short cabled connections.
Right.
We don't see DAC really existing for 100 gig per lane, and we think for optical, for many reasons, it's, you know, much higher power, much higher cost, and there's less flexibility with implementing special features.
Right. Right. So, I mean, we, we spent a lot of time talking about your largest customer and your second customer, but given what you just described, I'm guessing your, your customer engagements are significantly broader. Would it be-
Yeah, we've got conversations with-
Yeah
With really, you know, all of the hyperscalers-
Mm-hmm
At different stages in the engagement, but I think it's playing out very well-
Mm-hmm
Kind of in line with what we've always thought over the last several years.
Right. Right. And is there any pushback on, on AECs as a technology, as you sort of, you know, converse with?
I think the product category is being well accepted.
Okay.
There was, you know, the natural pushback, when we first started talking about this, you know, going back 4-5 years.
Right.
There was maybe a little chuckle because everybody was assuming that, you know, the thought for the last many, many years has been when passive copper runs out of gas, it's gonna be an all optical world, and that's simply not the case.
Right.
I think it's well accepted in the industry now.
Right. Got it. I think at the time of your, your IPO, you had estimated roughly half of hyperscale servers will use NIC-to-ToR AECs by 2025. Is that still the, the right sort of framework? Or, or have, have things evolved over the past couple years?
I think that, you know, first of all, the forecast that we reference, this particular one came from the 650 Group.
Okay.
We look at other forecasters as well that have picked up the market. And, you know, I think that the way that we view things is really on a bottom-up, design-by-design, hyperscaler-by-hyperscaler basis, and the top-down view looks like it may have shifted a year. So we were talking about 2025, maybe it's 2026 now. But I think the trend towards faster speed is happening, and as that trend happens, the AEC product category will grow.
Mm-hmm.
It could be even beyond what the forecasters say.
Right. You pioneered this technology, this segment. There is some competition, or there could be some competition in the future, maybe not so much today. Can you remind us, you know, how you're different? Obviously, you're vertically integrated, and we get all those points, but when you go to a customer, you know, why do they choose you as opposed to your peers? Maybe product availability.
Well, you know, I think it boils down to delivering what they want, when they want it.
Right.
They typically want it quickly. So the way that we're organized is, we take ownership of the entire system solution. So, you know, of course, we develop the chip, and I've got more than 100 people that are focused exclusively on the AEC system development at some stage, whether it's, you know, defining the product, engineering the product, qualifying the product, and ultimately being responsible for the product in production. Our competition is typically the cable manufacturers that are trying to transition from building passive copper cables to building something, you know, that's got a lot more intelligence. So the natural idea is: Hell, go buy a chip from a chip supplier in the market, and I'll integrate it.
And so what we find is, for our largest designs, you know, the customers we're closest to, we're on almost a daily iterative loop with our engineering team, directly with theirs, and that just simply doesn't exist.
Right
With the collection of companies that's trying to compete with us. You know, so I think the first place we'll see competition is in, you know, a standard type of product, maybe an 800 gig product, where it's got two connectors, it's a straight cable. It's a fixed target, and so I think that's an easier target to aim at versus a target that's moving-
Mm-hmm
where we're implementing custom features kind of on the fly with our customers. But those are the higher volume opportunities from our perspective.
Right. Right. So the standard stuff, is that more enterprise or which, which sort of, you know—
No, I would say it might be in the switching network-
Okay. Okay
as an example.
All right. Okay, Interesting. Have you sort of seen or identified any progress on the part of your competitors in coming to market?
We've, you know, we've heard a lot.
Mm-hmm.
We've seen very little in the field.
Okay. Okay, got it. In the Optical DSP marketplace, you know, the positioning is kind of reversed, right?
Right.
You're coming in as the follower, the disruptor.
Sure.
I think earlier on, you had talked about, you know, very, you know, strong progress in China. That market unfortunately weakened, and then more recently you talked about, you know, a win with the U.S.-based hyperscale customer.
Yes.
Maybe talk about the engagements there. Again, your differentiation in Optical DSP relative to the incumbent.
Sure. Yeah, so the roles are a bit flipped here.
Yeah.
And so we find ourselves as the challenger or the disruptor. And what we're bringing to the market, I think we're almost liberating the market from a supply chain standpoint, where the supplier, you know, that has incumbency is very much acting like an incumbent. And what we can bring is, you know, we can bring a sharpening of the, you know, the competitive factors. Of course, it's equal or better performance, it's equal or better power. But from a commercial standpoint, I think we can, you know. I think we can, you know, bring a lot of positives to our optical module partners and our end hyperscaler customers.
Okay. Yeah. And I think you talked about the business being potentially 10% of revenue.
Yeah, so-
next year, or?
Yeah, the objective internally is that we drive, you know, 10% of our revenue next year in optical-
Mm-hmm
as, you know, kind of a first starting point.
Okay. Okay. And you can get there with the current customer base, or are you kind of contemplating further wins in hyperscale?
So we've got lots of activity going with many optical module makers.
Mm-hmm
As well as hyperscalers. We feel that those, those activities will drive that number in fiscal 2025.
Mm-hmm. Got it. And like your other products, the N-minus-one aspect of your business is a big differentiating factor, right?
Sure.
Yeah.
Just to make sure everybody understands-
Yeah, yeah, yeah.
I'll reiterate N-minus-one. And so, 12nm is our workhorse process now, and so when we're competing with a product that we've built in 12nm, we're competing against a product in 7nm or 5nm. It's a little bit counterintuitive to think that we've got an advantage, but we make, you know, our decisions, primary decisions, based on connectivity. And so we look at, you know, what's needed in the design, and if we can do it in a process that is less expensive to develop, if I can deliver equal or better performance, equal or better power, do it with small die size, in a process that costs a fraction, you know, for tape-out costs and a fraction for wafer costs, I'm gonna deliver, you know, a really strong commercial advantage.
I'm gonna have a real cost advantage. And so that N-minus-one means that we've got to be fundamentally better at a core SerDes level, and that's really what we've proven over time. We've proven that advantage extends to 5nm, where, you know, the SerDes we've delivered in 5nm are 40%-50% lower power-
Mm
Than the competition. It'll extend to 200 gig per lane in 3nm.
Mm.
The idea is, to compete with us head-to-head, you'll need to go to a more advanced process, and that advanced process would be the N; we're in the minus-one-
Mm-hmm
geometry. So absolutely, it's delivering the kind of advantage commercially that we expected.
Mm-hmm. Mm-hmm.
And so, you know, if you're gonna play the role of disruptor against an incumbent, that is the main factor that you've got to bring.
Right.
The market we're serving is not a rich market. You look at the, you know, the financials of the optical module companies, and, you know, they suffer from being commoditized.
Mm.
They're in a very much a commodity competition, you know, and what happens there is, you know, the price is kind of who can go lower faster.
Mm-hmm.
And so if we're helping fuel that competition and helping the winners of that competition be more profitable, gain market share, that's really key to our success.
Right. Right. I guess if I'm a customer, and to your point, I'm suffering from relatively permanent, you know, challenged margins, it feels like a no-brainer. So what's the pushback? Is it a competitive response, or is it just time that solves this, or how should we think about that?
I think it, the end customer really matters.
Right.
I mean, I think the hyperscaler is the loudest voice in the room.
Right.
And so it's making sure that, you know, that they're included in the early conversations, they're included in the execution. And, you know, the bottom line is that's the, you know, that's the big hurdle that we cleared recently in ramping our first hyperscale customer,
Mm
here in the U.S.
And your presence in the AEC market, I would assume, helps.
Oh, sure
progress on DSP side, right?
Yeah, absolutely.
Right.
Absolutely.
Right.
So collectively, if you look at all of our solutions-
Yeah
you know, from Optical DSPs to AECs, to line card PHYs.
Mm-hmm
To SerDes Chiplets.
Mm-hmm
Even IP. You know, collectively, I think Credo is being recognized as a connectivity leader, and so there's no question that, you know, our footprint elsewhere, you know, leads people to absolutely believe that we can help them with optical.
Right. Right. And then you talked about SerDes Chiplets. I feel like this is a business that gets less airtime relative to AECs and optical. You've talked about two very large customers, and you've had a lot of success there. Maybe talk about the opportunity set as it pertains to SerDes Chiplets and how you see the business, you know, evolving going forward.
Yeah. We've been believers in the SerDes Chiplet market for, maybe too long.
Mm.
We've spent a lot of resources, and we've been successful with bringing two big customers to production: Intel and Tesla. Tesla's maybe a little bit more notable because it's on a leading-edge AI deployment. I think that you know, we're still believers long term in the benefit of looking at architecting systems differently. And I think we're seeing lots of interesting architectures come to market.
Mm-hmm.
You know, the Tesla Dojo AI cluster is really impressive on many fronts. And if you look at that tile design that they've talked about publicly, and they've talked about Credo being their connectivity partner, you know, it's really probably the most advanced, highest bandwidth example that we can point to in the market. Every one of their D1 ASICs has 576 lanes of our 100 gig IP, making that 57.6 terabits per second of bandwidth per die. There are 25 of them in a 5-by-5 matrix on their tile. Surrounding the tile are Chiplets. So there are 40 Credo Chiplets on every tile that sit underneath those connectors and heat sinks that you see on the edge. So a really, really unique architecture and something we've been working on for actually several years.
It's great to see them, you know, successfully bring that to market.
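The tile-level figures Bill quotes can be sanity-checked with simple arithmetic; this is just a sketch multiplying out the numbers from the conversation (the variable names are my own, not Tesla's or Credo's):

```python
# Back-of-the-envelope check of the Dojo tile bandwidth figures
# quoted in the conversation. All numbers come from the transcript;
# the variable names are illustrative.

lanes_per_die = 576       # lanes of 100 gig SerDes IP per D1 ASIC
gbps_per_lane = 100       # 100 gig per lane
dies_per_tile = 5 * 5     # 25 D1 dies in a 5-by-5 matrix
chiplets_per_tile = 40    # Credo chiplets around the tile edge

tbps_per_die = lanes_per_die * gbps_per_lane / 1000
print(tbps_per_die)       # 57.6, matching the quoted Tbps-per-die figure

tbps_per_tile = round(tbps_per_die * dies_per_tile)
print(tbps_per_tile)      # 1440 Tbps of aggregate die bandwidth per tile
```

The 57.6 Tbps per die follows directly from 576 lanes at 100 Gbps each; the per-tile aggregate is my own extrapolation across the 25 dies, not a figure stated in the conversation.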
The breadth of your customer engagements, I mean, given where the market or the industry is headed from a Chiplet kind of architecture perspective, I would assume your engagements must be broadening.
The conversations are definitely broadening. So the Chiplet, any Chiplet that we develop, we'll have the rights to take it to market, even though it might have a lead customer.
Mm-hmm.
You can imagine we've got, you know, a handful of other conversations happening around that same development. We've got another Chiplet that's in design in 5nm, and I believe will probably, you know, that'll be a good, good device to intersect a broader part of the market.
Got it.
But I'll also say that we're looking at the network inside the box or inside the server or appliance, and it's really a PCIe/CXL network. We're part of the UCIe consortium.
Mm-hmm.
And we believe that, although, you know, the Chiplet designs we've been involved in have been very much tailored for the lead customer, we do believe that there is the possibility that a standard Chiplet can emerge, especially from this standards body that's really been in place for a year or so. So that would be a Chiplet that would have UCIe.
Mm-hmm.
On one side, connecting to the die within the package, and PCIe, you know, on the other side, facing off-chip.
Interesting. That's fascinating. Maybe I will pause here and see if there are any questions from the audience. All right, I'll keep going. Maybe bring Dan into the conversation. A question on gross margins. You know, in the July quarter, you exceeded the high end of guidance. And I think you gave a couple of factors, and this was despite weaker IP revenue. So maybe remind us, what were some of the key drivers? Were they transitory, were they permanent? And how should we think about the go forward, given the.
Yeah.
The 63%-65% target?
Yeah, let me start with just kind of the broader perspective. Nothing has changed in terms of our long-term expectations when it comes to gross margin. 63%-65% is what we expect long term, and that's with IP being 10%-15% of overall revenue. Fiscal year 2023, last year, we ended at 58%, so there's, you know, 500 basis points minimum of, you know, expansion in the upcoming years. And what we've stated for this year, for fiscal year 2024, would be about 100 basis points year-over-year expansion, so 59%+. And what's really driving that, revenue-wise, we have modest growth, essentially, FY 2024 from 2023. So there are some underlying product mix changes in the year that are really driving that expansion this year.
That's all relevant because it was visibly on display in our Q1. Our product gross margin expanded by 700 basis points sequentially, and that was really driven by increased contribution of some of these emerging product lines, our chiplets, and our Optical DSP, which was material really for the first time on a quarterly basis. And we've always talked about AEC being at the lower end of our margin spectrum.
Mm-hmm.
So that was a, you know, a lesser component of the total for the quarter. So, there will be some quarterly variability throughout the year. But long term or for this year, we expect, you know, 59%+, for, for the entire year. And that's with IP being kind of right at the, at the high end of that long-term expectation.
Mm-hmm.
Call it 15%-ish of total revenue.
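Dan's margin trajectory is easy to trace numerically; a rough sketch of the figures he cites (FY2023 at 58%, roughly 100 basis points of expansion in FY2024, and the 63%-65% long-term target):

```python
# Gross margin trajectory as described in the conversation.
# All percentages come from the transcript; variable names are mine.

fy23_gm = 0.58                        # FY2023 actual gross margin
fy24_gm = fy23_gm + 0.01              # "about 100 basis points" of expansion
target_low, target_high = 0.63, 0.65  # long-term target range

# Expansion still needed from FY2023 to the bottom of the target range
bps_to_target = round((target_low - fy23_gm) * 10_000)
print(bps_to_target)                  # 500, the "500 basis points minimum"

print(round(fy24_gm * 100))           # 59, i.e. the "59%+" full-year view
```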
Right. So obviously, in the near term, product mix is probably the biggest driver of.
Yeah
Gross margins, for the upside and the downside. But as you kind of look at your profitability on a product-by-product basis, where is the most sort of potential, if you will, as you think about your business over the next couple of years?
Well, if you just talk about gross profit.
Mm-hmm
AEC, we've always said will be, you know, 50% plus of the overall revenue mix.
Mm-hmm.
Even though the gross margin percentage might be on the lower end, from a gross profit, you know, dollar perspective, that's where the largest contribution
Mm-hmm
Comes from.
Got it. Got it. Bill, maybe back to you. So, you know, wafer pricing has been relatively inflationary the past couple of years, and.
Nice way of saying it.
Right. Right. The past couple of years, I guess the outlook into next year is probably still uncertain, but there are various views. Some people are saying it's gonna be up. You know, How do you think about your foundry strategy? Obviously, TSMC is a very strong supplier. You've got, you know, a very strong relationship there. But as you think about, you know, your business growing over the next, you know, several years, you think about the geopolitical backdrop, what's sort of the debate internally as it pertains to who you source from?
Sure. So we've been fortunate to have great supply partnerships, including TSMC. You know, they've done a fantastic job of cultivating our business. And you know, the situation, the reality is that you know, we can achieve our objectives with TSMC as a sole source. However, as we look out towards the future, it really becomes a conversation about resources and priorities. Obviously, I think it's pretty clear that having multiple sources is a good idea. And you know, I think a lot of our customers that we talk about optical, I think there's a desire to have multiple sources, and that's really driving our business. So I think the same principles exist for us.
I think it's a little more complicated when we think about the resources, but more and more, as we look at next-generation processes, I think the gap can be bridged by our tool suppliers. And so, you know, it's hard to say. I can't, you know, say anything regarding the near term.
Mm-hmm.
You know, but obviously, we're looking at all options long term.
Right. Right. But there's always an offset in that having multiple lanes from a foundry recipe perspective is expensive, right?
Sure.
Right. Right. Okay, got it. I guess from an OpEx perspective, and this is for both you, Bill and Dan, how should we think about the leverage going forward? I know you've got common R&D. It's a good platform. You leverage it across, you know, different applications and various customers. Should we expect? Is it fair to expect pretty healthy OpEx leverage as sort of business comes out of this, you know, correction, and you start to grow into 2024?
Yeah, we expect to see that on a quarterly basis this year. That'll be really on full display, right? We, you know, we faced a decision two quarters ago on what to do OpEx-wise as we had our reset with our largest customer, and our choice was, with all of the organic opportunities in front of us, to continue to invest strategically in areas that will yield a nice ROI in the future. So that meant we had a small loss in our Q4 of last year and Q1 of this year. That returns to essentially break even this quarter and to profitability in the second half of the year. Our OpEx growth is really what we've termed modest sequentially throughout the remainder of this year, but the top-line growth is, you know, at or near 20% sequentially.
So we expect to return to double-digit operating income by our Q4, which is what we had established as a goal. And so you'll see that leverage occur throughout this year, and that should continue in the future, as we do have a very highly leveraged R&D organization.
Okay, great. I'll pause here again. Any questions? Good. Good. Maybe in the last couple of minutes, Bill, as you spend time with investors and analysts, anything about the, you know, Credo story that people overlook or underappreciate, or anything about the broader market that you feel like collectively, analysts and investors don't quite get or understand, if you will?
I think this group is a pretty smart group. I always learn a lot when I come to conferences like this. But I think that, you know, we've been talking about the applications that AI in particular, we've been talking about this for several years.
Mm-hmm.
And we've been talking about it, you know, in terms of our business really being fueled by faster speeds. And the bottom line is, we've been working on many, many different AI deployments, and I think up until 2023, it was hard really to draw a straight line from point to point, saying-
Mm
How and when is this gonna benefit your business? But I think it's absolutely right in focus now, when people talk about the application generally. Although we don't, you know, track it that way, obviously most of our business long term is gonna be, you know, driven by AI, because that's really the catalyst towards, you know, faster speeds. When in time have we talked about connectivity being the big problem to solve, right? If we look at AI clusters and we look at the scale-up and scale-out, you know, that is on the roadmaps, you know, there's a real catalyst to move to 200 gig lanes now. And so, you know, when in time has there been that catalyst?
If we think back to the 400 gig market, all of the forecasters were saying it was gonna be a hard cutover. There's only one customer that made the cutover, and it was a form factor-driven decision. This is really the first time that I can remember.
Hmm
That connectivity is in the spotlight. And so when you're a pure-play connectivity company, boy, we find ourselves in a great spot right now as we look forward. And so I do think the investment community is right on, you know, as it relates to our business and, you know, the fact that we see this ten-year mega trend that is gonna continue, and it's really very clear to identify why it's being fueled and how it's being fueled.
Do you get the question, the Ethernet versus InfiniBand?
Yeah, that's a popular question, and I think that, you know, coexistence is really what we see. I think you can build a model if you go data center by data center, and you would say, "Hey, Amazon, is InfiniBand big there?" I'd be interested in your model. It probably wouldn't be something you'd say is big there. As an example, you could go data center by data center, and I think that our growth is really fueled, our business is really fueled by the hyperscalers.
Right.
You know, we see Ethernet being very popular, so we're involved in lots of different deployments, and I think it's a function of time.
Great. Really enjoyed the conversation. Thank you so much.
Yeah, thank you.
I appreciate the time. Thank you.