Joining us at the Mizuho Tech Conference. I'm Vijay Rakesh, Senior Semiconductor Analyst at Mizuho, and joining me today is Bill Brennan, CEO of Credo Semiconductor. Bill, welcome.
Thank you.
Thank you for joining us, and I appreciate the opportunity.
Thank you.
For more than 25 years, Bill has been leading and scaling organizations to deliver steady revenue growth and profit. Bill joined Credo in 2013, leading the company to its IPO in January 2022. He has overseen Credo's growth from a small startup to an organization of more than 500 employees. Under his stewardship, the company has pioneered revolutionary networking technologies based on Credo's high-performance, low-power SerDes technology, including Active Electrical Cables, AECs—guys, you'll hear a lot about those today—and LPOs, Linear Pluggable Optics, some of that too. Under Bill's leadership, Credo has emerged as a gold standard for high-speed connectivity solutions, delivering revolutionary retimers, gearboxes, and MACsec cables. With that, let's join in welcoming Bill. Thank you for joining, and let's get started. Okay, Bill, here we go.
Let's go.
AEC cables. It's been the big chunk of your revenues. I mean, it's been a big driver for you. I think since your IPO, your AEC cable revenue has grown almost 10X, I believe. Just amazing. I don't think anybody saw it growing that fast. Maybe you can talk to us about the competitive moat, what is driving that business, and how you see that going forward.
Sure. We are a company that delivers connectivity solutions. I've recently talked about the three different tiers of innovation, and this will ultimately get to the competitive moat that we're building. Everything for us starts with SerDes. It's a super difficult circuit that's critical for any kind of high-speed connectivity. I believe that we go deeper and broader in our SerDes IP portfolio than any of our competitors. We're very, very much application-specific in our mindset. I think there's a moat that's created at the SerDes IP level. The second tier of innovation is really ICs, integrated circuit designs. We do a lot of different and special things with our IC design, so we add to the competitive moat there. We do things with more efficient power, lower cost, and higher performance, in addition to adding circuits that deliver higher-level features.
The third tier is really the system-level solution, and that's delivering products that are integrated within our customer's network at a level that surpasses the IEEE standard. If you look at these three tiers of innovation combined with what we're doing from a software and firmware perspective, all of this adds to the competitive moat. I think we're unique in the way that we're approaching this product category, in the sense that we're vertically integrated. From that system level, we define the products with our customers. We develop them entirely. We qualify them—a very intensive qualification—where we qualify the entire link from switch to NIC running at speed, over temperature, over voltage. That's done in-house at Credo. Ultimately, we're responsible for every aspect of production. I've got a silicon operations team, and I've got a system solutions operations team.
It's really the way we're organized that adds to the competitive moat. It's very difficult to compete with us.
Got it. I think you mentioned the chip side and the software side. On the chip side, obviously, you talked about how you're at N minus 1, at 10 nm, 12 nm, and you're able to beat your peers on power consumption or cost. How do you see that as you go forward? You're already below your peers in many cases, like on SerDes, when you look at Broadcom, Marvell, all these guys. How do you see that as you go to 5 nm, which I believe tapes out next year, or?
5 nm, we're in flight already. We've got products that we've introduced, but 3 nm is what's in the fab now.
How does that compare to what's out there? What does the competition do when you start coming to market with a 3 nm?
Yeah, so I think the competitive arena has changed shape over the last 10 years for us. I mean, we're an existence proof that we do things better at a SerDes and an integrated circuit level. We wouldn't exist if we were just as good as Broadcom or just as good as Marvell. Where it might have been a little bit simpler 10 years ago, where you had to deliver something that's super high performance with very low power and low cost, it's kind of shifted today, especially because the bulk of our products are system-level products. I would say number one today is reliability. Number two is power. Cost is always important. It's important that you approach your business with that mindset.
More and more in the AI application, when we think about putting 10,000 or 100,000 or a million GPUs together to form one huge supercomputer, understand there is not redundancy in that appliance-to-switch connection. If you look at, say, optical technology, laser-based optics, they are very, very susceptible to link flaps, a momentary loss of connection, which will cause the entire cluster to go down. You have to start your training run over. It's a massive loss of productivity. Our customers are now putting reliability at the top of the list from the standpoint of what's important. We've won business where we've actually replaced optical connections, because our AECs are well more than 100 times more reliable than laser-based optics.
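To put rough numbers on why that lack of redundancy makes per-link reliability dominate at cluster scale, here is a minimal Python sketch; the flap rates and the one-link-per-GPU assumption are hypothetical, illustrating only the roughly 100x reliability claim above.

```python
# Back-of-the-envelope only: the per-link flap rates and the one-link-per-GPU
# assumption are hypothetical placeholders, not Credo or customer figures.

def expected_flaps(num_links: int, flaps_per_link_per_year: float, run_days: float) -> float:
    """Expected number of link flaps across a cluster during one training run."""
    return num_links * flaps_per_link_per_year * (run_days / 365.0)

# Assume one appliance-to-switch link per GPU and a 30-day training run.
for gpus in (10_000, 100_000, 1_000_000):
    optical = expected_flaps(gpus, flaps_per_link_per_year=0.05, run_days=30)   # assumed optics flap rate
    aec = expected_flaps(gpus, flaps_per_link_per_year=0.0005, run_days=30)     # assumed ~100x more reliable
    print(f"{gpus:>9,} GPUs: ~{optical:8.1f} expected flaps (optics) vs ~{aec:7.2f} (AEC)")
```

Whatever the exact rates, the number of interruptions scales linearly with link count, which is why a single-rack annoyance becomes a cluster-level productivity problem at 100,000 GPUs.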
Got it. I think so that brings us to an interesting point, which is the hardware side. Obviously, you have a pretty good lead, but then you also bring in a good software overlay on there where you're able to kind of predict where the next failure point is or predict a link flap, which actually helps the customer get more uptime on the server, might be going to some predictive maintenance, etc. Maybe you can talk to that because that's a big differentiator now on the AEC side versus many of your peers, right?
Right. It's a great point. The way that we work with our customers, we try to make the connections that we're providing more valuable, more feature-rich. One of the things we've learned with our AEC business is that if we're open to innovation, our customers bring a lot of really interesting ideas. Now, you're talking about an area where engineers have not been able to innovate. We're talking about replacing passive copper cables, basically a jumper cable, or looking at the optical space. There's no real innovation above the IEEE spec. We have developed a lot of interesting technology that allows our customers to do a lot of different things. Telemetry is one of the big value adds that we've delivered. It means a network engineer can see link health across the entire rack.
Instead of having a green light only and then a red light when you get a link flap or when you do have a link that's degrading, having a check engine light or a yellow light is super valuable. During a maintenance cycle, they can go proactively replace those links that are likely to degrade further. Bringing that across the product family is really our next objective.
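As a hedged sketch of what that check-engine-light idea could look like in software, the snippet below classifies links from telemetry readings; the field names and thresholds are invented for illustration and are not Credo's actual telemetry interface.

```python
# Hypothetical sketch of turning per-link telemetry into a "check engine light".
# Field names and thresholds are invented; a real AEC telemetry interface and its
# limits would come from the vendor's software and firmware stack.
from dataclasses import dataclass

@dataclass
class LinkTelemetry:
    link_id: str
    pre_fec_ber: float         # pre-FEC bit error rate reported for the link
    corrected_cw_ratio: float  # fraction of FEC codewords needing correction

def link_status(t: LinkTelemetry) -> str:
    """Classify link health: green = healthy, yellow = degrading, red = failing."""
    if t.pre_fec_ber > 1e-5 or t.corrected_cw_ratio > 0.5:
        return "red"     # likely to flap; replace now
    if t.pre_fec_ber > 1e-7 or t.corrected_cw_ratio > 0.1:
        return "yellow"  # degrading; swap during the next maintenance window
    return "green"

rack = [LinkTelemetry("nic03-tor1", 3e-9, 0.02), LinkTelemetry("nic07-tor1", 4e-7, 0.15)]
for link in rack:
    print(link.link_id, link_status(link))
```

The point of the yellow state is exactly the proactive-replacement workflow described above: a degrading link gets swapped during a planned maintenance cycle instead of taking the cluster down mid-run.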
Is that something that you can roll out on existing customer deployments already, or is that only with a new ramp that you can roll in the software?
There's different points to engage, and it's really based on the customer's willingness to take us down that path. Yeah, there's no limitation on when it can be implemented.
Okay. On the AEC side, going back, obviously you have grown 10X, but investors, they're not really satisfied with the 10X growth in revenue.
What have you done for me lately?
Yeah, exactly.
Yes.
As you look out, how do you see that pipeline growing, either from a customer diversification standpoint or just increasing engagement at your existing customers?
Sure. Yeah, I think there's going to be natural diversity as we expand our footprint with the leading hyperscalers in the world. We've mentioned that last quarter we had three customers that were over 10% of revenue. We mentioned two additional customers that we'll be ramping this year. One will happen first, probably in the late second quarter or early third quarter, and then the other will follow. You will see customer diversity over time, but we're also really at the very early innings of the product category being adopted across the customer spectrum. There are different areas within the network where AECs are a good choice. I mentioned there's a front-end network, so the front-end network connection from the NIC to the ToR. There's also scale-out back-end. There's two pieces to that.
There's the appliance-to-switch connection, and then there's also the switch rack that sits within the row dedicated to the AI cluster. That switch rack application is also in the front-end network. There are lots of different areas where AECs should be deployed. You will see a natural expansion within those applications. Most recently, there is a real expansion as we look at rack-to-rack connectivity. Previously, we had kind of been the intra-rack connectivity de facto as people moved from 25 Gb per lane speeds to 50 to 100. Almost everybody converted at 50 Gb per lane, but really across the board for our customer base, 100 Gb AECs in rack are the de facto solution. Now that there are new technologies being deployed as they build new data centers, specifically liquid cooling and dramatically greater power sourcing, there is an example with one customer that we've got.
They were originally building an 18-rack row, two switch racks and 16 appliance racks. They were in a facility that was air-cooled. The power sourcing was more traditional, say less than 15 kW per rack. It was not a very dense deployment. As they moved into a building with liquid cooling and the kind of power sourcing that you would need, maybe 10X the traditional power sourcing, they were able to compress that footprint from 16 racks of appliances. They were able to quadruple the density, so only four appliance racks and two switch racks. Because eliminating link flaps is the top priority, they asked us to build 7-meter cables, and that is how we ultimately introduced the zero-flap AEC family back in the fourth quarter. Now you are seeing the expansion of TAM because we are actually replacing optical connections.
This concept of going from intra-rack only with all these different applications, now we're looking at expanding the market to rack-to-rack connections that can cover a densely populated liquid-cooled row.
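The arithmetic behind that row compression can be sketched roughly as follows; the rack width and the liquid-cooled per-rack power are assumptions layered on top of the figures mentioned above (16 appliance racks at under 15 kW versus four at roughly 10x the power).

```python
# Illustrative arithmetic only: rack width and liquid-cooled per-rack power are assumptions.
RACK_WIDTH_M = 0.6  # assumed footprint width per rack

def row_span_m(total_racks: int) -> float:
    """Rough side-to-side span of a row of racks."""
    return total_racks * RACK_WIDTH_M

air_racks = 2 + 16    # stated: two switch racks plus 16 appliance racks, <15 kW each
liquid_racks = 2 + 4  # stated: compressed to four appliance racks at roughly 10x the power
print(f"Air-cooled row:    {air_racks} racks, ~{row_span_m(air_racks):.1f} m across")
print(f"Liquid-cooled row: {liquid_racks} racks, ~{row_span_m(liquid_racks):.1f} m across")
# The compressed, liquid-cooled row keeps switch-to-appliance runs within roughly 7 m,
# which is what makes longer-reach copper (the zero-flap AECs) viable rack to rack.
```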
To that point, you already have a massive proof point with Amazon, one of the big customers. They're ramped well. Are you seeing other customers start to look at these proof points and say, "Hey, we can ramp up to that scale"? Are you able to kind of use that example as a starting point in your other ramps?
Yeah, I think that we can look at the opportunity from a top-down level, and we can maybe build a model on CapEx and ultimately what would be the portion of that CapEx that would be spent on connectivity, specifically on connectivity that's maybe seven meters or less. I think from my perspective, I view all of the six hyperscalers in the U.S. that we consider, they all have the ability to drive a very large number from a revenue standpoint, especially as they deploy new technologies and move on to next-generation deployments. I think they all can be 10% customers long-term.
Sticking with the theme of cables, I guess, I want to go to the scale-up side. I think the new trend that's coming on is your leaf and spine are starting to flatten out. There's more clustering of GPUs. They're going from 72 to 128 to 512 to 1,000-GPU clusters. Maybe you can talk to the scale-up side. What's the opportunity there versus the scale-out that you do today? What differentiates you from the others?
Right. Just to be sure, we're talking about the back-end network in an AI cluster, scale-out being an Ethernet protocol, scale-up today being PCIe protocol or with NVIDIA, it's NVLink. We look at that scale-up architecture, that scale-up network really starts within the appliance. The market today is largely connecting GPUs within an appliance. This is a retimer market. What we're seeing is that there are rack-scale deployments happening. If you can get higher performance by linking all of the GPUs directly in the appliance, it definitely makes sense that connecting all of the GPUs in a rack will give you higher performance. There is an argument to take that network and go row-scale, directly connecting all of the GPUs in the row.
When we think about networks, when we think about front-end, traditional, what we all thought of before two or three years ago, we think the scale-out opportunity is a 10X opportunity from a volume perspective. We think scale-up, if you play it out and you go all the way to row-scale, is another 10X from a volume standpoint, so 100X the front-end network. It's an absolutely massive opportunity. We have to talk about protocols. Currently, most of the deployments right now are PCIe Gen 5. It's really the language of CPUs, the protocol that comes from the x86. It's natural that because it's native on the board, PCIe was used to connect the GPUs directly together. In the future, the speed needs to go much, much faster than 32 Gb per second.
There's a natural path to PCIe Gen 6, which doubles the bandwidth to 64. That is still a third or a quarter of what NVIDIA is achieving with NVLink at 200 Gb per second. Our business over the next, I would say, 18 months to three years, the core of the scale-up opportunity for us, will be retimers, PCIe retimers and AECs. After that, there's a large discussion happening in the industry about where the protocol lands. Of course, people are familiar with the UALink effort that's being led by different companies within the industry. We've heard recently from NVIDIA about NVLink Fusion. We heard recently from Broadcom about Scale-Up Ethernet, or SUE. For us, we view the AEC solutions we make as layer one. It is a bit stream.
To give you an example, with our InfiniBand cables, we were just at the interop effort that took place last week. The only difference between Ethernet and InfiniBand at layer one is just the handshake. When you plug it into a port, you have to raise your hand and say, "I'm an InfiniBand connection." Being at layer one means that whatever protocol war happens between NVIDIA, Broadcom, and the UALink group, we will be able to support all of it with universal solutions. It is a very interesting conversation, because if we think about NVLink Fusion, we are really talking about the GPU maker, right? They have to decide what SerDes to integrate into a chip that costs hundreds of millions of dollars to develop. If you choose to go with the NVIDIA IP, you are married to them, right?
You're going to be buying these chiplets that enable you to interface from the GPU to the outside world. The Broadcom approach is really a hack of the Ethernet protocol. What they did was they hacked it, they fixed the packet size, ultimately reducing latency for short connections, really targeted at the row. It gets you 80% of the way there. That is something that you could implement today. There is a time-to-market challenge with all of these. When we think about UALink, this would be a brand new ecosystem. There aren't switch suppliers that exist. Broadcom pulled out of UALink, and they're going SUE.
I want to get to that in a little bit. I think the key is, when you look at the top of the rack, your AEC cable, that itself has driven massive growth, 10X. This scale-up opportunity is something that was not even talked about at the IPO. Now that is starting to gather steam, get you design wins, and there is a massive ramp ahead of you. You talked about design wins on the scale-up side in the second half of 2025, which looks pretty quick. How are you able to differentiate on scale-up versus the existing business? What is different? Is it power? Is it performance? PCIe is an industry standard. How are you differentiating versus the competition?
Yeah, to be clear on the timing of ramp, we view calendar 2025, the year we're in, is really a year of design wins. 2026 is when we're going to see the first ramp. I think we'll see retimers first, followed by AECs. If we talk about PCIe specifically, you want to get down to the SerDes level?
Yeah, yeah.
Sure. Okay. So normally I think.
I'm sure investors want to know why.
The discussion about latency with the scale-up network is really important. With PCIe at a SerDes level, there's reach, which is how far a signal can travel and still be recovered at the other end of the wire, whether you call that the receiver, the clock and data recovery, or the DSP, many different ways to talk about that. Beyond reach, we talk about the energy required to get the signal across the wire, and cost. In addition to that, you've got to throw latency in. There's a tug of war between reach and latency. If you want low latency, you've got to reduce your reach. I think because we focus so deeply on building application-specific SerDes, we've delivered our PCIe SerDes with the ability to handle greater than 40 dB of loss. At the same time, we're able to get well below 10 nanoseconds of latency.
Where is the competition when you look at that 43 dB?
There is a trade-off.
The distance versus signal loss, yeah.
The distance versus latency. We see a lot of our competitors dropping below 40 dB into the 35 dB range, which means for certain links there is not going to be enough loss-handling performance. On latency, we see the competition at 10 nanoseconds. We are at six. We have successfully offered both reach and low latency. That is a huge differentiator. In addition, I think if you look at our Pilot platform, which is really a software and firmware platform for development and debug, we are going to be faster to market. In the Ethernet world, going from 25 Gb to 50 Gb to 100 Gb, this software capability has been built over a decade. It is really the way that you get to a high-yielding board design quickly.
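A minimal sketch of that reach-versus-latency trade-off, assuming illustrative per-meter cable loss and connector loss figures rather than characterized numbers, might look like this:

```python
# Hypothetical link-budget check; the per-meter loss and connector losses are
# illustrative assumptions, not measured figures for any particular cable.

def channel_loss_db(length_m: float, db_per_m: float = 5.0, connector_db: float = 3.0) -> float:
    """Rough end-to-end insertion loss: cable loss plus two connector/board ends."""
    return length_m * db_per_m + 2 * connector_db

def link_closes(loss_handling_db: float, length_m: float) -> bool:
    return channel_loss_db(length_m) <= loss_handling_db

for length in (3, 5, 7):
    print(f"{length} m: channel ~{channel_loss_db(length):.0f} dB | "
          f"40 dB SerDes closes: {link_closes(40, length)} | "
          f"35 dB SerDes closes: {link_closes(35, length)}")

# Latency also compounds across hops in a multi-hop scale-up path:
hops = 4
print(f"{hops} retimer hops: ~{hops * 6} ns at 6 ns/hop vs ~{hops * 10} ns at 10 ns/hop")
```

With assumed numbers like these, a few dB of extra loss handling is the difference between a longer link closing or not, while each nanosecond of retimer latency is multiplied by the number of hops in the path.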
Yeah. So one of the things you mentioned is NVLink obviously has much higher bandwidth than PCIe Gen 5, which, let's say, is at 32 Gbps, versus 200. When do you start to see something competitive to NVLink? You mentioned UALink and SUE, but obviously with UALink it looks like the sponsors are dropping away, I guess. It's only AMD left. How does SUE stack up on bandwidth versus NVLink versus, I guess, Ethernet or maybe PCIe Gen 5 itself, versus what you will be doing?
Yeah, so that's the key point. From our perspective, what we see is the same SerDes, a 200 Gbps SerDes, that is common amongst all three different standards. For us, again, we have the ability to build a universal solution.
Got it. Okay. When you look at going to market with the scale-up product, would the first customers be the initial AEC adopters? Would that be the route to go? Because.
Yeah, my feeling is that we're going to see a natural progression from a retimer market where we're selling ICs to AECs as that network wants to go rack scale. An advantage that you get with going to liquid cooling, you get this compression in your footprint on the row. Now when we talk about going row scale, we can also talk about using AECs for that, which gives you, again, highest reliability, which is the critical factor.
You also mentioned, sorry to go back to the AEC cable, that a big chunk of your shipments are still 50 Gb per lane. Obviously, your 100 Gb per lane is coming, where AEC becomes a much bigger use case. Can you talk to how that should logically accelerate AEC adoption as you start to see Tomahawk 5 or 6 start to ramp here and that whole bandwidth per lane start to increase, I guess?
Yeah. Our first couple of customers, we were at lower speeds than 100 Gbps. The work that we're doing with xAI, that's straight to 800G ports, 100G per lane. We're connecting exclusively with NVIDIA GPUs. The next customer to ramp is at that bandwidth as well, also connecting exclusively to NVIDIA GPUs.
You haven't said the name yet?
What? No, you're tricky. Yeah, we're in a competitive world, so. I guess the point is that we're going to see a natural trend as speeds increase and as reliability comes more into focus; these trends are going to lead to AEC adoption more broadly.
Got it. I want to go to the optical DSP side. I think it's a small market for you. It's a small revenue for you now. Like your AEC side, it's grown like 10X in the last two years, three years.
From a smaller base, right?
From a smaller base, definitely. Obviously you are expanding significantly in that space. You said you have three customers and you'll add two more by the end of the year. Can you talk to what's differentiating you from the competition and how you're able to break into that market? When you look at that customer set there, obviously it'll be CSPs, but does it kind of line up with how the AEC customer ramps look?
I haven't seen a correlation in ramps yet. If I look at the offerings that we're bringing to market right now, we offer absolutely the lowest-cost solution with our 12 nm solution. It's got very competitive power, you can build 15 W optical modules, and it's got very competitive performance. We just announced our 5 nm product, an 800G DSP, and that is by far the lowest-power solution in the market. In fact, with our LRO variant on that, compared to the standard 15 W, you can achieve 9 W of module power. As power and energy efficiency become more important, we're seeing a huge amount of interest coming out of OFC on that product.
Ultimately, what we're trying to do is also deliver system-level features to our customers to allow some of those interesting things that we've done with AECs to allow that in the optical space as well.
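For a rough sense of what the 15 W versus 9 W module power difference means at scale, here is a quick illustrative calculation; the module count is a made-up deployment size, and the wattages are the ones cited above.

```python
# Quick illustrative arithmetic; the 50,000-module deployment size is invented.
modules = 50_000
full_dsp_w, lro_w = 15, 9   # standard full-DSP 800G module vs the LRO variant, as cited

savings_kw = modules * (full_dsp_w - lro_w) / 1_000
annual_mwh = savings_kw * 24 * 365 / 1_000
print(f"{modules:,} modules: ~{savings_kw:,.0f} kW lower draw with the LRO variant")
print(f"Roughly {annual_mwh:,.0f} MWh per year, before counting cooling overhead")
```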
When you look at, since you're talking about optical, I want to go to the CPO side of this.
Fun conversation.
There you go. There is all the talk about CPO at the top of the rack at the switch, or CPO at the GPU level. Now, as the cluster sizes increase from 72 to 128 to 512, do you see CPO starting off more at the GPU than at the top of the rack, or how do you see that playing out?
Yeah, I see CPO taking off first in the higher level switching.
Switching across the rack.
I don't see it happening anytime soon from the GPU to the top-of-rack switch. That will be an electrical connection, for reliability. CPO has one big pitfall that has not been answered yet, and its acronym is RAS: Reliability, Availability, Serviceability. This is core, right? If you can't address that at a fundamental level, especially where it's needed, which is from the appliance to the first switch, the top-of-rack switch, you don't even get past the starting line. Even after that, you've got to consider that it's not the ultimate low power. There are ways to do it with better power efficiency. In fact, copper, I mean, is fundamentally lower power. That's power efficiency.
How does it compare versus optical when you look at copper?
We look at half the power, right? And then at the end of the day, I mean, cost does matter. When you're talking about a supply chain, the optical transceiver and AOC manufacturers live in a neighborhood of 20-30% gross margins. If that moves into a neighborhood of 70% or higher, it means it's going to be seriously unaffordable. But for us, we don't see CPO happening before the 200T generation of switches. And when it does happen, it should happen in the higher levels within the hierarchy of switching.
Got it. When you look at the scale-up versus the scale-out opportunity, obviously on scale-out you guys have been playing this game and you've been able to knock down a couple of design wins there. Scale-up is a new opportunity for you. How do you compare the technical complexity of scale-up versus scale-out? Where do you see the competitive moat being more challenging, on scale-out or scale-up?
You know, I think the technical challenge on both is high. This is high hurdle stuff that we're doing, which means there's only going to be a few companies that we can even talk about as potential competition. I think you'll see a diversification of our business at this protocol level. The scale-up opportunity is one that we're going to grow into over the next five and even 10 years.
Got it. When you're talking scale-up, maybe you can talk about the SUE side. Are you part of that, of the SUE group? When do you see that actually pick up versus NVLink? Are you seeing CSPs actually ask for scale-up versus NVLink?
Yeah. So again.
We are not talking about NVLink Fusion too, because obviously.
Right, right. You could consider Credo as not having a dog in this fight. Again, we're layer one, on AECs in particular. As that plays out, what we are looking for is the solution that happens fastest. My belief is that the Broadcom solution can happen very quickly, as opposed to NVLink Fusion or UALink.
Just because of the way how the Ethernet side has evolved, I guess.
Yeah, it's an extension of the existing ecosystem. When you're looking at kind of a hacked spec of a protocol that is very, very.
Familiar, people are not.
Like well deployed. You're talking about standard Ethernet solutions that can become very low latency if you follow the SUE protocol. The hurdle to actual time to market is lowest, I think, with SUE. Again, we don't have a dog in the fight. When the market moves that way, we're going to have products that are extremely compelling for making those in-rack and rack-to-rack connections.
What does NVLink Fusion do to that equation?
It offers one more alternative, right? It's.
It's still NVLink, but it just opens a protocol to.
Yeah, I mean, to deploy NVLink Fusion, if you're a GPU maker, you would go get the IP from NVIDIA, embed it in your GPU, and then you would have to buy a chiplet from NVIDIA for that off-GPU communication. It's pretty complicated. Again, I don't have a dog in the fight, but that's a pretty long time to market. It involves making decisions that people really have to think through. Having some dependence on NVIDIA, I think that's why UALink was started, to try to offer a standard that was really independent from the other efforts. A combination of the ecosystem being there, time to market, and ultimately the cost of doing it, I think that's going to determine the winner.
For us, I think, I mean, we're going to be supportive of all three, and we're going to do whatever we can to help the time to market piece. Sooner is better.
When you look at the top of the rack, obviously the move from InfiniBand to Spectrum-X to Ethernet to AEC took a little bit of time, but it's happening. On the NVLink side, is the move from NVLink to NVLink Fusion to, like, an Ethernet or PCIe standard a faster transition? Is that an easier transition because of an industry standard there? What's the cost benefit for a CSP? Because we hear CSPs basically saying they want Ethernet rather than NVLink, but what's the cost of that transition?
Yes, I think you've got to consider that everything on the scale-up side that's outside of NVIDIA right now is all PCIe. Even at 64 Gbps, the Gen 6, which will happen in calendar 2026, that's just a third or a quarter of the bandwidth that you get with NVLink. There is this massive gap that the rest of the world is looking at. How do they close the gap from 64 to 200 Gbps? That is the real question. I do not think it's about anything but how you get the fastest time to market to a scale-up protocol that's got very low latency over short distances.
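A small sketch of the per-lane bandwidth gap being described, using the nominal per-lane rates cited in the discussion (32, 64, and 200 Gbps) and ignoring encoding overhead:

```python
import math

# Nominal per-lane signaling rates as discussed above (encoding overhead ignored).
lane_rates_gbps = {"PCIe Gen 5": 32, "PCIe Gen 6": 64, "NVLink (as cited)": 200}

target = lane_rates_gbps["NVLink (as cited)"]
for name, rate in lane_rates_gbps.items():
    lanes_needed = math.ceil(target / rate)
    print(f"{name:<18} {rate:>4} Gbps/lane -> {lanes_needed} lane(s) to match 200 Gbps")
```

In other words, a Gen 6 lane would need roughly three to four lanes' worth of bandwidth to match a single 200 Gbps lane, which is the gap the scale-up protocol debate is trying to close.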
Got it. Obviously you've now moved a lot of your AEC cable manufacturing out of China, to Malaysia, I guess. Maybe you can talk to how you're lining up that capacity, because right now it's mostly geared to AEC cables. Your optical DSPs go to TSMC, but what's the capacity that you will need for scale-up? Can it be built in the same factory? How do you envision that capacity rolling out?
Yeah, so the manufacturing of cables, it's not on par with the manufacturing of chips. Adding a fab is a massive, massive investment. Adding more lines to produce AECs, it's a relatively small investment that is relatively quick to implement. We just went through a very vertical ramp driven by our largest customer, and we were able to add significant capacity in less than six months. We are talking about from a volume standpoint, nearly doubling the volume that we were producing in less than six months. I do not lose sleep over any kind of forecast.
Another opportunity coming that's 10X the size, the scale-up, so.
Yeah, it's basic blocking and tackling to add more capacity. From a geographic diversity standpoint, of course, you know that we started working on diversifying well more than a year ago, and it puts us in a position to be very flexible with where we produce. I think we're in good shape regardless of where we land on the tariff situation.
Okay, great. I think we have about five minutes. Maybe we'll take some questions. Any questions in the audience? There you go.
Just kind of curious what you think the roadmap for copper could be. Like I know at 400 Gb per lane, copper seems almost impossible.
Are you a physicist?
No, I just like going to the conferences.
Yeah, me too.
I mean, they tell you it's like inches, right?
Right.
Yeah, what does that mean? Does that mean PAM6, PAM8? How do you think you can scale past that?
Yeah, so I've been in this industry for 12 years. I joined Credo 12 years ago, and I didn't know much about networking. I spent a lot of time in the hard disk drive world. When I joined the company, and I take advice from anybody that wants to give it to me, I was told that we should only focus on optical, because copper was going to be basically obsoleted by optical in the very near future. I sat down with the person who is now our Chief R&D Officer, and I just tried to learn about all the connections in the data center, and the vast majority were copper, and then the long ones were optical. We decided we were going to go down the copper path, because I wanted to grow into a large market.
I don't want to wait for some sort of market to develop. That's how companies die in Silicon Valley: the market doesn't happen soon enough. It was a pretty good decision. We actually conceptualized AECs in 2014, in the first slide I did. The concept was, if copper is doomed, how do we extend the life of copper? There's this shiny object that people talk about, different forms of optics really obsoleting copper. I think we're finally landing on the fact that reliability matters. If you can use copper, even the largest companies in the industry, NVIDIA, say you will use copper if you can, because it's just fundamentally more reliable and lower power. I think that argument's over. It becomes a physics argument on when the physics break down. The argument right now is around 400 Gb.
The 200 Gb per lane, that horse is out of the barn, so to speak. It's going to be the same ecosystem. It's going to be copper and optical. For 400 Gb, the jury's out. I'm not going to bet only on one. I'm going to have solutions for optical and I'm going to have solutions for copper. There's different ways, like you said, you can change the modulation to PAM8. You can basically put more bits per symbol to get the throughput. That'll play out long term. I don't lose sleep over it. It's not something that really occupies my thought. It's going to take a long time to get to 400 Gb. I got a bunch of questions today about, hey, the 400 Gb optical market seems to be like carrying on way beyond what anybody thought.
We are talking about a transition to 800 Gb ports. Two years ago, I was told 1.6T ports were going to wipe out all the 800. Everything takes longer than we expect. We are big believers that the 100 Gb per lane, or the 800 Gb port, market is going to be a very large market for several years. When 1.6T does take off, which will take some time, that is going to be a very, very long generation prior to getting to 3.2T. I do not know if that answers your question, but it has been a raging argument for the 12 years of my history here. The bottom line is copper continues to win.
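On the modulation point raised in the question, here is a short sketch of how bits per symbol and baud rate combine into per-lane throughput; the baud rates are nominal illustrations and FEC/coding overhead is ignored.

```python
import math

def lane_rate_gbps(baud_gbd: float, pam_levels: int) -> float:
    """Raw per-lane rate = symbol rate x bits per symbol (no FEC or coding overhead)."""
    return baud_gbd * math.log2(pam_levels)

print(f"PAM4 at ~106 GBd: ~{lane_rate_gbps(106.25, 4):.0f} Gbps/lane (today's 200 Gb lanes)")
print(f"PAM4 at ~212 GBd: ~{lane_rate_gbps(212.5, 4):.0f} Gbps/lane (one path to 400 Gb)")
print(f"PAM8 at ~142 GBd: ~{lane_rate_gbps(141.7, 8):.0f} Gbps/lane (more bits per symbol)")
```

The trade-off is the one described above: doubling the baud rate keeps two bits per symbol but stresses the channel, while moving to PAM8 packs three bits per symbol at a lower symbol rate but with tighter signal-to-noise margins.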
I think Broadcom on the recent earnings call talked about networking growing 170% year on year. I think networking in total is probably up to a $15 billion market. You guys are sub billion. We'll see.
Yes.
Any other questions, I guess? Great. I think that brings us to the top of the hour.
Thank you very much.
Thank you, Bill.
Yep.