Hey, folks, thank you very much for joining us. We're coming to you live today from the Optical Fiber Conference, OFC. This is Simon Leopold, Raymond James, Data Infrastructure Analyst. I'm pleased today to be hosting Bill Gartner of Cisco, who runs the Optical Business Unit over there. We've got some prepared questions. We're going to go through in sort of a fireside chat format today. Bill, I think we'll have a lot of interesting thoughts on what's going on in the industry. He's been around for a bit. Bill, why don't we start off? Just set a little bit of context for our audience so folks understand your scope of responsibility and your fit within Cisco.
Yeah, thanks, Simon. First of all, thank you for having me. Let me just start with a forward-looking statement that I'm encouraged to make by our investor relations team: I will be making forward-looking statements, and our actual results may differ materially from those forward-looking statements and are subject to the risks and uncertainties found in our most recent 10-K and 10-Q. With that, I have responsibility for what you can really think of as three separate businesses within Cisco. One is the Optical Systems Business. That's the traditional DWDM business that's used to carry signals over long distances across a city, across a country, or subsea, typically using chassis-based solutions that include ROADMs and amplifiers.
The second business is our optics business, which are the transceivers that are used by switches and routers inside a data center or inside a central office or in a campus environment, typically less than 10 km for those applications. I have responsibility for Acacia, which was an acquisition we completed just about four years ago. Acacia provides the underlying technology for our optical systems, as well as pluggable coherent technology that we use in many applications. We will be talking more about that.
Great. Bill, the trade show is just about to kick off. We attended the executive forum yesterday. What do you think will be the hot topics at the show? Do not simply say AI. We want to get a little bit deeper than that. What do you see as the hot topics? What will Cisco be highlighting?
Yeah, so certainly AI is sort of the overarching theme behind a lot of the things that are driving capacity in customer networks, whether it's hyperscalers or service providers, and ultimately enterprise networks as well. I think one of the hot topics is co-packaging. There seems to be renewed interest around co-packaging, and we can talk a bit about our views on that. Cisco has been very deliberate in advancing the idea that pluggable coherent optics can replace transponders in many, many network applications, including data center interconnect, metro, and now long haul. We are showcasing a 400 gig ultra long haul optic that can be used in applications up to 3,000 km, as well as 800 gig ZR and 800 gig ZR+ optics that really advance the state of the current 400 gig optic used in our inter-data center and metro applications.
We're also showcasing a new optical line system that's really optimized around point-to-point metro applications, leveraging our NCS 1014 chassis, which has historically been used to host transponders. We've now expanded that to include line system components like amplifiers and mux/demux. We'll also be showcasing client optics, at 400 gig and 800 gig, that are optimized for AI applications.
I definitely want to get into some of the more technical discussions. Let's start off with a little bit about what's going on maybe near term in the marketplace. There's been a lot of noise about the AI builds by the large hyperscalers being lumpy, some projects being deferred, what's being slowed down, accelerated. What is your take from somebody in the trenches as to the level of activity?
For the first half of our fiscal year, which began August 1st, the demand has been exceptionally strong and mostly driven by hyperscalers. I would characterize that demand as very lumpy. It comes and goes. We have seen huge upswings and huge downswings in that demand over time. This most recent period was almost all upswing. We do see some of the hyperscalers take a breath at this point. Overall, in aggregate, I would say the demand is still very high. They are building out data centers and connecting data centers for AI applications, which has an impact on things like our ZR optics at a rate that we have just never seen before. We are pretty bullish on the near-term prospects for continued growth. I would say we are also cautious.
We've been caught up in cycles before where things just slow down without any particular explanation. The real challenge for a company like Cisco is how much inventory do you build and carry in anticipation of that demand continuing to grow and being ready when that demand is there versus scaling back and trying to take a more conservative position.
Maybe this is a little bit out of scope for you, but just wondering if you've got thoughts on what's creating lumpiness. Is it a product transition where we know NVIDIA is going from Hopper to Blackwell, or is it more sort of macroeconomic questions, or is it simply managing large projects takes time?
I think it's all of the above. In some cases, it's a given customer may be waiting for fiber build-out, for instance, or they can't get components that are required to build out the data center. And there's a number of things required, obviously. You have to have servers. You have to have GPUs. You have to have networking. You have to have optics. If all those things don't line up, they're going to tap the brakes a bit. When all those things are lining up, they're hitting the gas. It can be a number of things. I don't think it's a macro issue, though. They've all signaled very strong CapEx plans for the year. I don't think there's a macro issue that we would see here.
I do think that there can be things like fiber build-out or specific supply chain issues that might be slowing down in some cases.
There was an interesting disclosure Cisco made on the last earnings call, that about half of the $350 million of AI-related business was optics. I sort of feel like that's not been well appreciated. Maybe help folks understand how optics fit into that: what's the AI business, and what's the optics piece of it?
I'd say there are three categories of optics that I would consider in that. One is the obvious client optics that are used in switching and networking. For some applications, when a hyperscaler has indicated to us that it is an AI build-out, and in some cases they call it their AI WAN, as an example, it might be an optical system capability that we're delivering to that customer, or a ZR class optic. All of the hyperscalers, I would say, at this point are deploying ZR class optics, either ZR or ZR+, 400 gig for the most part. When they have signaled to us that it's part of an AI build-out, we would include that as part of the AI business that we've signaled to the street will do a billion dollars this fiscal year.
It includes really any of those three, the transceiver optics or optical systems or ZR. It has to be part of an AI build-out. We would not consider that part of just a normal WAN development for any of the customers.
Now, Cisco did a product press release last week that caught my attention. I want a little bit of help kind of putting this in context and significance. It was a new 3-nm 1.6 Tb PAM4 DSP, 200 gig per lane. I do not know that people typically associate Cisco with those kinds of products. Maybe give us a little bit of background and what is the strategy here.
Yeah. I think one of the things that's underappreciated about Cisco, especially in the optics area, is that we have one of the biggest optics businesses on earth. That's largely due to the fact that we serve all of our customers, whether it's an enterprise customer or commercial, public sector, service provider, or hyperscaler, with optics that are sold as part of our routing and switching sales. We also sell optics to customers that may have chosen a third-party vendor, whether it's a whitebox or a competitor; in that case, we're happy to offer optics to that customer. We should talk a little bit more about that buying behavior as well. We develop some of those optics in-house, and we source some of those optics from the industry. What we announced last week was really a 1.6T DSP that we're developing.
That's actually being developed as part of the Acacia team, who has very significant experience in developing DSPs, obviously. We will be offering that DSP to module manufacturers who may want to incorporate that into their module and sell it on the street. We will also be including that in our own optic that we will be offering to customers.
Just to clarify, I believe this is 200 gig per lane inside the data center, not wide area network.
That's correct. This is not a ZR optic. This is a more conventional short-reach optic that would be used inside the data center.
Let's pivot the discussion to the proverbial elephant in the room, co-packaged optics. I don't think you could swing a dead cat and not run into that conversation. Let's start off at a very high level: what's your take on the technology?
I want to offer somewhat of a balanced view here. I think we've been in a froth cycle over the last month or so around co-packaging. Cisco demonstrated a co-packaged solution at OFC two years ago, at OFC 2023. It was a 25.6 Tb switch that included co-packaged optics. I think Broadcom demonstrated one at the same time. I would say the industry response, and by the industry I mean primarily customers, was fairly muted, muted to negative. There were a lot of reasons why customers looked at that and said, "That may not be something I really want to consider deploying." Let me just outline a few of those reasons that became apparent to us at that time.
One was that, if you think about today, there are a number of silicon providers: Broadcom, Cisco, NVIDIA, a couple of others out there. And there are many optics suppliers for the industry. Our customers benefit from the fact that there's a multi-vendor optics community they can leverage to make sure they have supply chain integrity, diversity, choice, and the ability to negotiate. When you build a co-packaged solution, you're effectively consolidating the value of both the silicon and the optic into one monolithic structure, which really deprives the customer of that choice on the optic. I think many customers looked at the co-packaged solution and said, "You know, there are power benefits." That was a primary argument for co-packaging: there's a power benefit that's delivered. But that power benefit is probably not worth trading off the multi-vendor supply base that I have in optics.
I think that was one issue. The other is that optics today, in many applications, are pay as you grow, meaning they buy a switch with 32 or 64 ports on it, and they populate that over time. They take advantage of the fact that they can defer the cost of that over time. That goes away with co-packaging because all of the optic is basically delivered day one. I would say there are applications where customers mix and match optics on a given switch or router. It's not one monolithic optic like a 2 km or 500 m optic. There's mixing and matching depending on the infrastructure and depending on the needs. You might have ZR optics and short-reach optics, for instance, in one router. That goes away as well.
At that time, I think the feedback we got from the industry was we should pursue other ways to reduce power while preserving a multi-vendor optics environment. Other ways included things like looking at LPO and LRO, ways that we can focus on the optic and take power out of that solution but still preserve the multi-vendor optics base. That was kind of the state of the world in 2023. NVIDIA, just last week, announced that they're going to be deploying co-packaging as part of their InfiniBand switch and ultimately as part of their Ethernet switch. I think that's kind of juiced up the industry in terms of, "Hey, what's happening here?" I'm not sure that the fundamentals that I outlined as some of the objections for the industry have really changed.
I think it's going to be interesting to see what the customer take is for this and whether the power savings that are delivered are really worth trading off some of the things that they would get otherwise. One other thing I would say is when we look at power savings, I think you really have to look at the whole solution, like the whole AI solution, whether it's within a rack and a scale-up or whether it's across racks and a scale-out. My somewhat cynical analogy is if you reach into your refrigerator and you replace the incandescent light bulb with an LED light bulb, you can claim 70% power savings on the light source for your refrigerator, but you're not going to change your electric bill.
I think there's some of that that we have to look at in this context as well as to say, yes, we can get, say, 30% power savings if you look at the switch plus the optic. What does that really represent as a part of the total power that's being consumed in a GPU structure? There's an argument that says, "Look, every little bit counts, and we should get every little bit we can." That's a fair argument. I think you have to look at what the trade-offs are. I would say that we're in a bit of a state of, "Let's see how this plays out." I think we'll see some trials. We'll see some customers dabble with this. There may be a compelling event at some point in the future where co-packaging becomes the only option for us.
I don't think we're at that compelling event yet.
Maybe step back a little bit for an audience that's financial analysts, not technologists. What's going away when we do CPO, and how do the acronyms CPO, LPO, and LRO compare? Walk us through some of the basics here.
In co-packaging, first of all, in today's world, you have a switch or a router that fundamentally has a piece of silicon in it. That silicon has to deliver traces to the faceplate where there's pluggable optics that would be plugged in as the customer needs the given capacity. A typical switch or router has 32 or 64 ports on it. Each one of those ports accepts a pluggable optic. That pluggable optic is delivered by suppliers in the optics industry. The switch or router is delivered by guys like Arista, Cisco, Juniper, Nokia, others. We, of course, also deliver optics, but customers have a choice of using our optics or some third-party optics. That's always a choice that they have. One analogy to think about is it's a little bit like if you have, for instance, an HP inkjet printer at home.
If you buy your ink from HP because you've read the warranty, and the warranty says that if you buy your ink from somebody else your warranty is void, then you look like the sort of very conservative customer of ours who says, "I'm going to buy my optics from Cisco, and I'm going to buy my switch from Cisco, because Cisco is going to take care of me. And if there's any problem, I know it's going to be taken care of." If you buy your ink from a third party, maybe because you use a lot of ink and you really want to save a few pennies, then you look like the customers that are saying, "You know what? I'm going to buy optics from a third party because I'm pretty sure it's going to work, and I'm not too worried that things are going to break." And if you buy ink by the barrel and fill your own cartridges, you look like some of our hyperscaler customers. That's kind of the analogy I would use for the optics world. We have customers that buy directly from us, customers that buy from third parties, and customers that try to build their own as well. That choice effectively goes away with co-packaging, because co-packaging suggests that you take the guts of the optic and physically package it with the silicon. It now becomes one monolithic structure that's mounted on the switch or router line card.
The only thing coming out of the faceplate are fiber connectors, but there's no more pluggable that's part of that solution.
The LPO, LRO options?
The LPO and LRO options are basically playing a bit with some of the innards of the optic to say, "If we removed some of those pieces and asked the switch silicon to work a little harder, could we reduce power?" That is really what LRO and LPO are all about: shifting some of the problem that is dealt with in the pluggable optic today into the switch silicon, and delivering an end-to-end solution through better switch silicon performance and a little bit better optic performance, while removing things like the DSP that might sit in the optic today. In doing that, you can achieve some pretty significant power savings, maybe not quite as much as co-packaging, but pretty significant.
I guess one of the things that intrigued me out of NVIDIA's announcement a couple of weeks ago was that initially their CPO would run on an InfiniBand switch. In my circles and your circles, there's been this debate about AI clusters migrating from the InfiniBand protocol to the Ethernet protocol. Now that we consider the fact that CPO will initially run on an InfiniBand switch, what does that tell us about this evolution, transition, competitiveness?
Yeah. I think, first of all, not surprising. NVIDIA has got a large embedded base of InfiniBand switches. It is not surprising that they are going to leverage that first. The other thing I would remind people is we announced a couple of weeks ago a partnership with NVIDIA where NVIDIA is actually qualifying and including Cisco Silicon as part of its reference architecture, Cisco Silicon and Optics as part of its reference architecture. We are the only silicon provider other than NVIDIA that will be standardized as part of that reference architecture. Over time, I would expect that customers are going to migrate to Ethernet. Ethernet is much more widely deployed than InfiniBand. We believe Ethernet has a much longer life in AI applications. Over time, I would expect the industry is going to see a much bigger shift to Ethernet.
That would include Cisco Silicon as well as the NVIDIA Silicon.
Maybe the partnership's intriguing. What is sort of the rest of Cisco's play in whether it's LPO, LRO, co-packaging? What else are you doing in this context?
We are, in fact, demonstrating capabilities here at OFC for delivering optics into the AI stack, whether it's a scale-up or scale-out solution. That includes, for instance, 400 gig optics that would go into the NIC. It includes 800 gig optics that would go into a switch. That will be used as part of a scale-up or scale-out. It would also be used in an enterprise AI application. I think that's the big thing to come still in AI. It's part of the reason why Cisco has partnered with NVIDIA, as NVIDIA brings a lot of the technology in the form of GPUs. Cisco brings the access to the enterprise customer base. With that partnership, we expect to be delivering AI solutions for enterprises that want to have an on-prem solution.
Obviously, that'd be a much smaller solution than what a training model looks like, but it will be a highly optimized AI solution for customers. We've called that the AI Hyperfabric. That will be used for enterprise customers that want to run an inference model on-prem. That would include our networking, our optics, our software managing that, and then NVIDIA GPUs and NICs.
One of the topics I feel like has been glossed over in these announcements is how does one manufacture CPO? Maybe you can help folks understand what are the hurdles to bringing this kind of technology to market?
Yeah. I think that is a bit underappreciated in a lot of the enthusiasm around CPO. My view is that manufacturing is going to be the major barrier to CPO really penetrating the market in a significant way. If you think about today, the industry around silicon is very mature. There's an industry that knows how to package silicon: how to cut up a wafer and then package it, whether it's for an Intel CPU or for switch silicon. There's also a mature industry around optics, making transceivers and dealing with some very specific optics issues like fiber attach: how do you attach a fiber to a piece of silicon? That's a process that requires a lot of skill and a lot of development, and it has a certain yield associated with it.
When you bring those two together, we have to figure out where the ecosystem is that does both of those at once, because now we're talking about packaging silicon and optics together and dealing with all of the issues of packaging silicon plus all of the issues of packaging an optic. When we talk about large-scale CPO, we're talking about something that would include between 2,000 and 4,000 fibers. The manufacturing challenges, whether it's the fiber attach problem or the process development, have to be solved at very, very high quality, much higher quality than what we have in the industry today.
We'll need new connectors to allow for a modular approach to manufacturing, because you don't want to build one of these things, go test it, and then find out that it doesn't work; you need to build it in a modular way. There are connectors that have to be attached to the optics chiplet, and then MPO connectors to actually deliver the optic to the faceplate. There's also fiber routing: how do you route all that fiber on this switch line card? So there are many, many manufacturing processes that really have to be refined here. I'd say the industry is at an early stage of that. History says this is going to be a multi-year challenge for the industry. This is not something that's going to be solved in a couple of months.
This is going to be a multi-year challenge to get the industry to a mature state where manufacturing can be done in a highly reliable way with very high yields. People look at sort of the PowerPoint slide and say, "Well, the cost should be better with this. Reliability should be better." You can kind of wave your hands and convince yourself that that's the case, but that all presumes that we've overcome all these manufacturing challenges. Reliability can be better, but it would be much, much worse if we have problems with fiber attach or problems with how the connectors are mounted. All of these issues really are still ahead of us, I think, as an industry.
How do you square that with NVIDIA's timeline of suggesting their InfiniBand switch will be out before the end of this calendar year?
I think you should probably ask NVIDIA that. I would say, generally speaking, we've built co-packaged solutions. They can be built. The question is, can they really be built in volume, at scale, and with the appropriate reliability? Building tens or hundreds of something is very different than building thousands or millions of something. I think that's really where the challenge in the industry will be.
Maybe take this to the implications for the lasers. There's this argument that lasers are the most likely thing to fail, so you want to keep your lasers separate. How are they getting around that?
The basic architecture of co-packaging will have an external laser source. It will be a pluggable laser or a set of lasers that plug into the faceplate and deliver the laser to the co-packaged solution. That does remove the most likely failure element: of the various piece parts, assuming it is silicon photonics on the co-packaged optic and silicon as part of the switch, the laser is probably the most likely one to fail. Again, that presumes that we have connectors and we can mount these things in an appropriate way that is robust. I think the architecture of this will rely on these external laser sources. NVIDIA, for instance, I think had 18 of those in the switch they demonstrated. That will help convince people that the reliability can be good.
Again, we have to overcome all these yield and manufacturing issues to make that really true.
What is your take on the implications for your transceiver business then? If we start doing co-packaging, we have fewer transceivers. Do transceivers go away? Do some transceivers go away? What is the life cycle?
Yeah. I think the transceiver business is going to be a robust and growing business, even with co-packaging. If you look at some of the industry analyst views, the pluggable market continues to grow. Co-packaging sort of sits on top of that as part of the growth. If co-packaging is wildly successful, it will eat somewhat into the pluggable market, but like 20% is the highest I've seen in terms of a forecast for that. I think we have to remember that co-packaging will find its home in those most dense applications that require a homogeneous set of optics fully populated day one. That tends to be in the hyperscale or large-scale applications. It does not occur on the WAN side. It does not occur in service provider markets. It does not occur in enterprise markets. I think pluggables will still have a very long life.
I guess maybe time undetermined, but what are the implications for your switches that would go into these AI fabrics?
Yeah. Presuming co-packaging becomes something we start to see customer pull for, as opposed to vendor push, that would mean we would start to build switches in probably two flavors: one that has co-packaged optics and one that has pluggable optics. The question is, do we hit a wall where the only way you can get to a certain scale is with co-packaging? There is a potential for that wall to hit us. If you had asked me two years ago, I would have said that when we get to 200 gig SerDes, we do not really know if we can support pluggables. Today, that is commonplace. If you had asked me a few months ago, I would have said maybe it is 400 gig SerDes.
You and I were at an executive forum yesterday and saw many suppliers out there saying, "Look, we're going to solve the 400 gig SerDes problem." I do not know if that wall is really right in front of us or not. If I had to draw a line today, I would say 400 gig SerDes is probably the first point you would see CPO really deployed in any volume. The question is, is it really a hard wall or are there other reasons why CPO is being deployed for customers? I also do not think a campus customer, an enterprise customer is going to typically require the same scale that a hyperscaler would.
It sounds to me like you've given a good argument for why LPO and LRO are a good compromise for the industry. But where's the market? Sitting at OFC, you wouldn't think that's the case, even though it seems like a logical argument. How would you make that argument? Where do you think the reality is?
I think the reality is that we will see LPO and LRO solutions at 100 gig SerDes, and we will likely see them at 200 gig SerDes. I think beyond that, LRO and LPO start to have some really significant challenges with things like signal integrity. I also think I'm a complete believer in the innovators in the industry. When people see what appears to be a hard wall ahead of us, I have faith that this industry innovates. Power is a huge, huge issue for our hyperscaler customers in a given application. It is probably the most significant issue. It's not the most important issue for many other applications in the networking world. We have to solve power problems. We have to find ways to incrementally improve on power. I think the industry has got a lot of thoughts on that.
Things like liquid cooling will play a role there as well. There will be other things that come into the market that help to reduce the power problems that we're facing today.
I want to pivot to a different technology that I don't feel like gets much attention, but it's this concept of an optical switch. I think Google's been public about building its own. We've seen a number of companies enter the space. Maybe just first sort of set the foundation. What is an optical switch and how are they being used?
Yeah. I think the easiest way to think about optical switching is a patch panel, where you would manually pick up a fiber and connect it into the patch panel so that you can connect it to another fiber. These manual patch panels exist in customer data centers and labs today. That is how we move traffic around: somebody walks up, picks up the fiber connector, and moves it to another port. It is a physical connection that is moved. An optical switch or an optical cross-connect effectively automates that. It uses MEMS technology or, in some cases, silicon photonics or even robotics to actually effect that switch. It is typically a slow-moving dynamic, but it is effectively automating what was done with a manual patch panel.
I will say I have two patents in optical cross-connects, both of which are expired, which tells you how long this technology has been around looking for a real problem to solve. Historically, it's been around for quite some time, but it hasn't proven economical for a given application. I think today we are now seeing some applications where optical circuit switching may have a role to play. I think it's early stage. Google has probably been the most vocal. I am certainly aware of others that are now either contemplating deploying this or are in early stages of deploying it.
I guess one of the things I've been struggling with is I feel like it's maybe a misnomer to call it an optical switch rather than to call it a cross-connect or an automated patch panel. I guess there's some nuance to that. My understanding is they're not switching every packet.
Yes, that's very important to understand. It's called an optical circuit switch really to harken back to literal circuit-switched days, where a connection was basically nailed up and stayed up for a phone call; the switch path was put in place for the duration of that call. That's really where the name came from. An optical cross-connect is the same sort of thing: the idea is that a connection is made between two ports, and that connection stays. You can think of it as literally a passive connection between port X and port Y. There's absolutely no packet processing taking place there. In fact, there's very little insight into what's going on there. You're effectively connecting one port to another port physically, and the light passes through.
There might be some level of monitoring on that, like what's the signal level of the light, but there's no idea what's going on inside that wavelength. Nobody's looking at the packets to try to examine what's happening. It is a slow switch typically. It is not a fast-moving switch at a switch of packet speed, for instance. It has got very little insight about what's really being carried on that wavelength.
Are there use cases that can either flatten the network or reduce the number of actual electronic switches, whether based on Silicon One or Broadcom's Tomahawk? Can we reduce the number that somebody needs with the use of optical switches?
I think so. I think there are potentially some use cases, but I would say it's very early stage, and I don't think there are a lot of them. I will be surprised to see this become a huge portion of the switching market. Google's case, I think, is probably the most well-documented and advertised at this point. That deployment is really to improve the reliability of their infrastructure. They've got a unique AI infrastructure, and this is somewhat of a surprising point for a lot of people: when you have thousands of GPUs interconnected with optics, something is going to fail, at least once a day. In AI, unfortunately, the challenge is that when something fails, everything stops. You basically have all these GPUs idle until you can fix that problem. That's a very expensive problem.
Google's approach really is to say, we're going to have effectively a spare bay of equipment. If something fails in one bay of equipment that includes GPUs and optics and networking, we're going to switch all of that traffic over to another bay and restart everything, and we're doing that with an optical cross-connect. The alternative could have been to send somebody into the lab to start moving all the fiber connections from one bay to another bay. That's effectively what they're doing with this optical switch, and they effectively improve their utilization by doing that switching. That is one application.
We've seen other customers begin to think about a spine-leaf architecture similarly: if a leaf fails, you could effectively switch to a spare leaf and bring on a new leaf with an optical switch, creating the same infrastructure, but with a spare leaf now passing through this optical switch. Again, it's kind of a reliability argument. There's some thought that says, well, for AI workloads, you set up the workloads between GPUs and say, for this workload that might run for hours or days, we're going to pass traffic from port X to port Y. And that's all the switch at this level in the network is going to do: pass traffic from port X to port Y to get from this set of GPUs to that set of GPUs.
I think people are looking at that and saying, do I really need an Ethernet switch to do that? Or can I just have something that blindly passes everything coming from port X into port Y? That could be the role of an optical circuit switch. Again, it's not doing packet processing. I don't want to call it a dumb switch, but it's a dumb switch in the sense that it's not examining any of the packets. I think that's an application where it could replace an Ethernet switch. Again, I don't see that putting a big dent in the Ethernet switching market, but this is early stage for this technology. There are a number of startups playing in this game, and there are some established players playing in this game.
I think there's a fair amount of investment going into this. We could be at an inflection point where this technology actually starts to find a home in various applications.
Another topic that's been bandied about is the idea that Coherent optics, sort of the classic Acacia wide area network technology, is making a move to find applications inside data centers. There's been a bit more talk about that. I've struggled with it on just the basic economics. The price points of Coherent are much higher, for good reason: they do more and send signals further. Can you help us understand, from the demand side, what makes this more interesting? And then from a technological side, how do you take a relatively expensive technology and make it cheaper?
Yeah. Let me geek out for one minute here and explain why people make that claim. First of all, dispersion is an impairment that occurs in the fiber that limits how far we can send a signal. It's a characteristic of the fiber, and it limits the distance that we can send a given signal. When you double the bit rate, say go from 100 gig to 200 gig, the penalty for dispersion is more than double. It's actually four times. Theoretically, at 100 gig, let's say you could go 10 km, just as an argument. If you double that to 200 gig, your penalty for dispersion is four times as bad. That means at 200 gig, you might only be able to go two and a half kilometers.
Every time you double the bit rate, you're going to have a factor of four impairment in terms of the distance. The argument for Coherent says, our approach inside the data center has been to double the bit rate, double the bit rate, and then we got to add a few tricks like PAM4. At some point, we're going to get to the point where we want to increase the bit rate, say from 800 gig to 1.6T or 1.6T to 3.2T. That dispersion penalty becomes so bad that I can't send the signal over a conventional distance. A conventional distance in the data center is 2 km or maybe 10 km for some applications. What worked at 10 km, for instance, at 800 gig is probably not going to work at 1.6T. Maybe the best we could do is 2 km.
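The scaling argument above can be sketched numerically. This is an illustrative model only: it assumes the chromatic-dispersion-limited reach of an intensity-modulated (IMDD) link falls with the square of the bit rate, anchored to the "100 gig goes 10 km" figure used in the conversation; real reach limits depend on fiber type, modulation, and receiver design.

```python
# Illustrative sketch: dispersion-limited reach scaling for IMDD links.
# Assumption (from the conversation's example): reach ~ 1 / bit_rate^2,
# anchored at 10 km for 100 Gb/s. Real-world limits vary with fiber,
# modulation format, and receiver design.

def dispersion_limited_reach_km(bit_rate_gbps, baseline_rate_gbps=100,
                                baseline_reach_km=10):
    """Reach falls with the square of the bit rate relative to a baseline."""
    return baseline_reach_km * (baseline_rate_gbps / bit_rate_gbps) ** 2

for rate in (100, 200, 400, 800, 1600):
    print(f"{rate:>4} Gb/s -> {dispersion_limited_reach_km(rate):.3g} km")
# 100 Gb/s reaches 10 km; 200 Gb/s only 2.5 km, matching the example above.
```

The quadratic penalty is why each doubling of the rate costs a factor of four in reach, and why the 2 km and 10 km targets become hard at 1.6T and 3.2T.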
What do you do now for the customer who says, I really need 10 km in my data center? Or what worked at 2 km might not work at 2 km anymore; now we're talking more like 500 m. What do you do? The answer is you've got to add some more tricks to solve that problem. We're now out of the bag of relatively easy tricks; PAM4 was one of them. Now we have to start relying on some of the tricks that we played in the Coherent world to send signals over very long distances. In Coherent, if you remember 10, 15 years ago, people were kind of stuck at 40 gig and said, I don't know if I can send 40 gig over a reasonable distance in the network. Now we send terabits.
We are relying on this Coherent technology, which does very sophisticated signal processing on the optical signal. The argument is that as the bit rate increases, the penalties are going to continue to increase. We may not be able to send the signal over the distance required in the data center without applying some of the tricks that exist in the Coherent world. You have to bring some of those tricks into the short-reach optics. Now your question is, the economics are not going to make sense for that. That is true. But the question is, do you want to go to 3.2T or not? At some point, the physics is going to work against you. The only way you are going to get there is by increasing the sophistication of that solution.
Now, that doesn't mean you have to bring in a full Coherent solution; today that can send a signal thousands of kilometers, and we don't have that need inside the data center. People talk about Coherent Light as an example of saying, let's bring some of the Coherent technologies into this solution and apply them inside the data center so that we can go 2 km or 10 km. I think that's where we're going to see some industry investment: figuring out what elements of the Coherent solution you could bring in minimally, so the cost doesn't go too far out of line, but you still solve some of these impairment problems.
Is there a way you could help us understand the relative price points?
Eight hundred gig inside a data center in a PAM4 device compared to an 800 gig ZR pluggable, what's the relative price difference?
It's factors of difference. I would say anywhere from 4-10X. It's a big price difference; it's not like a 10% difference. That scares people. How do you get that savings while you're still trying to solve the problems inside the data center?
I guess one of the classic arguments I hear is that it's about volume, and I'm struggling with whether that will apply here. I think of the analogy of high-definition televisions. At first, high-definition televisions were $10,000 apiece and 10 people bought them. Now they're $200 and everybody has one; who wouldn't buy it? Are we in a situation where Coherent technology is expensive because it's a relatively low-volume device? Is it a question of, if you ramp your volume, then it's price competitive? How do you cross that chasm?
I think certainly volume is a huge factor in the cost. Yes, when you're talking about selling tens of thousands or 100,000 of something versus millions, the cost curve changes pretty significantly. I think there'll still be a premium for a Coherent solution, but to the extent that it's adopted widely and we do get those volume benefits, the cost will come down. That factor of difference that I mentioned today will certainly get compressed over time. There are going to have to be early adopters for that. Somebody's going to have to start deploying and paying that premium. I think we will see compression of the cost curve.
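The volume argument can be made concrete with a standard learning-curve sketch. The ~20% cost reduction per doubling of cumulative volume (Wright's law) is an assumed rule of thumb, not a figure from this conversation; it simply illustrates how moving from tens of thousands of units to millions compresses a multi-X price premium.

```python
# Illustrative learning-curve (Wright's law) sketch. Assumption: unit
# cost falls ~20% each time cumulative volume doubles -- a common rule
# of thumb, NOT a figure quoted by the speakers.
import math

def unit_cost(cum_volume, first_unit_cost=1.0, learning_rate=0.20):
    """Wright's law: cost = first_unit_cost * volume ** log2(1 - learning_rate)."""
    b = math.log2(1 - learning_rate)  # progress exponent, ~ -0.322 at 20%
    return first_unit_cost * cum_volume ** b

# Relative cost after scaling from 10k cumulative units to 1M:
ratio = unit_cost(1_000_000) / unit_cost(10_000)
print(f"cost at 1M units is {ratio:.2f}x the cost at 10k units")
```

Under that assumption, two orders of magnitude more cumulative volume cuts unit cost to roughly a quarter, which is the same scale as the 4-10X premium discussed above getting compressed.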
What's your sort of best prediction of how or when that happens? What's the timeline?
I think we'll see 1.6T at 2 km. As an example, at 800 gig we saw OIF define a Coherent solution and an IMDD, sort of traditional, solution for a 10-km application. At 1.6T, I think 2 km will be that breakpoint. If you need to go beyond 2 km, you're going to have to bring in additional technology like Coherent. We will see a Coherent Light, I think, for 10-km applications at 1.6T. Then at 3.2T, 2 km could be that breakpoint.
You see that as like two years away, three years away?
Two years away, I would say, yeah.
I want to pivot the discussion to the wide area network. We spent most of the time on the data center because that's what people care most about. I think there's been this debate about, well, okay, we get the AI thing, we get the clusters. What are the implications for wide area networking?
Yeah. Certainly the hyperscalers are redefining what was a conventional wide area network into what is now the AI WAN. It is mostly those that are using inter-data center communication to effectively create a larger-scale AI network. We have seen dramatic uptake there, as you alluded to earlier; even in our first half results, we have seen a lot of optics being used in that application. More broadly, as AI applications in enterprise take hold, I think our service provider customers are going to begin to see much more traffic, whether because they are hosting the AI application for those customers, or because the AI application is on-prem and there has to be some reach back to a data source. We are going to see different traffic patterns.
We do expect that our service provider customers are going to see growth in their networks as well as AI starts to take hold in enterprise applications.
You have been a proponent of an alternative architecture or technology known as ZR. This is the idea of pluggables used in the wide area network. Maybe help us understand a little bit of the argument of the economics. I know we have been down this path before, but a little bit of a refresher. Where does the industry stand today on the transition from platforms to pluggables?
Yeah. I would say, first of all, we acquired Acacia four years ago this month. It's been a terrific acquisition for us. Acacia at that time was delivering technology to its customers, including Cisco, for optical system solutions, traditional transponders. Acacia had also innovated in the area of taking that technology and putting it into pluggable form factors. We saw a trajectory where that pluggable would effectively be the same form factor as that's used inside the data center. There was no custom form factor required to support Coherent. We felt that would be a tipping point for customers adopting a Coherent pluggable solution that would go directly into a router and replace a transponder. The way you can think about this is if, let's say, you have to send an optical signal 500 km.
One way to do that is you buy an optical chassis that has a transponder that can send that signal 500 km. Now you can actually take a pluggable optic, put it in the router, and directly send that signal 500 km, eliminating the chassis and eliminating the transponder. When we started to do some economic analysis on that, we saw that a transponder typically would be 200-250 W, and a pluggable was typically 20-25 W. Customers would get a 90% power savings by eliminating the transponder, eliminating the chassis, and all the associated hardware that goes with it, like controllers and fans, and replacing that with a simple pluggable that goes directly in the router.
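The 90% figure follows directly from the power ranges quoted above. A quick back-of-envelope check, using the midpoints of those quoted ranges:

```python
# Back-of-envelope check of the ~90% power savings claim: replacing a
# 200-250 W transponder (plus chassis) with a 20-25 W coherent pluggable.
# The wattage ranges are from the conversation; midpoints are assumed.

transponder_w = (200 + 250) / 2   # typical transponder power draw, W
pluggable_w = (20 + 25) / 2       # typical ZR pluggable power draw, W

savings = 1 - pluggable_w / transponder_w
print(f"power savings: {savings:.0%}")  # prints "power savings: 90%"
```

Note this counts only the transponder itself; eliminating the chassis, controllers, and fans pushes the real OpEx savings further in the pluggable's favor.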
On top of that, we felt that, largely due to the volume curve you alluded to earlier, as customers began to adopt this pluggable technology, we would see it become a much, much more cost-effective solution than a transponder. Apples to apples, a pluggable is going to be cheaper than a transponder today. Customers get day one CapEx savings and day one OpEx savings: power savings and cost savings. Our prediction was that this would be the dominant deployment model for inter-data center communications, which is often 100 km, maybe a few hundred kilometers, and would penetrate into metro applications, where service providers would start to adopt it as well, and even into long-haul applications. I have to say we've been very, very pleased: that's exactly what we've seen now.
The top five hyperscalers are all deploying this technology in their inter-data center applications. That is where the volume really is. We have 300 service provider customers now deploying this technology as well in metro applications. We just introduced 400 gig for a long-haul application as well. My prediction is that this technology is going to continue to attract a lot of the industry investment. It is open. It is compliant with standards, which is new for the optical industry because historically it has been a closed proprietary solution. It gives customers choice where before they did not have choice. The economics are super compelling. I see customers over time, we will see this become the dominant deployment model for metro networks and even many of the long-haul networks.
I have the impression that this market had a pause during 2024. It sounds like it has sort of restarted and it is getting moving. Was that a supply chain-related event? Was that just customer concentration? What changed?
I think the service provider market in general has been slower to recover from what was a supply chain issue. Also, many of our customers invested in 5G infrastructure, did not really see a return on that investment, and are looking for the next thing that would stimulate them to build a new network. In many cases, it's just a technology refresh; I think we're probably into a normal technology refresh cycle here. There's no 5G-like driver out there right now. AI could be a stimulant for service provider customers that begin to host AI applications for enterprise customers, or, as the enterprise market starts to build out on-prem solutions, we could start to see service providers having to build out more capacity as well.
I got a couple of questions by email that I just want to hit before we run out of time. One of them is: how do you see your position in optics helping to pull through sales of switches, routers, or your Silicon One? What's the attach rate and the relationship between the businesses?
I would say optics probably does not pull through the others; it is more the opposite. I think Silicon One helps us pull through optics sales. I would also say that Cisco has, without question, the most rigorous qualification process for optics in the industry. I would put our qualification process as the gold standard for the industry, in that we test optics against every single optical spec and every single electrical spec, under all operating conditions and all environmental conditions. We subject them to really brutal corner cases. We will put the optic into an environment that looks like a host, and we start to vary the voltage on all the signals coming from the host. We start to vary the skew, or the timing, on all those signals.
We do that at various temperatures so that when we put a Cisco label on an optic, it is absolutely going to be a robust, highly reliable solution for our customers. The one area where we are now seeing more uptake is that customers are, in some cases, buying our optics for third-party applications, where they'll say, "I'm going to buy Cisco optics and put them into a white box solution or a competitor product." That, I think, is a new dynamic for us. I think our qualification is giving customers confidence that when they put the optic in, it's going to be highly reliable.
When we think about the evolution towards these enterprise opportunities, I think it's really intriguing. You've got this partnership with NVIDIA, run NVIDIA on Silicon One. Do you see optics getting pulled in? Will enterprises be employing your optics and AI solutions?
Yes, completely. For AI Hyperfabric, which is Cisco's enterprise AI solution, that will include Cisco switching as well as Cisco optics as part of that infrastructure.
You made a comment very early on that I sort of glossed over, but somebody has come back to me on it. You used the phrase that hyperscalers might be taking a breather. You know the world I live in. Was that intended to reflect something you're seeing right now? Can you elaborate on what you meant by that?
Yeah, I should be very clear about that. Net-net, I think our business is up and to the right with hyperscalers. When you look at each one individually, you'll see different dynamics. Some are seeing high demand and stepping on the gas, and others might be waiting for something like a fiber buildout or certain parts that they're trying to get. On average, we're seeing uptake and demand here.
When we think about the business, I think we've seen the power of the purse for the hyperscalers. How do you think about the idea that they're crowding out telcos as customers?
I don't really see that. Actually, I think the telcos benefit from what the hyperscalers are doing, because if the telcos look to the hyperscalers and say, "I want to be on that cost curve," and they adopt the technology that hyperscalers are adopting, usually earlier, the telcos can get a significant cost advantage day one as they start deploying technology. They're typically following the hyperscalers' technology curve. Hyperscalers might be earlier in deploying 800 gig or 1.6T. To the extent that the telcos adopt that technology in pretty much the same form the hyperscalers do, they can get a very significant cost advantage that would have been hard for them to get historically.
We're just about out of time. I always like to close with a question like the following. What do you think is the least appreciated aspect of Cisco's optical business?
I would say, first of all, I think the Acacia business has been a terrific win for Cisco. I think we've been right on target in predicting how that technology would evolve over time and penetrate various segments, whether it's inter-data center or metro. The other is that I think our optics business is really a crown jewel within Cisco, in that it is a huge business for us across our enterprise customers, public sector, service providers, and hyperscalers. I think it has the potential to be a much bigger business going forward as we look at AI buildouts for enterprises as well.
Great. Bill, thank you very much.
Thank you, Simon.
A lot of fun as usual. Folks, thanks for joining us. This is Simon Leopold and Bill Gartner signing off from OFC.
Thanks, everybody.