Lumentum Holdings Inc. (LITE)

2024 Optical Fiber Conference

Mar 26, 2024

Kathy Tai
Head of Investor Relations, Lumentum

Hi, good morning, everyone. We're gonna get started in about three minutes, so please find a seat and get comfortable. There are Wi-Fi tent cards on the tables; you can log in using the passcode light 2024, and hopefully that will connect for you. It took me forever to log in this morning because the conference Wi-Fi is super busy, but anyway, good luck with that. Before we get started, I'll just have a few preparatory remarks. We will have about an hour's worth of presentation material. I have four executive speakers this morning, and for our sell-side friends in the audience, we'll post the slides at 8:30 A.M. so that you can't page ahead to the budget section, which I would. And then we'll take Q&A for half an hour.

So in just a few minutes, we'll start the webcast, and then we'll be ready to go. There are a few seats up here in the front if anyone wants to sit in the dangerous position next to me. I'm gonna get started in about two minutes; there are just a few seats left. It's great to see all the attendance at OFC this year. Okay, I think we're gonna go ahead and get started. So I'm Kathy Tai, the Head of Investor Relations at Lumentum. It's great to see all of you here, standing room only at our event this morning. We are working on getting some more chairs for the folks standing in the back.

I'm very happy to have our executive team here to talk with you today about how Lumentum is enabling the AI revolution. This is our second annual Lumentum Investor Technology Event.

Speaker 13

More chairs. More chairs are on the way.

Kathy Tai
Head of Investor Relations, Lumentum

It doesn't?

Alan Lowe
CEO, Lumentum

Maybe all the people in here are disrupting.

Kathy Tai
Head of Investor Relations, Lumentum

Excuse me.

Alan Lowe
CEO, Lumentum

Maybe it's all the extra wireless.

Kathy Tai
Head of Investor Relations, Lumentum

Oh, there we go. Okay.

Speaker 13

More chairs are on the way.

Kathy Tai
Head of Investor Relations, Lumentum

Okay. So I am told that more chairs are on the way. It's great to see the turnout for our event. So, first, we have a few safe harbor comments. We are going to be making some forward-looking statements; those statements are subject to risks and uncertainties, and you can read all about them on our website. We are going to post these slides at 8:30 A.M., so you can read the forward-looking statements then. And then I'll go to... okay, that one doesn't go ahead. Okay, today's speakers. We have four of our execs speaking with you today: Alan Lowe, who is our CEO, as well as Wupen Yuen, who is the President of our Cloud and Networking business unit. We have Dennis Tong, formerly the CloudLight CEO.

Now he is the Group VP and General Manager of our cloud networking platform. And finally, we'll have Wajid Ali talking about some of the financial particulars of our business. Later, at 8:30, we'll have a Q&A session, and Chris will join us on stage to help address the questions you may have. When it's time for Q&A, please step up to the-

Alan Lowe
CEO, Lumentum

-to market more rapidly to address the rapidly growing needs of the hyperscale data centers and the infrastructure providers. In addition, we're not going to talk too much about this, but Industry 5.0 is real. It's really taking advantage of what photonics can do for how things are manufactured. We're making lasers today for EV battery and solar cell manufacturing. But more importantly, with our imaging and sensing business we're really addressing machine vision. And all of the data that comes from our LiDAR applications and machine vision is being uploaded into the cloud to make things better for those industries themselves. So it's super exciting, but we're going to focus most of the time on AI and how we're addressing that rapidly growing market.

As I said, it's a rapidly growing market, and that's one of the reasons we did the acquisition of CloudLight. We wish we had done it earlier, but Dennis was a tough negotiator, so it took us a little bit longer than we needed or wanted. But it's a big market today. If you look at the various parts of what we call the $4.5 billion market, there are transceivers and data center interconnect, which is going to become more and more important as the needs for power make data centers have to be built further apart. Wupen is going to talk a little bit more about that. But there's also the transport network, both between data centers as well as what we're able to do.

Something Wupen is also going to talk about is the advanced switching inside the data centers to really drive a different level of power consumption and latency, which is really needed by the hyperscalers and the infrastructure providers. When you look at the combination of Lumentum and CloudLight, it's really a win for our customers, and I'm super excited because I've been out talking to our customers for the last five months about what we can do for them, not just on the next generation of transceivers, but really on the fundamental architecture needed for the next generation and the next, next generation.

Again, Wupen Yuen will talk more about that, but I see this really as an acquisition of 1 + 1 = 6, and that's really because of the fundamental technology that Lumentum has, as well as what CloudLight brings with respect to time to market, speed to ramp, and manufacturing infrastructure and capabilities that really provide the best-in-class capability that our customers are super excited to see. The other thing they're worried about is the geopolitical environment of the world. The acquisition provides us with a major manufacturer that's headquartered in the U.S. with capability outside of China.

When we talk to the hyperscalers, they've tolerated a lot of the Chinese-built manufacturing, and they're going to continue to do that, but they really want to make sure they have alternatives that are outside of China, and Lumentum provides that capability for them. Okay, as I said, the new Lumentum and CloudLight together are really first to market, and as shown by the chart on the left, you can see that 800 gig was 30% of the market last year. Of the transceivers we shipped in the December quarter, 75% were 800 gig; the balance were 400 gig.

And so the capability that CloudLight brings with respect to development of new products at the leading edge gives us confidence, and gives our customers confidence, that we'll be first to market with 1.6 terabit. On the Thailand manufacturing side, Dennis is going to talk a little bit more about that, but we have a large campus in Thailand with extremely capable teams and proven manufacturing infrastructure to be able to ramp quickly. This is what LightCounting says the mix of products will be in 2028. I really believe that 1.6 terabit will be a significantly bigger part of the market than that, as the hyperscalers will shift to 1.6 terabit even faster than what this chart says.

And I think that's really because the economics make sense, and the new GPUs need more and more data, faster and faster; connecting those GPUs and clusters together really will drive the need for higher speed than even 800 gig. So we're excited about where we are and about how fast that market is growing. Okay, I talked about this a little bit, but maybe I'll take it one step further and talk about what the hyperscalers tell me when I go out and talk to them, and how we're addressing that. Really, time to market is critically important to them: having that capability as soon as they can, and having it ramp very quickly with high quality, low cost, and low power consumption.

Power is such an important part of the design of new transceivers, and that's a capability I think the combined Lumentum and CloudLight team brings. We really do focus on how we drive the power down, the cost down, the quality up, and the ramp as fast as we possibly can. So it's an exciting time, and that resonates with customers. I already talked about geopolitical concerns. We have wafer fabs in the U.S., in Japan, and in the U.K., providing technology for the hyperscalers and the infrastructure providers, with a manufacturing footprint in Thailand, outside of China, that really enables us to give them confidence that we will invest for them and be there for them when they need to ramp very quickly. They also want to talk to us about what comes next.

I'll let Wupen Yuen talk more about this, but we've had in-depth conversations with customers about: is CPO the right thing? Are there external light sources, or what are those next-generation things that really provide the data to the GPUs when they need it, at the speed they need it? It's really quite amazing. So together, I do believe we provide a solution to these customers that is really unparalleled. I suggest that after this session you go across or down the street to see our booth. We've got a lot of interesting things going on there. We believe we have one of the first-to-market 800G ZR products.

Not just normal ZR, but also separate modes that address the needs of data centers moving further apart: Extended Reach Data Center Interconnect; Metro 800G, very capable 800G for those short metro hauls; as well as lower speed but longer reach, long-haul 400G and 600G. All these modes will be demonstrated at our booth, which really is getting traction with our customers. I already talked about co-packaged optics, but also solutions beyond 1.6T, at 3.2T.

We have a demonstration in our booth talking about that, which again has the hyperscalers excited about the capability, not just of making transceivers, but of having fundamental core technology at the chip level to be able to do the different kinds of designs that our customers need. As I said earlier, power consumption is critically important for these hyperscale customers. So we're going to be demonstrating linear receive optics, which really cut down the power consumption of transceivers at 200 gig per lane. These are the capabilities that enable a 4 by 200 for 800 gig, but also an 8 by 200 for 1.6 terabit. And so we're demonstrating those in our booth as well.
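The lane arithmetic here can be sanity-checked directly; a minimal sketch (the function name is mine, just illustrating lanes times per-lane rate):

```python
# A module's aggregate data rate is simply lanes x per-lane rate.
def aggregate_gbps(lanes: int, gbps_per_lane: int) -> int:
    return lanes * gbps_per_lane

assert aggregate_gbps(4, 200) == 800    # 4 x 200G -> an 800G module
assert aggregate_gbps(8, 200) == 1600   # 8 x 200G -> a 1.6T module
```

The same 200-gig-per-lane building block serves both generations, which is why one lane technology covers both the 800G and 1.6T products mentioned above.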

And then, as more and more data needs to get to the edge of the network: we've been deploying tens of thousands, if not hundreds of thousands, of 10 gig tunable SFP+ modules that have been on the market for probably 15 years. We're now introducing an extended-reach, 25 gig tunable product that enables the access as well as the wireless providers to upgrade the speed at the edge of the network, because of the need to get that data to the edge as rapidly as possible, which will in turn again drive metro and long-haul network demand. I suggest you go to our booth and see the data center demonstrations. Okay, next. I do truly believe that we are positioned to win. I think we're making the right investments in R&D and manufacturing infrastructure.

We have a broad portfolio of transceivers today, as well as samples to customers and development agreements with leading-edge customers to provide them with what's next. In-house capability: when we talk to our customers, one of the things they're concerned about is, "I don't want to rely on a single source of EML supply from Lumentum," because Lumentum has a very large share of EMLs across all of the transceiver customers. So we have EMLs, we have PICs, we have silicon photonics, and the combination of what CloudLight brought with the acquisitions we've done in the past, the IPG Photonics acquisition as well as the other photonics acquisitions where we brought silicon photonics teams together, enables us to rapidly turn silicon photonics designs.

So we can do whatever our customers want, given their use case and their desire to have diverse technology as well as a diverse supply chain. We're uniquely positioned in that. As I said before, we have wafer fabs throughout the world, in Japan, the U.S., and the U.K., with the infrastructure for manufacturing capability in Thailand, which really has a lot of traction; I'm very busy with a lot of customer meetings the rest of this week. So I think we're getting the traction we need and want, and I do believe we will grow faster than that 30% CAGR, certainly on the data center and hyperscaler side. So with that, I will turn it over to Wupen to talk to you in a bit more detail. Wupen?

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

Thank you, Alan. Everything okay? Okay, so I'd like to share my excitement about the machine learning world. I want to give you a few fundamental concepts. There's a lot of jargon, a lot of technology we could talk about; I want to give you some fundamental reasons why that's the case and really what's changing. Then we can talk about some of the implementation technologies that are going to fit into this picture. So, number one, the biggest change is really that the workload of AI machine learning has now grown so much faster than what chip scaling can deliver. We all know that Moore's Law is reaching its limit, right?

Usually, you could just scale up, go to higher-density nodes, and get all the compute together on a single chip. Not anymore. Fundamentally, that barrier is causing a major change in the optics industry. That's number one. Number two: if that's the case, now we are really shrinking the network. Imagine the national network we have today, AT&T or Verizon, with tens of thousands of nodes and all the connections coming together. Now all of that is shrunk down into this, because now you have tens of thousands of nodes of GPUs, everyone running at a very high speed. How are we going to actually connect them together at the very lowest cost and the lowest power?

If you think about that whole picture, whether it's optical switching, these different ideas of NVLink versus optical, or all the other places optics plays a role, it's all basically thinking about how you're going to transmit data from location A to location B at the lowest power and lowest latency. So the whole portfolio of technologies that Lumentum has is going to play a role now, because everything from long haul, optical switching, WSS, and wavelength routing to pumping and high-speed transmission at high data rates is going to be converging into data centers. I want you to remember that, because you're going to see more and more of this actually happening.

The third one is actually even more important: optics is now part of compute. It's not just connecting things; it's actually part of the compute. And why do we call it cloud speed? Because the data rate evolution is now tied to how you handle the workload growth. Number one. Number two, the cycle time is shrinking: the life cycle used to be 4 or 5 years in the cloud space; now it's 2 years, tied to every release of the GPUs. This is a really big deal. It certainly challenges the scale, the technology, the go-to-market manufacturability of the technology, and the scale of the company.

Therefore, thinking of the overall picture, I believe Lumentum is well-positioned because of its technology base, all the way from optical switching down to the small chips, with indium phosphide, silicon photonics, and laser technologies, to optical switching, amplification, and so on. So these are the big ideas I want to give you before we jump into the details of why CPO, why this, why that. These are the three big things I'd like to share with you. So, next slide, please. We talked about this already: fundamentally, the driver is that large language models require so much computation, and Moore's Law scaling is really slowing, from 16 nanometers to 7, to 6, to 5, to 3.

The scaling has become much more challenged now, and therefore, to do all the compute, you are forced to break it up into multiple chips. It's not just the computation itself, but also all the memory access. So fundamentally, that means you can use CoWoS to co-package a bunch of GPUs and memory, and, like NVIDIA just announced, now you're gonna have NVLink linking up 72 GPUs all together. You're trying to create a large computation unit; that's how you can very effectively do a large amount of computation. But despite that, you still have the need to go beyond a single rack, beyond a single GPU. So all these connections have to be made somehow.

Optics really is the only scalable approach to accommodating this rapidly increasing AI demand, and this is the fundamental reason why you're gonna see a lot more optics coming through. If you ask me, I get the same feeling as in 1999, when the internet was coming in and changed the optical industry for the last 20 years. Now AI is gonna be supercharging that, for this fundamental reason. So, next slide, please. Now, traditionally, the left side is the kind of traditional cloud networking: switch-based, cloud server-based. It's all well known; along with the telecom business, it carried our base business for the last 10 years. Everything was going pretty well; it was large volume and so on.

AI showing up to the world now has a really big effect, because we never had this much computation before. This new computation node has to be carried somehow, and that in itself is really driving up the use of optics. But fundamentally, there is also a very big difference: the AI clusters are built non-blocking. You cannot have blocking, because GPUs are really expensive; anytime a GPU is sitting idle, it costs a lot of money. So everybody wants to make sure the GPUs are fully utilized and all the switching is actually non-blocking. Non-blocking basically means a lot of links, and therefore a lot of use of optics.
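Why non-blocking means "a lot of links": in a two-tier leaf-spine fabric with equal uplink and downlink capacity, the optical uplink count matches the server-facing link count. A toy count, with hypothetical switch radix and cluster size:

```python
import math

# Toy link count for a non-blocking two-tier leaf-spine fabric.
def leaf_spine_uplinks(servers: int, radix: int) -> int:
    """Each leaf switch uses half its ports down (to servers) and half
    up (to spines); non-blocking means uplinks equal downlinks."""
    down_per_leaf = radix // 2
    leaves = math.ceil(servers / down_per_leaf)
    return leaves * (radix // 2)

# 1,024 GPUs on 64-port leaves: 1,024 optical uplinks, one per GPU link,
# before even counting a third tier.
assert leaf_spine_uplinks(1024, 64) == 1024
```

Every doubling of the cluster doubles the optical link count again, which is the "jump in the usage of optics" described here.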

So that in itself is causing a jump in the usage of optics. That's all the excitement we've seen so far, from last year's OFC to now. But there's a hidden one, which is not even talked about here yet, which is not yet in the numbers: when you start to open up the GPU-to-GPU and GPU-to-memory interconnect, that's gonna be another factor of 10 of bandwidth opened up. Today, NVIDIA, for example, is using NVLink to connect them; it is still all copper-based. But as they scale GPUs, and as they reach the limitation of silicon- or copper-based interconnects between GPUs, that will have to move to optics as well. When that happens, that's gonna be another factor of 5-10x.

So there are two fundamental big things. Today, it's scale and volume and non-blocking switching and scaled-up compute nodes. Tomorrow, it's going to be opening up NVLink to optics. That's gonna be a really big deal yet to come; it's not in anybody's forecast yet, and that's where CPO and all these low-cost, low-energy technologies will come into play. So, to carry all this fast-growing traffic, there's a tried-and-true technique: just go to the next highest speed possible. Why? Because typically, when you scale to the next higher data rate, you use the same kind of optics. A single lane used to be 100 gig; now it's 200 gig. So immediately, your cost per bit drops by nearly half.

Not exactly half, but nearly half. Secondly, it used to be the case that when you go to a higher data rate node, your power consumption doesn't go up by 2x, and therefore you actually get a power consumption advantage as well. But that is also slowing down, because Moore's Law is being reached; the generation-to-generation power reduction is getting less and less. Overall, though, to carry the traffic when the GPUs are growing so much faster, the only way is to use the highest-speed optics possible. And therefore you see NVIDIA at the leading edge of 800G, and whoever is running AI training workloads is gonna use the highest-speed optics possible.
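The "nearly half" cost-per-bit claim is easy to see with made-up module prices; the dollar figures below are hypothetical and only the ratio matters:

```python
# Doubling the per-lane rate with roughly the same optics nearly
# halves cost per bit, even if the faster module costs somewhat more.
def cost_per_gbps(module_cost_usd: float, lanes: int, gbps_per_lane: int) -> float:
    return module_cost_usd / (lanes * gbps_per_lane)

old = cost_per_gbps(500.0, 8, 100)   # hypothetical 8x100G (800G) module
new = cost_per_gbps(600.0, 8, 200)   # hypothetical 8x200G (1.6T), 20% pricier
assert new < old                     # cheaper per bit...
assert 0.5 < new / old < 0.7         # ...by nearly half, not exactly half
```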

That's the only way to do this computation in a cost-effective way. So we see 800G deployment skyrocket because of AI machine learning, and in about a year and a half, we're gonna see 1.6T skyrocket, also because of AI machine learning. That speed is really going up. Now, cloud speed has a second meaning: the cycle time is shrinking. Optics used to be for connection; now it's for compute. That's the big idea here. Optics is now part of the compute, and now every two years you're gonna see a generation. In fact, I would not be surprised at all if, because of the competitive pressure between these GPU companies, that cycle gets shrunk even faster.

So everybody's now competing to bring out the highest-speed GPU possible, and at the same time, we as optics vendors will also be pressured to bring out the latest optics possible to pair up with the compute. That's what we call cloud speed. Next slide, please. And then, of course, AI machine learning will be the first wave; think of this as waves and waves of technology leverage. So the first phase of the highest-speed data rate, 1.6T as an example, is going to be happening late this year into 2025. In two or three years, it's going to go away, because it switches to 3.2T.

But there are going to be waves and waves of, let's say, smaller users, later technology adopters, that will be using 1.6T technologies. So you're going to see waves and waves of this kind of adoption of the highest data rate. In aggregate, you have a very fast ramp, a few years of lifetime, and then it comes down; but within each of these are several waves combined together into a big wave. That's what's going to happen, and this will fundamentally grow our TAM. So you now have an AI machine learning-based fast-ramp TAM, followed by smaller but, in aggregate, similar amounts of TAM in combination, for every generation of technology.

So, next slide, please. This is an interesting one. Ever since the GTC event last week, there has been a question: hey, now NVIDIA is linking all these things together, 72 GPUs in a rack rather than just one server box. What's happening here? What's going to happen to the use of optics? The answer, interestingly, is that it goes up. So basically, what we see in the world is this: single-mode optics. Because the bandwidth is growing very fast, the only scalable technology going forward is going to be single-mode optics, so there will basically be more and more use of single-mode optics.

Meanwhile, copper technologies, as you can see from NVIDIA's announcement of the copper interconnections, are still the lowest-cost, lowest-power-per-bit technology. So now they're trying to increase the use of copper, but that doesn't change the use of optics, because the distances involved still favor optics. So fiber optics use doesn't really go down; it actually goes up. And the only way to temper the volume increase is by using higher data rates, 1.6T for example. What's actually squeezed in the middle is multimode. Multimode used to connect the servers to the middle-of-row switch and the like; that's gradually being replaced, at shorter distances by copper and at longer distances by single mode.

So what's happening is that single-mode technology, which Lumentum is really, really good at, is going to see a larger percentage of the market going forward, in its transceiver form, and eventually in more of a co-packaged optics kind of form. That's what we're really focused on. Now, multimode technology will have different use cases; we'll talk about that a little later. It doesn't mean it'll go away, but it's going to be a different use case later. So think about it basically as: higher data rates, larger GPU clusters. You're going to see more optics at the networking and connecting level; in the short distance, probably a little more copper; and multimode will be squeezed a little bit, but multimode technology will be used in some other activities.

Speaker 15

When you say that, just on NVLink, the framework that Jensen put out on the 72 with that copper at the back: you say it moves to optical because of the need to get the bandwidth up, or the need to get the reach beyond the 7.7 meters?

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

Correct. Correct.

Speaker 15

Which one is it? Is it the need to get the reach up, or is it just that the data rates of the copper will get too hot at the-

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

Yeah. Yeah, so probably the latter. The latter.

Speaker 15

Just the reach.

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

The reach.

Speaker 15

AC cables.

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

Yeah, yeah.

Speaker 15

AC connecting node to node.

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

Right. Right. Basically.

Speaker 15

So it's going to go all copper.

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

Yeah, yeah.

Speaker 15

Right.

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

We'll come back to that question later.

Speaker 15

Okay.

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

Yeah, yeah. So, next slide, please. Can we go to the next slide? Okay, so this is just a physics summary. Basically, what happens here, as we can see, is that single mode can cover everything, all use cases. And also, if you build a data center, the cabling is actually a very big investment; you don't want to change the cabling all the time. Therefore, if you choose to build a new data center for AI or machine learning, for the structured cabling you're going to use single mode. You're not going to use multimode for cabling.

And therefore, I would imagine most, if not all, new data centers will be built on single-mode fiber. Copper has its advantage, but multimode is being squeezed in the middle, as I was just saying. We imagine that the use of single mode will increase, and again, that's what Lumentum is really, really good at, with all the scale, technologies, and roadmap to support it. So, next slide, please. We'll talk about this again: with the bandwidth growing so fast, really the only way to scale is to use the highest-data-rate optics. And therefore we're now talking about 200G-per-lane optics, which will give you the 1.6T.
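The "physics summary" of which medium wins at which reach can be sketched as a toy selection rule; the distance thresholds below are illustrative, not spec values:

```python
# Toy medium selection by link reach (thresholds illustrative only).
def pick_medium(reach_m: float) -> str:
    if reach_m <= 3:        # in-rack: copper DAC, lowest cost and power
        return "copper"
    if reach_m <= 50:       # the band multimode used to own, now squeezed
        return "multimode or single-mode"
    return "single-mode"    # row scale and beyond: single mode covers all

assert pick_medium(2) == "copper"
assert pick_medium(500) == "single-mode"
```

The point of the slide is the last branch: single mode covers every distance, which is why new structured cabling tends toward single-mode fiber.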

Meanwhile, in the electrical domain, it's also moving from 100G per lane to 200G per lane, so they're pairing up, and that's actually the lowest-cost, lowest-power-consumption way to carry the traffic. That turning point is already seriously challenging the technology, because doubling the bandwidth in a short amount of time is actually not easy. But we do see that Lumentum has the technology to carry out 400G and 800G per-wavelength technology. At some point, you can argue, you might have to use coherent technologies in order to carry further bandwidth per wavelength or per lane. So now I can give you another picture here.

The evolution of technologies is not only confined to data centers: the technologies for long-haul transmission will gradually move into the data center as well. That's something that could happen in the next 5-7 years. Another point here is that, going forward, electrical connectivity at 200 gig per lane may be more limited, because, again, Moore's Law is reaching its limit; SerDes beyond 200 gig per lane is gonna be very challenging. The question becomes: what do you do after that? And this is where optical connectivity, where the opportunity of opening up NVLink or any kind of inter-chip connectivity link to optics, comes into play. That's the big idea.

We talked about 5-10x of the volume at that time, so that's gonna be the next huge opportunity, and a huge challenge for optics from a cost, scale, and power perspective. So, next slide, please. This gives you the flavor: today, for 1.6T, we have the EMLs, the silicon photonics, the PDs, and the CW lasers, all pairing up. Today, all of these are getting designed into modules, our own modules, our customers' modules, end customers' modules, and so on. This is ongoing now, and it will see volume in 2025, still pluggable-based. Meanwhile, a lot of people are seeing the same problem we just talked about: how are we gonna scale further?

If you compare the power consumption of 800G to 1.6T, the power scaling is now minimal, and there's a huge amount of power consumption associated with digital signal processing. You hear a lot of people talk about LPOs, LROs, all these different things, and CPOs. It's all trying to solve the problem of power consumption. So that's one area where people say, "Hey, can we integrate the whole thing together, remove the DSP in the process, and lower power consumption?" Good idea, and we're supporting it. The external light source technologies and silicon photonics technology will support that. It will take some time to prove it out, but it's gonna be a very important trend going forward.
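The power argument behind LPO, LRO, and CPO can be sketched with hypothetical wattages; the 50/50 split below is illustrative, as the real DSP share varies by module:

```python
# Illustrative: the DSP can be a large slice of pluggable module power,
# so removing it (LPO-style) cuts total power substantially.
def module_power_w(optics_w: float, dsp_w: float, keep_dsp: bool) -> float:
    return optics_w + (dsp_w if keep_dsp else 0.0)

conventional = module_power_w(8.0, 8.0, keep_dsp=True)    # hypothetical 16 W
linear_drive = module_power_w(8.0, 8.0, keep_dsp=False)   # DSP removed
assert linear_drive == conventional / 2                   # half, in this sketch
```

LRO-style designs would keep a reduced DSP on one direction only, landing between the two figures above.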

And that will then lead into chip-to-chip connectivity based on optics, which, again, is both to overcome the power consumption limitation and to overcome the scaling of electronic link speed; to go beyond that, what are we gonna do? So that is the dual reason people are talking about optical connectivity. Again, that'll be a huge opportunity for us to drive up the volume even more. And in that domain there's another opportunity, which is to connect them using VCSELs. You don't have to use single-mode optics in this case, because the distance is really, really short. You could connect all these high-speed chips with VCSELs. So that's another opportunity to say, "Hey, maybe that's a good way to construct low-cost, low-power-consumption optical connectivity over a short distance at the chip level."

Lumentum is actually engaged in all these optical discussions, all these activities. So we're not just looking at 1.6T. We're looking at 1.6T, 3.2T, 6.4T, and optical connectivity at the chip level, and how that's gonna change, you know, our business composition and the volume going forward. So next slide, please. So, actually, Alan talked about this already, right? If you look at, you know, each GPU rack today, it draws like 2,000 watts or something. A crazy amount of power consumption, right? So today, the power drawn by networking is putting a big limitation on how much compute you can put into the data center.

So now you hear people talk about how data center traffic, which used to be confined within data centers, is now leaking out of data centers, right? Dispersed data centers need to be connected together, not at the intra-GPU level of connectivity, but still very big. A lot bigger than what it used to be. So DCI now carries a different meaning: it could also become connectivity driven by the growth of AI machine learning clusters, right? So we see a lot of opportunity there to start to drive the DCI market higher as well, right? We're at the early phase of that. We haven't seen a huge amount of, you know, upside, but we can see that coming. So next slide, please.

So in that case, though, we're getting really excited about, you know, ZR. ZR used to be a, you know, cost reduction, efficiency improvement, management simplification play for the hyperscalers. Now, it could be looked at as, hey, I need to scale my overall AI bandwidth to connect different data centers. How am I gonna do it at lower cost and without having to, you know, put in a huge amount of DWDM infrastructure, right? So now we're seeing more and more of these kinds of opportunities. Therefore, we're really excited about the 800G ZR products, which we're actually now sampling to the market, right? You're gonna see some demonstrations. And now, we call it cloudification. Basically, people start to build more and more point-to-point networks, just like in the cloud, to build out the network.

Then we're gonna see more use of pluggable transceivers in the transmission domain, right? That's a very exciting opportunity for us. We think the AI proliferation will start to drive higher usage of that technology as well. Next slide, please. Right. Finally, coming back to my first big idea: you know, when you have tens of thousands of GPU nodes, now you have to, you know, connect them in the lowest-cost, lowest-latency way possible, right? Now, optical switching is coming into play, right? How am I gonna actually make this non-blocking and scalable, scalable not just in terms of radix or connectivity, but also for data rate, right? When your data rate is changing so fast, every two years, if you have to change the cabling all the time, it's gonna be really expensive.

Power consumption is also a big challenge. Therefore, how about using optics to connect them, right? Optical connectivity, or the optical circuit switch, right? So this technology is now being developed. And we do see that it will become a pretty important technology in AI machine learning clusters going forward. Going forward, you can even imagine we have, you know, GPU-level optical connectivity at full data rate. You might need to use DWDM in conjunction with optical switching, which basically means wavelength selective switch kind of technology, plus free-space full-fiber switching capabilities. That's also actually in the cards, right? So imagine the data center at that time will really become like a telecom network, right? DWDM wavelength-level routing, you know, and so on and so forth, right?

You can start to put a picture together. Again, this is what's getting me very excited, because all of this uses Lumentum technology. You know, really everything, from telecom networking to data networking, everything that we have in our portfolio will be applicable to the future of AI machine learning evolution. So next slide, please. So, to summarize, right, the story is, in the next three to five years, you're gonna see more and more of the speed increasing, you know, from 800 to 1.6, to 3.2, to even 6.4. The technologies are all in the optical chips: how we're gonna package at low cost, how we're gonna scale up really quickly on a two-year product life cycle.

Meanwhile, right, the AI machine learning traffic patterns are encouraging the use of the optical circuit switch. And that will, you know, become a reality in the next couple of years, right? So that's really the next three to five years, right? And meanwhile, you know, the leakage of AI traffic out of data centers is actually driving the growth of DCI technologies, where you have amplification, you have ZR kinds of technologies being used. We're gonna see that going up as well. And finally, when the GPU traffic really opens up and gets carried by optics, because of the Moore's Law limitation on interface speed, that's where you're gonna see optics going to the next level, into compute.

Then we're gonna see exponential growth in the use of optics, you know, five, seven, to 10 years from now, right? Again, Lumentum has all the technology base, the scale, and the roadmap to support this evolution, and that's what we're here for. That's why we're so excited about our future going forward. So I'll pass this on now to Dennis, to bring us back to today and tell us how we're gonna actually implement the near-term roadmap with transceivers.

Dennis Tong
Group VP and General Manager of Cloud Networking Platform, Lumentum Holdings Inc.

Thank you, Wupen. Good morning, everyone. My name is Dennis Tong, and I joined Lumentum through the recent acquisition of CloudLight. We started CloudLight back in 2018 with a belief and a goal to set up a manufacturing platform to help... Perhaps we should go to the next slide. Yeah. With a goal to set up a manufacturing platform to help scale silicon photonics technology quickly. And based on that platform, which I will show in the next slides, we intend to differentiate ourselves by focusing on the latest generations of transceiver products. Can you go back one page, please? Yeah. So, what does that mean, right?

That means, if you look at this chart here, in the quarter ending September 2023, 800G products represented approximately 37% of our revenue. One quarter after that, 800G products represented 75% of our total revenue. That indicates our ability to quickly migrate from one generation to another, based on the platform we have set up at CloudLight. Next page. Before CloudLight, if you looked at the silicon photonics ecosystem, it was pretty fragmented. Starting from the left, you have a wafer foundry to make your design. You then pass your wafer to a typical OSAT, which will do wafer-level testing and dicing. And after that, you ship the diced individual devices to a contract manufacturer to make your product, be it CPO, be it a pluggable transceiver.

At CloudLight, we aimed to put all of these processes under one roof, and we believe that by taking ownership of them, we can enable, you know, data transparency throughout and across the whole flow. And by doing that, we can achieve better yield management, better efficiency, and more cost-effective manufacturing. Another beauty of this platform is that once you have it set up, because of the nature of silicon photonics technology, you don't really have to change that much when you migrate from one generation to another. And that allows us to achieve faster product qualification and rapid time to market. Next page, please. On our platform, we have been able to develop some very interesting in-house automated equipment, all designed by our CloudLight team.

Each of these machines is customized for our proprietary in-house precision micro-optics assembly processes. I've shown a few examples here, including, you know, laser die to lens alignment, silicon optical bench to PIC alignment, and also fiber attachment to the silicon photonics. All very critical in productizing silicon photonics. And together with our wafer-level testing and our bare-die DSP co-packaging capability, we believe we are offering a fully integrated product development capability here. Next page. Yeah. So we talked about 400G and 800G. This is our roadmap. We launched the 800G product into volume production early last year.

And then, the plan is that we will launch the 1.6T product in the second half of this year, and 3.2T in 2025 and beyond. What I also want to mention is that, beyond silicon photonics products, at CloudLight we also offer a full range of multi-mode products dating back all the way to 400G. And if you look at our track record of consistently delivering generation after generation of different products, we are very confident we can continue to do that to support customer needs. Yeah, next page. Yeah. So we talked about capability, we talked about the product roadmap. I want to shift gears and talk about scalability. Shown here is Lumentum's manufacturing site in Thailand.

Each of these buildings here has a footprint of about 7,000 square meters, okay? Phase one is already in full operation, and phase two offers a lot of flexibility for further expansion and growth. Yeah, next page. When our customers visit the site, they love it, not just because of the scale of it, but because of the way we manage the facility, including process control, the Kaizen process, and the culture of zero defects. Next page. Next, please. The more exciting thing is that we are executing our plan to expand this facility per our previous announcement. Yeah, next page. Okay. So to sum up my part, we believe Lumentum is in a unique position to capture the AI growth. We offer a broad range of products, like what we have just described.

Wupen has covered a broad range of components, and our Thailand facility certainly offers flexibility for us to grow and capture the massive AI opportunity. Thank you.

Wajid Ali
CFO, Lumentum

Thanks very much, Dennis. I think I've been introduced to everybody, Wajid Ali, CFO of Lumentum. Thanks very much for being here this morning, and I'm really sorry that the chairs we were promised at the back haven't come through, but we'll have to talk to somebody about that. So, we'll present our financial model kind of post-acquisition of CloudLight in a couple of slides. And I think what you'll see is that we're taking a very balanced approach to the customer activity that Alan spoke about. I think all of you probably caught on to the fact that he said he's pretty much busy this week with customers working through opportunities that we're seeing given the acquisition that we've just done.

It's been very similar for the last five or six weeks, as I've seen it. And you know, I think because of that and, you know, the roadmap work that Wupen Yuen's and Dennis Tong's teams have been doing, we've taken a look at what we think our business is really gonna look like from a mix standpoint moving forward. I'm sure this is not a surprise to any of you, but just to, you know, put a fine point on it and put some numbers around it: about three-quarters of our business was cloud and networking exiting fiscal year 2023, and the other 25% of it was industrial products, between our consumer business and our lasers business.

Moving forward, our expectation is that our cloud and networking platform, between all the opportunities that we're seeing, and quite frankly, what you'll see on the next slide, you know, where we're investing our dollars and deploying our capital, our expectation is that that'll be greater than 85%. Not that we don't expect our lasers business to recover and for the new ultra-fast products that we're investing in to support some growth in lasers as well. But just given the step function opportunities and revenues that we're seeing from customers, and multiple of them, just, you know, between the hyperscalers and AI infrastructure providers, those step function opportunities are really gonna shift the mix of our revenue base as we look forward.

We'll start to see some of it in our fiscal 2025 and it'll be more pronounced in fiscal 2026, based on the customer activity that we're having right now. Okay? So, you know, we're investing for growth, and we're investing for growth on the cloud and networking platform. And, you know, you hear a lot of talk from us on our earnings calls around lowering our fixed cost base.

And really, the reason for lowering our fixed cost base is not just to improve our operating performance as a company, but it's also to make room for the R&D investments that we are supporting for different flavors and variants of 800G, and for the DCI opportunities that Wupen Yuen talked about, as well as 1.6 and 3.2 generation of products. So we're not backing off from those critical R&D investments, but we are lowering our overall fixed cost base so that we can all do it within an envelope of operating expenses that's reasonable for the business. You saw the phase one and phase two pictures that Dennis presented.

You know, we've already started to take action on investments that we need to make from a capital standpoint, whether that's buildings or taking a look at what new tooling we need and what new equipment that we need in order to support the growth. And so we will see some elevated CapEx over the next 12-18 months versus our historical CapEx investments, but those CapEx investments will be very much focused in on our Thailand facility and the equipment and infrastructure that's needed to support the transceiver growth that we're seeing.

As all of you remember, we've already made a lot of investments in our Sagamihara facility, and so, although we'll probably need a little bit more incremental investment for the 200G EML growth we're expecting to see, the majority of the CapEx will go to support our facilities in Thailand. And through those capital deployments, our expectation is that we're gonna grow top-line revenue. And you'll see that much of the operating profit improvement that we're expecting is gonna come through that growth of top-line revenue, not just because of the incremental margin dollars that flow through from those sales, but also the improvements in manufacturing capacity utilization. On the last conference call, we also stepped up our commitment to accelerate NeoPhotonics synergies.

So if all of you remember, we had come in with that acquisition, committing to $60 million of synergies. Six to 9 months later, we re-upped that to $80 million, and on the last conference call, we were able to increase that to $100 million of NeoPhotonics synergies. Again, opening up room for us so that we can reinvest in all the opportunities we see, with the cloud and networking platform, and also to keep us within an envelope that allows us to maintain a reasonable and balanced business model. So what you can see on this chart is, you know, the consensus estimates for fiscal year 2024. And what we've done is you see, two blue bar charts, on the right. You don't have to take pictures. It'll be out on the web.

And, you know, many times, as I've been invited to speak at fireside chats, I've constantly said that our goal number one from a financial standpoint is to get back to double-digit operating margins. And for us to be able to get back to double-digit operating margins, we really need to be north of $400 million a quarter of revenue, just given the shift in mix that we've had, which all of you are very familiar with. And, you know, that double-digit operating profit really will come from the growth and margin dollars that come from incremental revenue, but also through some of the COGS efficiencies that our teams are working through, as well as the fixed cost reductions that we're also seeing.

The way to think about that north of $400 million of revenue is really just, you know, a recovery in our telecom end market, some natural recovery in lasers, and some normalization in the product transition that we're going through with the former CloudLight business. So once that normalizes, we should be able to comfortably be north of $400 million a quarter again, and that should give us a double-digit operating margin. And then, you know, I mentioned right at the very beginning of my presentation, we're expecting to see step function improvements in revenue.

With the activity that we've got going on with new customers and new products, each one of those opportunities is sizable in nature, and so it won't be as linear as the growth that we might see from where we are now to the north of $400 million. We should all expect it to be much more stepped in nature, and that's why we didn't put a year on here. We just said, okay, once we get to that level of quarterly revenue, we should be able to comfortably get back into a 17%-20% operating margin cadence. That allows us to continue to invest in capital and the depreciation expenses associated with that capital, and it allows us to invest in R&D as well.

As you can probably tell from Wupen Yuen's presentation, he's got a lot of opportunities that he can, you know, invest his R&D in. And so we want to make sure that we've got, you know, significant space for that. And so that's why we're communicating a 17%-20% operating margin, even at a $600 million quarterly revenue. Okay, so the business model. So again, you've got fiscal year 2024 consensus estimates, and then we've taken the last bridge chart and basically put it into two columns so that you can take a look at what our expectation is on operating expenses and on gross margins.
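The two thresholds in the model ($400 million a quarter for a double-digit operating margin, a higher revenue level for the 17%-20% range) can be sketched as simple arithmetic. The gross-margin and opex inputs below are hypothetical round numbers chosen only to illustrate the mechanics, not figures from the presentation:

```python
# Back-of-envelope operating-margin model for the thresholds discussed.
# Gross-margin and opex figures are illustrative assumptions, not guidance.

def operating_margin(revenue_m: float, gross_margin: float, opex_m: float) -> float:
    """Operating margin as a fraction of revenue (inputs in $ millions)."""
    return (revenue_m * gross_margin - opex_m) / revenue_m

# With an assumed ~33% gross margin and an assumed ~$92M of quarterly opex,
# $400M of revenue is roughly the double-digit break point:
print(f"{operating_margin(400, 0.33, 92):.1%}")   # 10.0%

# Incremental revenue flows through at gross margin while opex grows more
# slowly, which is how a higher revenue level reaches the high teens:
print(f"{operating_margin(600, 0.35, 100):.1%}")  # 18.3%
```

The design point is that operating leverage, not just margin-rate improvement, does most of the work between the two revenue levels.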

You saw on the very first slide that I showed, we're expecting a significant shift in mix between our two different platforms, and it's really that shift in mix and the types of new products that we'll be shipping that's causing a shift in the gross margin model for the company. You can see that we've got to continue to keep, you know, stringent controls on operating expenses while balancing the different R&D projects that we have, in order to maintain the operating margins that we've got listed on this sheet. Again, this will be on the web at 8:30. Okay. So, I think I've talked about all of these things.

We've got some structural cost savings that the team is working through between the NeoPhotonics synergies, as well as some of the fixed cost reductions that we're actually making, you know, this quarter and next quarter. Those should start flowing through into our P&L. That will really allow us to give us some opportunity to invest in the R&D projects that we've got to fund the roadmap that Dennis and Wupen talked about.

We've got the balance sheet in order to be able to not only leverage our operating cash flow to support the CapEx that we need, but given the fact that it'll probably be a little bit elevated for the next 12-18 months, we have a sufficient balance sheet to be able to do that and remain comfortable from a CapEx standpoint as well. Okay, so with that, I'll move it over into Q&A. Alan, are you okay? Kind of-

Alan Lowe
CEO, Lumentum

I'm great.

Wajid Ali
CFO, Lumentum

Oh, okay.

Alan Lowe
CEO, Lumentum

I'm great.

Wajid Ali
CFO, Lumentum

All right, thanks.

Alan Lowe
CEO, Lumentum

I think we're all going to sit up here.

Wajid Ali
CFO, Lumentum

All right.

Alan Lowe
CEO, Lumentum

You can keep that.

Wajid Ali
CFO, Lumentum

You want me to keep that?

Kathy Tai
Head of Investor Relations, Lumentum

So I've asked that if you have a question, please make your way to the microphone, the standing mic in the center. Say your name and your affiliation, and we'll take you in the order of people lining up.

Alex Henderson
Analyst, Needham

Perfect. So Alex Henderson, Needham. I was hoping, Wajid, you could talk about when the step functions are likely to happen. It seems probable that the telco one is, at the earliest, calendar 1Q25. But on the optical pieces, there's the chip piece for, you know, the optical elements, and then there's, you know, the ramp in capacity to produce AI products, whether that's 800 gig or 1.6 terabit. When are those three likely to kick in? My guess is that it's probably December 2024 for the chips, and for the AI products, ramping into the March 2025 quarter. Is that kind of the window? Thanks.

Wajid Ali
CFO, Lumentum

Probably Wupen Yuen is better placed to start on that, and then I can back him up.

Alan Lowe
CEO, Lumentum

You got a mic.

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

Oh, good, a mic. So, can you repeat the specific questions again?

Alex Henderson
Analyst, Needham

Basically, when do the step functions happen?

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

Um.

Alex Henderson
Analyst, Needham

on the cloud, like, revenue, based on our capacity?

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

In the 200 gig chips.

Alex Henderson
Analyst, Needham

The 200 gig chips.

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

Yeah, the 200 gig ramp, let's start with that first, right, Alex? It's going to be, I would say, the second half of next year in volume, right? So calendar 2025, right? So today is all about sampling and getting qualification ready, but I would say the real volume for 1.6 is going to be the second half of calendar year 2025. The 400 or 800G, the kind of current generation products, I think it's a little bit uncertain at this point. You know, I wouldn't put a date on there yet. A lot of activities are going on right now, but I would... You know, I think we'll probably need to give more of an update a little bit later.

Alan Lowe
CEO, Lumentum

Yeah. I mean, I think the sampling of the chips are happening now.

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

Now.

Alan Lowe
CEO, Lumentum

Sampling of the transceivers will happen this summer, and it'll take some time to qual and then ramp into production.

Alex Henderson
Analyst, Needham

Telco?

Alan Lowe
CEO, Lumentum

Sorry?

Alex Henderson
Analyst, Needham

The telco.

Alan Lowe
CEO, Lumentum

Telcos. Oh, telco? You know, I think that it's a matter of the carrier spending, right? It's slow, and the inventory is there. So I still think that's probably more of a, you know, at least the first half of fiscal 2025, so you know, the September or December quarter before we have normalization. You want to translate that into money?

Wajid Ali
CFO, Lumentum

No, I think you covered it.

Alan Lowe
CEO, Lumentum

Okay.

Tri Lam
Analyst, Arcsight Partners

Hi, Tri Lam from Arcsight Partners. Quick question on the gross margins going from 33% to 34.7%. You talked about product mix lifting that up. What about the impact of customer mix, right? Customer mix, deal size, probably drives more margin than product mix, right?

Wajid Ali
CFO, Lumentum

Yeah, yeah. You picked it up very well. Yeah, I was trying not to highlight that, but yes, the product mix is coming with new customers, and new customer margins are better than prior customer margins, at least from the deal activity that we're seeing right now. And so, that in combination with the fact that 200G EMLs are a chip business will give us more of a tailwind on margin, and the 800G ZR+ products will also give us a tailwind in margins. But yes, on the kind of classic 800G and 1.6T products, we're seeing better margin opportunities than what we've historically seen.

Tri Lam
Analyst, Arcsight Partners

Due to customer mix as well?

Wajid Ali
CFO, Lumentum

Correct.

Tri Lam
Analyst, Arcsight Partners

Okay, great. And then the second question is, you know, you talked about the increase in the number of transceivers, but I think you're kind of assuming that the spine remains at 64 ports. Like, look at NVIDIA, they just released 144 ports, which means you can now collapse, you know, like a 9,000 or so GPU cluster. That used to require 3 networking tiers; now you can collapse it to 2. You know, if more and more of these spine switches come in at 144 or even higher port counts, wouldn't the number of transceivers you need to address the same size of cluster go down? Because of the way that NVIDIA does NVLink, it doesn't have to follow that natural topology-

Wajid Ali
CFO, Lumentum

Yeah.

Tri Lam
Analyst, Arcsight Partners

that you would have these spines. So therefore

Wajid Ali
CFO, Lumentum

Right.

Tri Lam
Analyst, Arcsight Partners

You can actually get less transceivers if you build more dense spines, right? So isn't that... Would that be-

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

Yeah. So basically, I agree with that, right? So today, it's like a 3-to-1 ratio between transceivers versus

Tri Lam
Analyst, Arcsight Partners

Yes.

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

-the GPU.

Tri Lam
Analyst, Arcsight Partners

Yeah.

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

3 to 2.5, right? And with that condensing from 3 to 2 layers, it becomes, like, 2.5. Okay. So yes, but still a big jump, right?

Tri Lam
Analyst, Arcsight Partners

Okay.

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

With GPU continue to be deployed, and remember, that's only for one GPU.

Tri Lam
Analyst, Arcsight Partners

Yeah.

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

Right? GPU vendor. There's actually a bunch of others.

Tri Lam
Analyst, Arcsight Partners

Sure.

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

Whatever they're called... You know, in aggregate, the volume is still huge, right?

Tri Lam
Analyst, Arcsight Partners

Yeah.

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

There's still a step up of the so-called front-end network, traditional data center networking, right?

Tri Lam
Analyst, Arcsight Partners

Okay.

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

This is a step up, and you know, people are trying to use less transceivers, less links, but the step up is still there, right?

Tri Lam
Analyst, Arcsight Partners

Okay.

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

Yeah.

Wajid Ali
CFO, Lumentum

One thing to add to that. The objective in doing that is to be able to build more.

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

Yeah.

Wajid Ali
CFO, Lumentum

Right?

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

Connect more.

Wajid Ali
CFO, Lumentum

The goal isn't to save money or to reduce. The goal is to be more efficient.

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

More dense for bigger networks

Wajid Ali
CFO, Lumentum

... more scalable, so you can build a much larger cluster. So I think don't lose sight of that. It's not zero-sum; the intention is actually to lessen or minimize what is a limiter, so that more can be added overall.

Tri Lam
Analyst, Arcsight Partners

Okay, so more dense, but bigger networks.

Wajid Ali
CFO, Lumentum

Yes.

Tri Lam
Analyst, Arcsight Partners

Okay, got it.

Wajid Ali
CFO, Lumentum

I think that's the key point.
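The topology arithmetic behind this exchange can be sketched as follows. The leaf-spine capacity formula is the standard one; the transceiver-per-GPU ratios are the rough figures quoted above, and the specific counts are illustrative:

```python
# Sketch of the topology arithmetic in the exchange above: a two-tier
# leaf-spine built from p-port switches (half of each leaf's ports down
# to GPUs, half up to spines) can reach roughly p*p/2 endpoints, which
# is why 144-port spines let a ~9,000-10,000 GPU cluster drop from
# 3 tiers to 2. Ratios below are the rough figures quoted in the Q&A.

def max_gpus_two_tier(ports: int) -> int:
    """Max endpoints in a non-blocking two-tier leaf-spine."""
    return (ports // 2) * ports     # (hosts per leaf) * (number of leaves)

print(max_gpus_two_tier(64))    # 2048  -> ~9,000 GPUs needs a third tier
print(max_gpus_two_tier(144))   # 10368 -> ~9,000 GPUs fit in two tiers

# Fewer tiers means fewer optical hops, so fewer transceivers per GPU:
gpus = 9_000
for tiers, ratio in [("3-tier", 3.0), ("2-tier", 2.5)]:  # quoted ratios
    print(tiers, int(gpus * ratio), "transceivers")
```

This matches the point made above: denser spines reduce transceivers per GPU somewhat, but the goal is larger clusters, so the aggregate volume still steps up.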

Speaker 15

Hey, guys. Thanks for hosting the event today. I appreciate it. So, Alan, I saw you at the event yesterday as well at Optica, and I think you heard on stage an NVIDIA representative talk about the cost differential between copper and optical. And I think Wupen showed a slide about 800 going to 1.6T and kind of showing single-mode fiber coming into both the high-speed radix side and also on the front-end side, from server to switch. What makes you feel like you can get the cost profile down? I think yesterday it was like $0.50 a gig to, you know, $0.05. Like, that differential is pretty large. How can the cost profile come down enough for you guys to take those sockets where copper exists today in such volume?

Wajid Ali
CFO, Lumentum

Do you want to try?

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

Do you want to start with it? Okay, so I think that's the biggest barrier, right? So today, let's talk about these two steps, right? Step number one is the optical transceiver. We talked about that already, right? To bring it down from $0.50 per gig to, like, $0.05 per gig, and having, like, 2 picojoules per bit of, you know, power consumption, is challenging, right? And therefore, it has to come down to optical integration, intimately, right, with the silicon. Right? That's where silicon photonics really does come in, right? But how to achieve that, I don't think anybody knows yet, right? So that's one area of evolution, or innovation, that's really sorely needed by the industry. As for us, you know, we're definitely part of that equation. Another approach to doing that, we talked about this VCSEL-based interconnect, right?

That's one way, because VCSELs today already have scale. Right? They have the sensing-based scale, right? They have pretty good power consumption, you know, on a power-per-bit, energy-per-bit basis. And therefore, that's a potentially attractive solution to achieving it. Can we get to, like, $0.05? I don't know. Right? But it's at least in the right direction. So that is where the optical innovation will have to happen, you know, to scale beyond the limitation we talked about, the Moore's Law electrical bandwidth.
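The targets quoted in this exchange ($0.50 versus $0.05 per gig, 2 picojoules per bit) can be worked through for a single 1.6 Tb/s link; this is order-of-magnitude illustration only:

```python
# Working through the cost and power targets quoted above for a single
# 1.6 Tb/s link. Per-gig prices and the 2 pJ/bit figure come from the
# discussion; everything here is order-of-magnitude illustration.

RATE_GBPS = 1600                     # 1.6 Tb/s link
cost_today = RATE_GBPS * 0.50        # ~$0.50 per Gb/s today
cost_target = RATE_GBPS * 0.05       # ~$0.05 per Gb/s target
print(f"${cost_today:.0f} -> ${cost_target:.0f} per link")   # $800 -> $80

# 2 pJ/bit at 1.6 Tb/s: watts = (joules/bit) * (bits/s)
optics_watts = 2e-12 * 1.6e12
print(f"{optics_watts:.1f} W per link at 2 pJ/bit")          # 3.2 W
```

A 10x cost reduction and a few watts per link, in other words, which is why the answer above frames this as requiring tighter optical integration rather than incremental module cost-downs.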

Speaker 15

Yeah.

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

Very good question. Yeah.

Speaker 15

Yeah. The other one is just for Wajid. So, like, you showed, you know, various scenarios for fiscal year 2024 with a pretty wide range, like the street numbers, a $450 million run rate, a $600 million run rate. Does some of that need to happen to get to the $450 or the $600, or can you just describe the variations between those outlooks?

Wajid Ali
CFO, Lumentum

Yeah. So there wasn't any year put on it, and a lot of it is because of the timing of the telecom recovery. So the way to think about getting us back to north of $400 million a quarter is recovery in our telecom business, some normalization just between the product transitions we're seeing in the former CloudLight business, and kind of normalized recovery in our lasers business. So we certainly expect that to be a fiscal year 2025 mode. Is it Q1? Is it Q2? That's the part that we really don't know. And then the step function from $425 million up to the $600 million is really the opportunities that we see from a customer standpoint that we're currently bidding on and taking a—just taking a portion of those.

If we took more of those, then it wouldn't be as balanced a financial model. But you know, even kind of one of the customer activities can comfortably double our datacom business, even if you take a look at some conservative numbers. And so that really needs to happen in order for the $600 million a quarter to happen.

Speaker 15

Helpful. Thank you, guys.

Alan Lowe
CEO, Lumentum

Yeah, maybe just to give you a little bit more color. A lot of the discussions I'm having with the hyperscalers today are about how do I reserve that capacity you're putting in place today, both at the EML level and in the back end? And so that's a good discussion to have. Now, they have to sign up, and we have to execute our product development, and then, you know, the business will come. So I'm confident it'll come. The question is, is it, you know, the middle of calendar 2025, or the beginning, or the end, right? So that's kind of the question of when they are going to be ready for 1.6T, because that's really what's going to ramp up strongly in Thailand.

Wajid Ali
CFO, Lumentum

Thanks, guys.

Ananda Baruah
Analyst, Loop Capital

Hey, guys, thanks. Ananda Baruah with Loop Capital. Thanks for doing this. This is actually super helpful. Two, if I could. On the new customer quals for CloudLight, what are the meaningful hurdles or thresholds that you guys are working through? I guess, really what I'm wondering is how to probability weight the opportunity to get qualed at the remaining hyperscalers, and then how to probability weight legitimate inclusion at volume once the quals occur. I'll stop there. I have a quick follow-up as well. Thanks.

Alan Lowe
CEO, Lumentum

I'll give you my perspective, and then maybe Dennis can chime in as well. I mean, the gating item for 1.6 terabit is the DSP. So today, we don't have the DSP. We're going to get it soon. Now, it needs to work. I think on the optics side, the silicon photonics, CW laser, and the EMLs, that's—we're ready. So it's a matter of putting it all together and letting Dennis do his magic.

Dennis Tong
Group VP and General Manager of Cloud Networking Platform, Lumentum Holdings Inc.

I will echo that opinion. I think it's an industry-wide challenge that we are all gated by the readiness of a DSP. On the optics front end, I think we are there, and on the 200G-per-lane product, be it 4x200 or 8x200, I think we are making very good progress in terms of getting customers to qualify it.

Ananda Baruah
Analyst, Loop Capital

Great, thanks. And then, on the differentiation, how would you guys describe the CloudLight differentiation? It sounds like the manufacturing process is part of that. Is there anything else? And do you consider yourselves to have any sort of meaningful moat on the manufacturing differentiation as well, something that can't be replicated in any near-term period of time? Thanks.

Alan Lowe
CEO, Lumentum

Well, I'd say that most of the transceiver manufacturers today buy their lasers, right? Many of them, most of them, buy them from us. And so that vertical integration of EMLs, VCSELs, CW lasers, silicon photonics, all within our house, gives us certainly a cost differentiation, but also a time-to-market differentiation. So I think that's a moat. Is it sustainable for the long term? On cost, I think it is. On time to market, I think, you know, eventually they'll come along, because we're gonna continue to enable them with our laser chips. So I'd say that's part of it, and what CloudLight has done with respect to the manufacturing infrastructure and capability is quite amazing.

So I think it's a combination, but it really comes down to how you can design something for lower power and ramp it up as fast as you possibly can.

Wajid Ali
CFO, Lumentum

I think. Sorry, I'm not the technical guy, but.

Alan Lowe
CEO, Lumentum

Yeah, you're right.

Wajid Ali
CFO, Lumentum

But, you know, the capacity investments that our customers see us making, and, you know, Alan mentioned earlier that the discussions he's having with leading hyperscaler customers are around capacity reservation. I think that is giving us an edge from a competitive standpoint, because we are investing, and we've got something in place. And so the time to market for our customers and security of supply are really important, too. So I'm sorry, you were going to answer the rest. I had to say that.

Dennis Tong
Group VP and General Manager of Cloud Networking Platform, Lumentum Holdings Inc.

Yeah. So people are starting to talk about scaling up the volume of silicon photonics. The platform I showed, we actually started to build five or six years ago, and it is, in our opinion, a proven platform. The shipping volume of 800G optics is coming out of that platform, and we are still perfecting it, making it more efficient and cost competitive.

Ananda Baruah
Analyst, Loop Capital

Thanks a lot.

Chris Rolland
Analyst, Susquehanna

Hi, guys. Chris Rolland, Susquehanna. I have a few for you, Kathy. You can cut me off if I go over. I guess, first of all, as we think about 1.6, what do you see Lumentum's share, let's say, in the optics market, the laser market? And then, Wupen, you had some interesting things to say regarding kind of the single-mode, multimode VCSEL versus EML debate there. I think 200 gig VCSEL is still a viable lower-end market. Maybe talk about how that shifts, and put that into, you know, answering that question: where do you see Lumentum's share in the laser market at 1.6?

Speaker 16

Yeah. I would say our share is going to increase, given the challenges in going from 100 gig per lane to 200 gig per lane at the optical component level. We have, as Alan alluded to, a very healthy market share at current lane speeds, so I think it's only going to increase. I don't know if you want to add to that.

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

Yeah, so, on the single-mode laser side, right? The 100G lane is actually very challenging. When you go to a higher data rate, you know, all these RF problems become exacerbated. So we're already solving those problems with customers, and we solve them at the chip level. So we think our share is gonna grow in the single-mode space. And then on the multi-mode side, I think at 100G, like you said, you know, we heard the announcement this week, right? For that connection between the server and the switch, for example, you can still use multi-mode devices, right?

But I think just with the way the GPUs are scaling now, right, and the distance limitation, the corresponding share of single-mode devices is going to increase, and that will be at the expense of the multi-mode devices, right? In the current architecture of the connectivity.

Speaker 16

And maybe to add to that point: in these AI clusters, right, I mean, you've heard about copper and lots of excitement about the announcements last week. The reality is you're only talking about a meter, two meters, so even at these higher speeds, copper cabling can scale. The issue is, as you go more than that, a few more meters, all of a sudden the crossover window between multimode and single-mode, where multimode is incapable of going at 200 gig line rates, pushes towards single-mode. So this window where multimode is applicable, when you're talking about an AI cluster, which is a little bit different than a broader data center, really collapses, and so we expect to see a lot more single-mode cabling.

Now, a point that's come up that I want to reiterate is, we're talking about cabling, but as Wupen also alluded to, down at the chip level, being able to communicate at 200 gig per lane electrically, going even from chip to chip or within a circuit board, is the real challenge. For those of you who weren't attending the technical sessions over the last few days, this is sort of the industry's big deal: the reason for pushing optics closer to the chips isn't to replace that one meter of cabling within the rack. It's that replacing, say, centimeters of circuit board at 200 gig per lane is where optics is starting to push much closer to solve that problem. And as it solves that problem, it's already in the optical domain, and it'll probably stay in the optical domain.

Chris Rolland
Analyst, Susquehanna

I mean, there are three big players in the laser market. Do you hazard a guess as to your share at 1.6?

Speaker 16

I don't think we want to speculate on share. I certainly think we have more than 30%-40% market share.

Chris Rolland
Analyst, Susquehanna

Okay. For my next question, perhaps Wupen, you mentioned the WSS opportunity. I know last year Google was showing off an optical switch. Can you talk about optical switching opportunities with WSS, and if CloudLight has some revenue opportunities there over time?

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

Yeah, WSS is still early, right? We're not routing at the wavelength level yet, right? Right now, it's too early. I'm just giving you kind of a roadmap in mind, right, going into the future. I think, you know, eventually, that DWDM-level connectivity, any point to any point, will probably become needed within data centers, right? But that's years out, though. I wouldn't speculate on when that's gonna become the case, but I do think that optical circuit switching at the spine level, right, at a few-hundred kind of radix, could happen in the next couple of years, right? Because that's a good way of scaling AI machine learning without, you know, really...

I mean, it fits the traffic pattern, and it also lowers the cost, lowers the power consumption. So I do see that happening in the next couple of years, but WSS is probably years out.

Chris Rolland
Analyst, Susquehanna

Okay.

Alan Lowe
CEO, Lumentum

Yeah, so switching an entire fiber is what we're working on that goes into the data center.

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

Correct.

Alan Lowe
CEO, Lumentum

Within the next, you know, 12 months. And I'd say that for wavelength selective switches, when coherent comes into the data center, I think that's when wavelength selective will potentially play a bigger role, but that's still a ways out.

Chris Rolland
Analyst, Susquehanna

Okay, understood. And lastly, vertical integration of your various products into the CloudLight transceiver. If you could talk about those opportunities, and then inorganically, would there be an opportunity to acquire some products to put into that?

Alan Lowe
CEO, Lumentum

Well, I'd say, you know, in the middle of November, we started shipping them samples, and so we're building EML-based transceivers, where most of their products had been silicon photonics or multi-mode. So that's happening, and I think that, you know, as we introduce 1.6 terabit, both the team collaboration on the silicon photonics PIC and the EMLs, based on different customers' requirements, will be going into those products, you know, in calendar 2025 in a more meaningful way. So I think we'll get a pickup of margin as a result of that, as opposed to having CloudLight go on the open market and buy those products. We're evaluating VCSELs as well, for multi-mode applications, and, you know, PDs as well. I mean, that's all happening.

I'd say, you know, I think Wajid had it on his slide: between the $425 million and the $600 million, there was a product in-feed, a vertical integration, and that's probably the same timeframe in which those products would come in and give us a bump in margin.

Speaker 16

Yes, so to add to that. When it comes to silicon photonics, indium phosphide components, or gallium arsenide components, those are capabilities we already have within the company. So it's really a time factor of developing the needed components if we don't have them, or qualifying them with customers if we do have them, for that level of vertical integration. Obviously, you heard about Thailand from a factory and manufacturing standpoint. As for missing pieces, yeah, there may be other little optical widgets that we don't have today, because being in that business from a component standpoint wasn't necessarily attractive. Now that we have an in-house customer, if you will, those are the kinds of things we're looking at.

Wajid Ali
CFO, Lumentum

Next slide, guys. Thanks.

Speaker 12

Hi, management. Good morning. This is Qiqi from Point72. So just a couple of clarifications on the GPU-to-optical ratio. You mentioned it was previously 3 to 3.5, and with the current structure it's 2 to 2.5. So I just wanna clarify: is the 3 to 3.5 referring to the previous H100 HGX version, and now with the NVLink GB200 architecture it's going to be 2 to 2.5?

Speaker 13

I wanna be very careful about those kinds of numbers because-

Speaker 12

Yeah.

Speaker 13

They depend incredibly on the size of the GPU.

Speaker 12

Clusters.

Speaker 13

cluster.

Speaker 12

Right, right.

Speaker 13

You're mentioning specific customers. I mean, there are other customers.

Speaker 12

Okay.

Speaker 13

But I think that the answer is that, you know, the individual transceiver bandwidth is less than the bandwidth of the IO on a GPU, so there's gonna be multiples. Now, it depends on the number of networking levels, if you will. There was a gentleman that asked a question about a recent advancement in switching being able to collapse the number of layers. Yes, that reduces it, but then the whole goal of reducing the number of layers in a cluster is to be able to stack more clusters together. So I think you're gonna see numbers that continue to be multiples, and they're gonna go up and down depending on the size of the cluster and the transceiver line rate relative to the GPU.

Speaker 12

Right. Just, just for the 2-2.5, roughly, that number, is it referring to 800 gig or 1.6 T?

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

It's really, like the gentleman talked about, right? It doesn't matter what the data rate is, right? When you take the network architecture from three layers to two layers, right? So I think that's the ratio. Whether 400G, 800G, or 1.6T, it doesn't matter. It's how you actually put the topology together, right? It's more a structure question than a data rate question.

Speaker 12

Okay. Okay, got it. So, like, the ratio could vary a lot depending on the cluster size of the GPUs, if it's 5,000 or 10,000-

Speaker 13

The larger the cluster, the more links.

The more links.

Which means more transceivers per GPU.

Speaker 12

Right. Okay, got it. Thank you.

Speaker 13

Yeah.

The goal is to build bigger clusters. Right, Keith, don't lose sight of that key point.

Karan Juvekar
Analyst, Morgan Stanley

Hi, Karan Juvekar from Morgan Stanley. So on the slides, you mentioned a pretty great sequential ramp on 800G. Just in terms of 1.6 terabit, do you see the ramp and production timeline as similar to 800G? And I guess, on the revenue opportunity, do you still see a similar step function as you saw with 800G, or do you see any differences between the two?

Alan Lowe
CEO, Lumentum

Yeah, we expect a very similar ramp signature, if you will.

Karan Juvekar
Analyst, Morgan Stanley

Yes.

Alan Lowe
CEO, Lumentum

And I think this is also an AI and machine learning signature. It comes in a burst, and I think we have shown our ability to cope with that, switching from 400 to 800, and we expect to do the same thing at 1.6 T.

Karan Juvekar
Analyst, Morgan Stanley

Okay, that's helpful. And then on the CapEx investments that you mentioned, I guess, is there a lag to the timeline of how that turns into revenue? Is there anything where the CapEx is sort of prioritized that we should think of and how quickly that can turn into revenue?

Dennis Tong
Group VP and General Manager of Cloud Networking Platform, Lumentum Holdings Inc.

I mean, we order it, it comes in four to six months later, and then we put it in service. So that's really the time that it, you know, impacts us. So it can be anywhere from 6 to 12 months.

Karan Juvekar
Analyst, Morgan Stanley

Okay. So the CapEx investments, getting to that $425 million or $600 million quarterly, those aren't baked into those, are they?

Speaker 13

Those are already baked in, in the depreciation calculations, yeah.

Karan Juvekar
Analyst, Morgan Stanley

All right. Thank you.

Speaker 13

Yeah.

Speaker 16

I have a clarification on the timeline that you guys see for 1.6T. I think you said on the last earnings call that you expected to have 1.6T transceivers later this calendar year. In a prior answer, I think you guys said kinda the first half of calendar 2025. Is it that the 1.6T products you see ramping initially, the silicon photonics based and EML based ones, you don't see until the first half of calendar 2025? Or can you level-set that difference?

Alan Lowe
CEO, Lumentum

No, I think, you know, based on specific customer needs, we'll be doing both sampling this summer of 1.6 terabit silicon photonics and EML-based transceivers. The question then becomes: How fast does that turn into qualified transceivers that are ready to ramp? And that's really probably more a calendar 2025 thing, as we iterate and have the customer qualify and be ready for that 1.6 T, which is not trivial.

Speaker 13

Yeah, remember, the context of the question was, when is that step function in revenue? And the point is that once you have the chart and it's done, it looks almost like a light switch being turned on. There is a delay between when we launch and get into the customer and when that light switch turns on.

Alan Lowe
CEO, Lumentum

And by the way, we're getting ready to turn on the light switch at the end of this calendar year. So we'll be ready for those customers if and when they're ready to go in January.

Speaker 16

And there are at least two different major compute platforms that'll be utilizing 200 gig per lane by the second half of this year. The kind of bottlenecks to scaling the Lumentum 1.6T trajectory, is that more a function of CloudLight-specific qualifications at customers? Or is there another industry ecosystem bottleneck, whether it's switching or network interface cards, that kind of pushes the timeline from compute to optics high-volume production?

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

Yeah, it's the latter. It's the latter. The 100G transition to 200G is a very, very big deal, right? So everything has to work. And then, you know, I think the optical portion of it is probably relatively deterministic, because we have already been working on it for a long time, right? But the transition, when you integrate everything together, switch and everything, that can be pretty challenging. So on the turn-on, everybody is increasing the turn-on timing, but I think there are uncertainties there because the whole ecosystem has to be ready.

Speaker 16

Got it.

Wupen Yuen
President of Cloud and Networking Business Unit, Lumentum

And that's the big one, so we'll have to watch very closely.

Speaker 14

Okay, thank you.

Kathy Tai
Head of Investor Relations, Lumentum

Yeah, thanks, Zach. We'll take George as our last question.

George Notter
Analyst, Jefferies

Hi. Thanks. George Notter from Jefferies. I guess I'm curious about what's realistic in terms of market share with CloudLight. You know, you've got a major customer right now; I think you're the third supplier, if I remember right. I think you guys are doing some customization of the product rather than shipping maybe more standard, off-the-shelf type product. Can you just talk about, you know, are we competing for third sources and fourth sources among these other cloud provider customers, like the Google experience? Or do you have an opportunity to kind of displace some of the other transceiver manufacturers that are historical players in the space?

Alan Lowe
CEO, Lumentum

I think it comes down to execution, right? And the doors are open, and it's up to Dennis to execute. And if we execute, we'll be first supplier. And we're not striving to be the third person in the door, where, you know, prices are not as good as the first one in the door. So that's certainly our goal, and I think we're lining up to be able to do that.

Dennis Tong
Group VP and General Manager of Cloud Networking Platform, Lumentum Holdings Inc.

I will not describe ourselves as a third supplier. I think it depends on the product.

George Notter
Analyst, Jefferies

Sure.

Dennis Tong
Group VP and General Manager of Cloud Networking Platform, Lumentum Holdings Inc.

I tend to think that in certain products, we are actually doing quite well. Yeah. And the rest of it, broadening our customer portfolio further, is up to our execution, and I think we have all the right tools to do that.

Speaker 16

And maybe, George, one other way to kind of come at your question is, today, or last year, there was a very limited number of transceiver vendors shipping the kinds of volumes that we're talking about. So now you've asked the question of: Can we steal share from those folks? And I think that's the right way to look at it. Do they have the right profile? As we've highlighted, we're vertically integrated, we have a manufacturing footprint outside of China, and, you know, we've changed the nature of who CloudLight is and who Lumentum is by being one company.

I think those are factors that play into our success, whereas I don't think any of the other top three, let's say, transceiver vendors have made any substantive change in who they are or what they're doing. In fact, the technological challenges and the geopolitical challenges probably put those competitors in a much weaker position moving forward.

George Notter
Analyst, Jefferies

Is there anything different about this market that kind of speaks to higher margin structures? Like, I get the vertical integration discussion, but you know, as I think historically about the transceiver business, it's been a tougher-margin business. You guys exited it at one point. Like, I get it, the scale here is different. You know, AI is here, it's a big driver of all this. But, you know, I think embedded in that 17%-20% operating margin kind of profile that you put out there at $600 million a quarter, I mean, you're implying this business is significantly higher margin than, you know, maybe it has been historically. I guess I'm just curious about your comments on that.

Alan Lowe
CEO, Lumentum

Yeah, I mean, I think it's, you know, not as high as our old gross margins used to be, but the operating margin should be fine, especially with the scale, right? So we spread our R&D and SG&A over much more volume and much more scale. So I think that helps us at the operating margin level. And I do think that there's a big benefit, because, if you remember, our EML chip business is fairly high margin, right? And so that's a product in-feed that now Dennis gets at cost. So I think, from that perspective, that should help significantly on the transceiver margins.

George Notter
Analyst, Jefferies

Thank you.

Kathy Tai
Head of Investor Relations, Lumentum

Okay, great. Thank you, George, and thanks to everyone for attending. If you would like to come to our booth, we're going to have demos at noon today and tomorrow at 10:00 A.M. So we'll be open for investor booth tours at those times.
