Ladies and gentlemen, welcome to the ST Cloud AI Update conference call and live webcast. I am Moira, the conference call operator. I would like to remind you that all participants will be in listen-only mode, and the conference is being recorded. The presentation will be followed by a Q&A session. You can register for questions at any time by pressing star and one on your telephone. For operator assistance, please press star and zero. The conference must not be recorded for publication or broadcast. At this time, it's my pleasure to hand over to Jerome Ramel, EVP, Corporate Development and Integrated External Communications. Please go ahead.
Thank you, Moira. Thank you, everyone, for joining the ST Cloud AI Update conference call. Hosting the call today is Remi El-Ouazzane, President, Microcontrollers, Digital ICs and RF Products Group. This live webcast and the presentation material can be accessed on ST's Investor Relations website. A replay will be available shortly after the conclusion of this call. This call will include forward-looking statements that involve risk factors that could cause ST's results to differ materially from management's expectations and plans. We encourage you to review the safe harbor statement contained in the presentation and in ST's most recent regulatory filings for a full description of these risk factors. To ensure all participants have an opportunity to ask questions during the Q&A session, please limit yourself to one question and a brief follow-up. I'd now like to turn the call over to Remi El-Ouazzane.
Good morning and good afternoon, everybody. Today I'm going to share with you some of our insights into tomorrow's advanced data centers and AI clusters and how our company, ST, is positioned to contribute to them. I'm going to focus on both optical interconnect and power technologies. In this presentation, you will see a lot of compute power and interconnect numbers, but it's good to remind ourselves that this is not just about making ChatGPT faster. High performance computing has the potential to revolutionize society by accelerating advancements across various fields, be it in healthcare, in climate science or in agriculture. This is really what it's all about. I will skip the forward-looking information and go straight into the first part of this presentation.
A handful of global hyperscalers drive the explosive AI data center growth; they are projected to invest more than $700 billion of CapEx in 2026 and in excess of $ billion in 2030. AI servers are creating a structural, I may say once-in-a-lifetime, growth opportunity. You will see why ST is positioned to sharply increase its content and, as such, its revenue in data centers. Indeed, it's the combination of a lot of technological differentiation and well-positioned products that is going to allow us to engage on that market and support its growth. Before we get into some amount of detail, we wanted to put some numbers behind the opportunity. I think at this stage there is no better way than to talk about our SAM per GW of infrastructure.
We see around $230 million. That number is approximate in nature. Should you write $200 or $250, I don't think it would change much. What matters here is the breadth of our offering. This $230 million of SAM is backed up by about 400 products we have that are tuned to address the AI data center business. We can address every critical step that is relevant to us as an IDM. That is power conversion, high-speed connectivity, control, monitoring, and obviously security. Each new AI campus built at this scale, or each step up in rack power and rack bandwidth, will automatically translate into more potential dollar opportunities for our company. Let me deep dive into the first part of my presentation, which is all related to power.
As you know, the rapid growth of AI workloads is driving unprecedented power demands in modern data centers. As AI models become increasingly complex and computationally intensive, the infrastructure supporting these workloads must evolve to deliver amazingly high power densities. We are coming from a world where traditional 54 volt power distribution systems are absolutely reaching their physical and electrical limits. These legacy systems struggle to efficiently supply the massive power requirements of emerging MW-scale AI compute racks, and there are three main challenges they are facing. One is growing XPU compute. Those chips are getting hungrier and hungrier, from 500 W to 1 kW to 2 kW to 3 kW and more. Delivering enough current at low voltages requires massive amounts of copper. Size and space is the second problem.
The thicker the cables and the more conversion stages you have, the less room is left for actual GPU content in a rack. This goes against compute density, which itself means higher TCO and lower ROI. The last piece is thermal management. Every time you convert power, you lose efficiency. Those losses, in this world, translate into heat, which in turn requires more power to cool down. It's a very vicious circle. That's why, to address those challenges, the industry, as you know, is moving to more or less 1 MW per rack. Why?
Simply because by increasing the voltage to 800 volts, the current required for the same power level is reduced, which in turn decreases resistive losses, I²R for those who remember their physics classes, and reduces the amount of copper cabling needed. This transition not only enhances energy efficiency, but will also simplify the overall power infrastructure, enabling more compact, more scalable data center designs that can meet the growing demand of AI workloads while reducing operational costs and, quite importantly, environmental impact. To unlock 1 MW-plus density, ST is putting forward a combination of its most advanced technologies, be it silicon carbide and gallium nitride, as well as our smart power processes like BCD using galvanic isolation. It's not really an upgrade. It is truly a revolution.
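To make the I²R point concrete, here is a back-of-the-envelope sketch in Python. The 1 MW rack power matches the figure discussed above; the cable resistance is a purely illustrative assumption of mine, not an ST or industry figure.

```python
# Why higher distribution voltage slashes resistive losses: P_loss = I^2 * R,
# and since I = P / V, losses fall with the square of the voltage increase.
P_RACK = 1_000_000   # rack power in watts (1 MW, per the presentation)
R_CABLE = 0.001      # assumed cable/busbar resistance in ohms (illustrative)

for v_bus in (54, 800):
    current = P_RACK / v_bus           # amps drawn at this bus voltage
    loss = current ** 2 * R_CABLE      # resistive loss in watts
    print(f"{v_bus:>4} V bus: {current:>8,.0f} A, {loss / 1000:>6.1f} kW lost")

# The ratio of the two losses is (800/54)^2, roughly 220x, whatever
# resistance you assume for the cabling.
```

Whatever resistance you plug in, moving from 54 V to 800 V cuts the cable current by roughly 15x and the resistive loss by roughly 220x, which is why far less copper is needed.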
For ST in particular, it's a massive inflection point, giving us a chance to become an active player in this power conversion market. Let's go to the next one. What is the offering we are supplying for this new paradigm? At the grid level, we leverage our leading position in wide bandgap materials. In the power supply, we address the 800 volt AC-to-DC power shelf application using silicon carbide and GaN, setting the efficiency and power density baseline. Please do know that this stage sits on a sidecar, together with the BBU, the battery backup unit, and also with circuitry that allows the rack to modulate itself as a function of GPU workload.
When this 800 volt DC is sent to the rack, we provide a compact solution for the intermediate bus conversion with high power density, leveraging notably our GaN FETs and allowing us to go from 800 volts down to 54 volts. Now, I'd like to remind you that this is not new. We announced it last October, together with NVIDIA, and our team designed two parts: a hot swap protection circuit, which is a 1,200 volt silicon carbide device, and the power converter. The power converter itself converts the 800 volts distributed throughout the rack into the 54 volts needed by each server.
To provide this capability we announced with NVIDIA a few months ago, we deliver a smartphone-sized footprint that uses 650 volt gallium nitride transistors in a stacked half-bridge configuration. On the last mile, which is what you see on the extreme right of that chart, we are developing silicon-based digital multi-phase controllers and smart power stages down to sub-1 volt, often 0.8 volts, compliant with market-leading XPUs. With that, we can cover the complete data center power chain, those three stages, embedding a full set of power technologies into integrated solutions that support customers from the grid all the way to the core. I insist that this transition from 54 to 800 volts is creating a very nice inflection point for us to go and shine.
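As a sketch of how the three stages above compound, here is a small Python example. The per-stage efficiencies are illustrative placeholders I chose for the arithmetic, not ST specifications.

```python
# Three-stage chain from the grid to the XPU core: each conversion stage
# multiplies in its efficiency, and everything lost along the way becomes
# heat that must itself be cooled (the "vicious circle" described earlier).
stages = [
    ("AC grid -> 800 V DC power shelf (SiC/GaN)", 0.975),
    ("800 V -> 54 V intermediate bus (GaN)",      0.980),
    ("54 V -> 0.8 V last mile (multi-phase)",     0.900),
]

power_w = 1_000_000.0  # 1 MW entering the chain (illustrative)
for name, eff in stages:
    power_w *= eff
    print(f"{name}: {power_w / 1000:8.1f} kW out")

chain_eff = 0.975 * 0.980 * 0.900
print(f"End-to-end efficiency: {chain_eff:.1%}")
```

With these placeholder numbers, the chain delivers about 86% end to end; every point gained at any single stage shows up directly in the rack's compute-per-watt.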
This is it for the power section of my presentation. I'm going to move now to the connectivity piece. Connectivity is the other critical lever to make AI factories truly scalable. As you know, in an AI data center, the interconnect is a high-speed data link that allows thousands of GPUs to, if you wish, talk to each other. Its efficiency directly drives the energy consumed per bit and the job completion time. It's a major cost and performance bottleneck. To build an AI infrastructure, you have to think in three dimensions: scale-across, scale-out, scale-up. Scale-across connects different data centers over long distances. You can make the statement that this has been opticalized, if I can use such a verb.
Scale-out is becoming fully opticalized, and it's all about connecting thousands of those racks together to work as one giant cluster. The next big battle, which is an order of magnitude more connections, is scale-up, and it's all about making one rack more powerful by packing in more GPUs. Today, scale-up is all about copper, but scale-up is, as we will discuss, migrating step by step to optical. Across all three directions, scale-across, scale-out, scale-up, the trend is the same. It's all about using light over fiber instead of copper to move massive amounts of data between AI servers in a very power-efficient manner. This is where cloud optical interconnect and our ScaleX approach, as you see in the title, become a key enabler for scalable AI infrastructure.
Now, let me show you the semiconductor pieces needed to address these trends, because it is not a coincidence that we are becoming a very large player in this market. Maybe I should start with a recap on the way these things work. At both ends of each and every fiber in a data center, there is a transceiver that converts a light signal into an electrical signal and vice versa. Most of the time it is a pluggable object, which one would plug onto a switch or server, allowing a flexible interconnect network to be built. On the left-hand side, you can see that such a transceiver is made of three important semiconductor components.
To be truthful, I conveniently removed one, which is the DSP in charge of modulating the signal and other functions, but we do not produce such a product. That is essentially what Broadcom or Marvell are doing. We have focused on the pieces which are the core of this optical engine and on which we are very engaged. First is an MCU that controls the transceiver operation. Second is an EIC, which stands for electronic IC, that drives the optical source, in this case a laser, but also amplifies signals in reception, RX mode. Last is a photonics IC, often called a PIC, doing the actual conversion of light to electrical signals and vice versa. Now, let's zoom in on NPO and CPO devices, or constructs, if you wish.
NPO stands for near-packaged optics, and CPO stands for co-packaged optics. It's all about devices, or constructs, that bring the fiber directly into the package together with your GPU or your switch unit. As a matter of fact, for NPO and CPO, the exact same building blocks are being used, albeit in a different configuration and with different performance targets. Now you can understand where we come from. ST provides to this system all the related needed silicon technologies: silicon photonics for the PIC, BiCMOS for the EIC, and I will tell you why BiCMOS is by far the best technology, and tailored STM32 MCUs. Now, let me show you more details on the ST technology behind that in the next slide.
Our silicon photonics for the PIC is designed to support 200 Gbps per lane and to scale to 800 Gbps and 1.6 terabit per second optical interconnects. This is our PIC100 process. This platform is manufactured in our core 300 millimeter facility. You may wonder, what's the big deal with 300 millimeter? Well, using 300 millimeter, and in the case of PIC100 a 40 nanometer class lithography, gives you three huge advantages. First, better critical dimension control, the critical dimension being the smallest element in your silicon design, and a uniformity that 200 millimeter wafers cannot touch. Second is much higher yield and more predictable device performance, a key to CMOS digital yield levels.
Last, and in this world of hyper demand, 300 millimeter gives you more dies per wafer, supporting the volume required by the hyperscalers and lowering cost per function. The second part of the story is BiCMOS. Why BiCMOS? You will have to trust me on this, and we can go into details. The figure of merit, the maximum oscillation frequency of a BiCMOS process, is critical in an electronic IC for high-throughput transceivers in the context of data centers and AI infrastructure. Here too, also on a 300 millimeter basis, we have by far the best process on the planet. Last but not least, you need control. You need to be able to control this transceiver, and this is done with a microcontroller. You could ask me why a microcontroller.
It just so happens that the NVM provides super fast, super low latency loop closure, which favors such microcontrollers as the control engine for pluggables, NPO or CPO. Here, the STM32 MCU, the world-leading MCU platform, provides all these benefits to this market as well. Now you understand that we offer the possibility to integrate all of those pieces with advanced packaging technology, so that you can create an optical engine: from process technologies to PIC, EIC and MCUs to an optical engine assembled by us end to end. As a matter of fact, I will challenge you to find another company able to do that today. It's this combination of products and technologies that gives us a strong, scalable position across pluggable, NPO and CPO solutions.
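To put a rough number on the dies-per-wafer advantage mentioned a moment ago, here is a sketch using the standard first-order estimate (gross wafer area divided by die area, minus an edge-loss term). The 50 mm² die size is an arbitrary example of mine, not the actual PIC100 die size.

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """First-order die-per-wafer estimate: gross dies minus edge losses."""
    radius = wafer_diameter_mm / 2
    gross = math.pi * radius ** 2 / die_area_mm2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(gross - edge_loss)

for diameter in (200, 300):
    n = dies_per_wafer(diameter, 50.0)
    print(f"{diameter} mm wafer: ~{n} dies of 50 mm^2")

# 300 mm yields roughly 2.3x the dies of 200 mm: 2.25x from area alone,
# plus proportionally fewer partial dies lost at the wafer edge.
```

The exact counts move with the die size you assume, but the 300 mm advantage, a bit better than the pure 2.25x area ratio, holds for any die small enough to tile the wafer.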
You may have seen that we announced something this morning, and what we announced was two-fold. First, we have announced that we are entering high-volume production of PIC100 on 300 millimeter for leading hyperscalers. Again, it's the combination of our technology platform, which is very unique, with a backside implementation that provides best-in-class performance for the silicon waveguide, built on the superior scale of a 300 millimeter manufacturing line. The combination of both gives us a unique competitive advantage to support the AI infrastructure super cycle. We have planned capacity expansion that will more than quadruple our output by 2027 versus today's basis, and we have further expansion plans in 2028. This acceleration of capacity is not done in a vacuum. It's fully backed by long-term capacity reservation commitments from customers.
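As a quick sanity check on what "more than quadruple by 2027" implies in annual terms, here is a tiny calculation; the two-year window (today's basis to 2027) is my reading of the wording, not a stated ST figure.

```python
# Implied compound annual growth rate if output at least quadruples
# over an assumed two-year window (today's basis through 2027).
multiple = 4.0
years = 2
cagr = multiple ** (1 / years) - 1   # x^(1/n) - 1
print(f"'More than quadruple' over {years} years implies >{cagr:.0%} per year")
```

In other words, quadrupling over two years means at least doubling output every year, which is why the capacity reservations backing it matter.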
The other thing we announced, which is worth noticing, is that we are preparing the next step of the roadmap of our PIC100 process, which we call PIC100 TSV, through-silicon via. It is going to allow future generations of near-packaged and co-packaged optics solutions that require this level of integration to bring optics closer to the compute, not for scale-across or scale-out, but for AI scale-up. It would be difficult not to talk in this presentation today about something we announced a few weeks ago. We have recently expanded a strategic collaboration with AWS through a multi-year, multi-billion dollar commercial engagement, serving several product categories. It's a major milestone to position us in the AI revolution.
This collaboration will establish ST as a strategic supplier of advanced semiconductor technologies and products that AWS will integrate in its compute infrastructure, enabling them to provide better high performance compute instances, reduced operational cost, and the ability to scale compute-intensive workloads more effectively. ST will supply, among other things, specialized capabilities across high bandwidth connectivity, advanced microcontrollers for intelligent infrastructure management, as well as analog and power ICs that deliver the energy efficiency required for hyperscaler data center operations. It's because of this increasing demand in AI data centers, our ability to provide the right products and technology portfolio, and, last but not least, multiple deals we have closed over the past few months or are about to close, that we have recently increased our revenue expectations in data centers, including cloud optical interconnect and power and analog for AI servers, as Jean-Marc explained last week in San Francisco. With all those dynamics, we now believe we can be nicely above $500 million of revenue in 2026, and already well above $1 billion in 2027. A few words to conclude. ST optical technologies are critical for AI infrastructure, for pluggable transceivers, which account for most of the market today, but also for the future fast-ramping near-packaged optics opportunities. Second, ST also has the power and analog technologies, wide bandgap and silicon-based galvanic isolation and smart power, to address the 800 volt inflection point that is currently ongoing. We are, as usual, supporting collaboration with a broad ecosystem, including research labs, ODMs, module vendors, and, as you've seen, the largest of all the hyperscalers.
I repeat, with the current market dynamics, we believe we can now achieve nicely above $500 million of revenue in 2026 and well above $1 billion in 2027. Thank you very much. We are now ready to answer your questions.
We will now begin the question and answer session. Anyone who wishes to ask a question or make a comment may press star and one on their touch-tone telephone. You will hear a tone to confirm that you have entered the queue. If you wish to remove yourself from the question queue, you may press star and two. Participants are requested to use only handsets while asking a question. In the interest of time, please limit yourself to one question only. Anyone who has a question or a comment may press star and one at this time. The first question comes from the line of Joshua Buchalter from TD Cowen. Please go ahead.
Hey, guys. Thank you for taking my question and appreciate you guys hosting the presentation. I was hoping maybe you could provide some more granularity on the composition of the $500 million of business this year and greater than $1 billion next year. You know, any details you can give us on the split between silicon photonics and power and then, you know, even within power across, you know, high voltage, medium voltage versus stage two? Thank you.
Not really, in that case, sorry. We just announced this morning that we are ramping silicon photonics into production, which will start to grow and ramp in revenue this year. We are in volume shipment of power devices for the AC-DC conversion and the battery backup units. The 800 volt DC architecture is pretty new, and it's not in production yet. As you know, we've built reference designs and proof-of-concept boards that we launched last October together with our colleagues at NVIDIA. For low-power DC conversion, we are in production with key customers on the 54 volt to 12 volt.
We have validated samples for a big part of the 12 volt down to the core, but it has not yet been put into production. That gives you a bit of the lay of the land, but we have decided not to break it down at this stage.
Okay, understood. Thank you. If I could follow up, you know, maybe you could speak to which sockets you think are most applicable to silicon carbide and GaN in particular in the medium and high voltage. You know, how meaningful can that be? And how important is having compound semis to winning business across the power tree? Thank you.
That's a great question. Clearly, if you look at this new architecture, from the high voltage grid to 800 volt DC, you will see a fast adoption of SSTs, solid-state transformers, where silicon carbide will be critical. If you look at the 800 volt to 54 volt DC stage, gallium nitride, I think, will play a critical role as a wide bandgap material. GaN has unique properties that lead to low output capacitance and lower on-state resistance, which make it an excellent material when dealing with such high frequency operation.
The moment you land in the 12 volt down to 0.8 volt range, I think you are in the land of silicon, most of the time if not all the time. We are moving away from wide bandgap materials. We have not broken down the $230 million per GW of data center, the slide I showed earlier; it's on purpose. We are looking at it holistically, and we have decided at this stage not to break it down.
Thank you.
Thank you, Josh. Operator, next question, please.
The next question comes from the line of Sandeep Deshpande from JP Morgan. Please go ahead.
Yeah. Hi. Thanks for letting me on, and thanks for doing this event. My question is, first, on the power side. There have been other players in the power market for the data center for many years before ST. What is the technology ST is offering that differentiates it and thus helps it break into more market share in the power market today? Why was ST not a player in the power market in any significant way before these recent announcements?
Yeah, thank you, Sandeep. On the first part of the question, you're absolutely right. Today, if you look at the 54 volt architecture, we have been a marginal player. The transition that is happening is giving us an opportunity to come in. If you allow me, I will repeat what I said earlier, because I may not have been fully clear. First of all, it's all about the amount of power density you can pack into a given volume.
In that case, I think we have something to put on the table. First, there is a hot swap protection circuit that is going to be critical as part of the 800 volt to 54 volt solution. Here, our 1,200 volt silicon carbide devices and our BCD, you know, bipolar-CMOS-DMOS, controllers with galvanic isolation are honestly the perfect fit for that function. We do believe we have a differentiation here, which has proven to be the case. On the power converter, there are two elements: a primary side and a secondary side, I would say. The primary side is all about what we do in gallium nitride, which I explained earlier.
On the secondary side, there are lower-voltage gallium nitride transistors, lower-voltage gate drivers that we master, and our STM32G4 microcontrollers, which we can package together to provide the appropriate solution for the secondary side. Those two stages I've explained, the hot swap part and the power conversion part, are backed up by our silicon carbide, gallium nitride and MCU technologies. Honestly, we are quite competitive. It just so happened that before, they were not really needed, because you were getting into the server rack at 54 volts, right? Those racks were consuming far less power.
This opportunity that is popping up is creating an inflection point, and I insist on the word inflection point, that gives us an opportunity to play in and to compete. I think the other
Understood.
Also-
Sorry, just one quick follow-up. On this $230 million SAM per GW of data center that you're talking about, how much of the SAM is ST actually addressing today in terms of market share, based on, you know, your AWS win and any other wins you've already had?
We have not documented that. We are focused on our SAM. Sorry. You are asking me, let me rephrase because I may have misunderstood your question, Sandeep. Are you asking me what is the SAM of a 1 MW rack?
No, I'm asking you in that $230 million of that SAM. You know, your SAM is $230 million in a 1 GW rack, right? That is what you're saying.
Yes. Correct.
In that $230 million, how much are you already addressing? Because, you know, there is some share you have and some share you still have to take, right?
I understand. You know, you could assume that, by this year, in terms of product availability, we can cover the entire $230 million.
Okay. Thank you.
Thank you, Sandeep. Moira, next question.
The next question comes from the line of Adithya Metuku from HSBC. Please go ahead.
Yeah. Hi, guys. Can you hear me?
Yeah.
Hi, guys. Can you hear me?
Yes, we can.
Yeah. Thank you. Just two questions, please. Firstly, just a clarification. When you have these EICs and PICs, do they tend to come from the same vendor in a transceiver or in a CPO? Also, could you talk a bit about what market share you intend to have, maybe with a medium-term or long-term view, in PICs with your technology, and in EICs as well, with the BiCMOS tech you have?
Okay. That's a very good question. There is technically no forcing function that requires you to use the same vendor for the microcontroller, the EIC and the PIC. What we see more often, assuming no supply constraints whatsoever, is more of a best-of-breed approach. You have to compete to have the best microcontrollers, the best photonics ICs, and the best electronic ICs. I want to make that point first and foremost.
Obviously, the dynamic is changing a bit for the next 2-3 years, because for those technologies you can be guaranteed that there will be a lot of supply challenges in the industry, and that gives us, in exchange, the ability to provide our customers a better service when we control all three pieces, if I can put it that way. In terms of market share for PICs, which was your other question, undeniably, we want to become the market leader. I think market leadership starts at 30%.
Got it. In the EICs?
Actually, on the EIC, this mission has been accomplished. We are the market leader today.
Got it. What share do you have, if I may ask?
As per the same definition I just gave you.
Okay. Got it. Okay. Thank you.
Thanks, Aditya. Moira, next question, please.
The next question comes from the line of Gianmarco Bonacina from Banca Akros. Please go ahead.
Yes. Good afternoon. Just a follow-up on the addressable market. I understand you gave the SAM per GW, but what's your expectation for the total addressable market in billions of dollars? Can you give a range for 2027 and also for outer years? The second question is, given that the bulk of the growth will come beyond 2027, can you also share whether you expect to grow faster than the addressable market, for which I think in the press release this morning you indicated mid-teens growth beyond 2027? Thank you.
I'm going to need you to re-ask your first question to make sure I got it, because it's a spin-off of Sandeep's, and I failed him the first time, so I don't want to fail a second time. Could you please ask it again?
No, I just wanted a follow-up on the SAM, not per GW, but in total billions. What's your internal expectation for the SAM in billion dollars in total?
Okay.
For ST, in 2027 and maybe 2030.
No, it's a great question. To tell you the truth, it's on purpose that we've not done it. I'm going to give you an assignment, which is to go and model for yourself the amount of GW of AI data centers being deployed. The reason we did it like this, in all transparency, is that it is changing constantly, and as such the dollar value is also constantly changing. We decided it was better to normalize per GW.
Okay.
You know, model your way based on your own assumptions. I have to admit that even ourselves, when we look at our forecasts right now, on an every-three-months basis, they change so materially that I feel we may be underestimating the total amount of GW of data centers being deployed as we speak. Forgive me for not directly replying, but at least I wanted you to understand my rationale for why we did it that way, okay?
Okay.
You had a second question?
In-
Please.
In terms of the growth beyond 2027, given that you are still, let's say, scaling up your product line, is it fair to assume that beyond 2027 you will still grow non-linearly, above the expected market growth, because you're probably not yet at your normal market share in 2027?
That, I think, is a fair assumption, on multiple fronts. In photonics ICs, we are doing exactly what you said in terms of catching up to where we want to be from a market share standpoint. We also have, I must admit, a lot of near-packaged optics engagements, which are going to be such an accelerant for us beyond 2027 that we are counting on them. Also, when I look at my colleague Marco and what he's doing on power and analog, I think Marco is spending a lot of time on this new 800 volt architecture.
I expect that 800 volt architecture to start ramping in 2027, and as such, I think we should also see the benefits of that beyond 2027. Your point is correct for those two reasons, I would say.
Thank you.
Thank-
Thank you, Gianmarco. Moira, next question, please.
The next question comes from the line of Amelia Banks from Bank of America. Please go ahead.
Hi. Thank you for taking my question. I was wondering if you could provide some more granularity around how concentrated the long-term capacity reservations are. Are they with a few hyperscalers or are they more diversified than that? Thank you.
It's actually more diversified than that, Amelia. I think you were asking this in the context of what we've announced on silicon photonics. The value chain is a very interesting one, because you have chip-level actors, you have module-level actors, and you have hyperscalers. With those capacity reservation agreements, we are spanning the entire value chain.
Okay. Great. Brilliant.
All right.
Yeah. Thank you. Then just a follow-up, again on the silicon photonics announcement. You're stating that the capacity will more than quadruple by 2027. I'm just wondering if you could clarify how much of that relates to front-end versus back-end manufacturing. Thank you.
Today, this announcement is 100% front-end.
Okay. Perfect. Thank you.
I may add one thing on top of that, Amelia, to help you, because it's something I did not mention in my presentation, but it's very important for you to understand. As you may or may not know, the way we build our Crolles 300 fab is by gateways, which means we can add tranches of capacity as we go. That's a unique advantage. Why? Not only because of the modularity, but because when we add capacity, our customers do not need to requalify. It's a copy-exact of the previous gateway. We can add capacity without asking our customers to requalify. I want to insist that this is a value proposition on photonics that today, I believe, only ST is able to provide, which is not the case for our competitors.
That makes the life of our customers way easier. It is one of the reasons, beyond the 300 millimeter asset, why they have a huge interest in what we're doing. I'm closing the parenthesis, Amelia. I wanted you to have this information as well.
Thank you.
Thanks, Amelia. Next question, Moira.
The next question comes from the line of Stéphane Houri from ODDO BHF. Please go ahead. Mr. Houri, your line is open. You can proceed.
Yes, sorry. Thank you for taking my question. Actually, I wanted to have a bit more visibility on the CapEx involved for this kind of growth. You've been talking a little bit about Crolles 300. Looking at your CapEx budget, if the growth continues as you expect, what kind of addition to the CapEx will you have to make? And is only Crolles 300 involved in this CapEx expansion? Thank you very much.
Good question, Stéphane. I will walk backwards. Right now, yes, Crolles 300 is the only location where we have concentrated our silicon photonics activity, for one of the reasons I described before: this ability to systematically expand without our customers needing to requalify. Can it be the only answer? Could we eventually see silicon photonics in Agrate 12-inch? Absolutely, yes. There is nothing preventing us from doing so. In terms of dollarizing the CapEx, we will not do it.
What I can tell you, though, to help go in your direction, is that to support the growth we are seeing now, we came in October 2025 with a CapEx plan of, I believe, $2 billion-$2.2 billion of CapEx in 2026, and so on for 2027 and 2028, which we have not shared. What we are obliged to do at this stage, and we are now in March 2026, is to accelerate part of our CapEx, sticking to our $2 billion-$2.2 billion envelope but remixing it, because we need to accelerate silicon photonics. That is something we are, quote-unquote, forced to do, and for a good reason. I have no absolute CapEx dollar number to show you.
Okay. If I can have a quick follow-up. Actually, when you speak with people like Infineon, they describe a power AI market of $10 billion by 2030, and they think they can have a market share of at least 30%-40%. Your AI opportunity is, let's say, more diversified, with silicon photonics, power AI, et cetera. Can you maybe give us some visibility, by market, on what kind of size you see and what kind of market share? Thank you.
This is what Gianmarco was also asking me earlier. I've been shying away from throwing out those numbers because you know what happens: we share numbers, then models get built on them, then you get into market share. The market share can seem horribly high. I think that is actually what happened to our colleagues at Infineon recently. Then you start to question whether all those numbers are stable, and they may or may not be, because this market is evolving at such a pace. We've shied away from that. We'll let you make your own model in terms of overall AI data center deployment.
We provide you this $230 million, inclusive only of what we do on power and cloud optical interconnect, which gives us a proxy. Right now that's what we feel most comfortable sharing, together with, obviously, the numbers we have shared in terms of being nicely above $500 million of revenue in 2026 and well above $1 billion in 2027. All of that is backed up by the deals: the deals that you know of, the deal with NNS, and many more deals that you don't know yet.
Okay, thank you very much.
Thank you, Stéphane. Moira, next question, please.
As a reminder for any further questions, please press star and one. We have a follow-up question from Adithya Metuku from HSBC. Please go ahead.
Yeah, thank you, guys. Just two more questions. Firstly, Remi, I just wondered if you could talk a bit about the scale-up opportunity. Some of your peers in the laser market have talked about that being a much bigger opportunity, like by an order of magnitude. I just wondered if you could give us some sense of how much your SAM would expand from this $250 billion or $230 billion you've talked about if you include the scale-up opportunity, and when this ramp could happen. Is it a 2027 story? Is it a 2028 story? You have your peer Broadcom saying one thing, Credo saying one thing, NVIDIA saying another thing. What is your view here? I've got a follow-up.
Yeah. It did not escape me that there is a lot of spin right now on copper versus non-copper in terms of what is going to be the winning recipe. To tell you the truth, we are involved everywhere, in some shape or form. Obviously, the dollar content for us is way higher when it comes to silicon photonics, but I will try to remain objective.
I think that the death of copper from a scale-up standpoint is only a question of when, not a question of if. It's due to gazillions of parameters, but think of power consumption per bit, and think as well of the ability to pack way more compute density into a rack. It's a question of when, not if. Similarly to EML for connectivity, people will try to push and move to 448 Gbps SerDes. Actually, some are doing that, but it's only bi-directional. There will be a lot of tricks to try to survive, but it's going to happen.
In terms of opportunity size, it's pretty simple: you multiply by 2, more or less. You take pluggable, you multiply by 2, and that gives you the near-package optics and co-packaged optics market size. Okay? That's how big it will become. NPO versus CPO, near-package optics versus co-packaged optics: I'm a believer that for the next five years the bulk of the business will be on near-package optics, just because of RAS, reliability, accessibility, serviceability, in a data center. It gives hyperscalers way more flexibility on RAS than what they can do with CPO. NPO will be the winner for the five years to come.
The second question was when you will see NPO. This I will tell you very firmly: you will start to see NPO from the second half of 2027 for AI clusters. That is going to happen. Is it going to be a hard switch, 0 to 100? The answer is no, because you will see new architectures being introduced, with both copper technology and optical technology, before they move fully to optical. When could a scale-up rack be 100% optical-based, with no more copper at all? Pick your poison between 2029 and 2030; let's take 2030 to be on the safe side. Starting to ramp in the second half of 2027, with 100% coverage in 2030, is my opinion.
Got it. That's very clear. Just to clarify, you said the opportunity would be 2x. Is that twice the $230 million you gave? If I include scale-up, then basically it should be $460 million. Have I understood correctly?
That's a good question, and actually a very clever one, which I cannot answer because I didn't give you the breakdown of the $230 million. When this happens, I think the $230 million will grow for sure. The $230 million we gave you is only based on pluggable optics, so that's a good clarification question you just asked. We have not included the growth of NPO; this will come on top of the $230 million. Now I'm stuck, you know. I cannot give you any numbers, because then you could go and look.
I will have to find a way to answer this question more elegantly next time.
Okay, got it. Then secondly, on the bottlenecks in networking today, or optical networking today: where are they? From what you see today, where are the bottlenecks?
I think there is a bottleneck of today and a bottleneck of tomorrow. If you ask anybody today where the bottleneck is, they may tell you lasers. If you ask people what could be the bottleneck tomorrow, they will tell you photonics. Why? Because lasers are lasers, are lasers, are lasers; they are used everywhere. You may need more of them because it's EML and it's 1 laser per lane, or you may need fewer of them because it's silicon photonics and it's 1 laser for 2, 4 or 8 lanes. Either way, you need lasers. What's happening is that the transition to 1.6T is completely accelerating, and 1.6T will be 80% photonics.
Now, if there is an explosion in terms of GW deployment at 1.6T, the pressure on photonics will be very high, which kind of explains why we're doing what we're doing, and why we are thinking of doing even more in that very context.
Got it. Very clear. When you say tomorrow, is that 2027 or 2028?
Yeah, that's the decode. I will not be able to give you anything more accurate than what you just said.
Okay. Got it. Thank you.
Thank you, Adithya.
Moira, are there any other questions?
There are no more questions at this time. I would now like to turn the conference back over to Jerome Ramel for any closing remarks.
Yeah, thank you, Moira. I think this ends our call. Thank you all very much for being with us. We remain at your disposal for any follow-up questions. We look forward to hosting you on March 16th for our conference call, ST Intelligent Sensing Enabling Physical AI, with Marco Cassis. Have a nice day. Thanks.
Ladies and gentlemen, the conference is now over. Thank you for choosing Chorus Call, and thank you for participating in the conference. You may now disconnect your lines. Goodbye.