Good morning. It's my pleasure to welcome you to Corning's investor event at the New York Stock Exchange. I'd also like to extend a welcome to everyone joining us by webcast. Before we begin our formal presentations, I'd like to remind you that today's remarks contain forward-looking statements that fall within the meaning of the Private Securities Litigation Reform Act of 1995. These statements involve risks, uncertainties, and other factors that could cause actual results to differ materially. These factors are detailed in the company's financial reports. You should also note that we'll be discussing our results using core performance measures, unless we specifically indicate our comments relate to GAAP data. Our core performance measures are non-GAAP measures used by management to analyze the business.
You can find a reconciliation of core results to the comparable GAAP value on the investor relations section of our website at corning.com. Now, we have an exciting agenda for you today. First, Wendell Weeks, Chairman, Chief Executive Officer, and President, will kick off with an upgrade to and an extension of our Springboard plan and announce a new phase of accelerating growth for Corning. Second, you'll hear from Michael O'Day, Senior Vice President and General Manager of Optical Communications, who will detail our latest growth opportunities within enterprise and Photonics. Third, Ed Schlesinger, Executive Vice President and Chief Financial Officer, will share a financial perspective on what you've heard today. Lastly, Wendell will come back on stage to close out the formal presentations.
From there, we'll move to Q&A, and following Q&A, attendees at the stock exchange will have the opportunity to connect with presenters and other Corning leaders at the demo exhibits. We hope you enjoy the day, and we look forward to engaging with you. Now I'll turn the podium over to Wendell Weeks.
Welcome, everyone. It's great to have you here with us today, and obviously, as you've seen, we have a lot of exciting news to share. Even better, you're gonna get a chance to see some of the key innovations that are driving our success, and more importantly, you're gonna get a chance to meet some of the people who are helping bring it all to life. Let's jump right into the headlines. Corning is entering a new phase of accelerating organic growth in 2027, driven by growth across our market access platforms. We are upgrading and extending our Springboard plan to achieve a $40 billion annualized sales run rate by the end of 2030.
Today, we will focus on our overall corporate outlook for this exciting period, and we'll also take a deeper dive into the technical trends driving our new Photonics map, as well as the stronger growth in our enterprise networks market access platform. Before we get started, I'll also note that we just announced a long-term technology and commercial partnership with NVIDIA. Needless to say, this partnership creates a significant opportunity for growth, for new innovations, and for new advanced manufacturing platforms, including many right here in the U.S. It also highlights our opportunity with our new GenAI OEM customers, and you'll hear more about our Photonics map throughout the day. We have a lot to talk about. Let's get started with the upgrade to our Springboard plan.
Two and a half years ago, when we introduced Springboard, we shared both our internal plan and a high-confidence plan. As a reminder, our internal plans, which I will focus on today, are the output of the strategic planning process that we run with each of our market access platforms. These are our actual business plans, and we set our objectives and our compensation based upon those plans. When our businesses submit plans to corporate, they factor in a variety of probabilistic outcomes. They try to account for the known unknowns. We then apply a corporate-level risk adjustment to translate our internal plans into a high-confidence plan for our investors, which Ed is going to cover in more detail later today.
At the corporate level, we seek to probabilistically adjust for factors including macroeconomics, changes in government policy, and timing of multiple secular trends and our related innovations and their potential success. When we introduced Springboard, we were running at a $13 billion annualized sales run rate in the fourth quarter of 2023. We shared our plan to capture a significant sales opportunity driven by cyclical and secular trends. We shared our internal plan to capture a $5 billion revenue spring by the end of 2026, leading us to an $18 billion revenue run rate. Importantly, we also shared that since we already had the required production capacity and technical capabilities in place to deliver the sales growth, and the cost and capital were already reflected in our financials, we expected to deliver powerful incrementals.
We shared a target to improve operating margin from 16% to 20% by the end of 2026. We said we planned to grow EPS faster than sales. We also shared our plan to add $8 billion by the end of 2028, leading to a $21 billion run rate. We also shared by market access platform where our growth would come from. We provided updates as we reached key milestones. Two years into Springboard, we've outperformed our plan, and we've transformed the financial profile of the company. We grew our sales run rate by 35%. We expanded operating margin by 390 basis points to 20.2%. We grew EPS 85% to $0.72. We expanded ROIC 540 basis points to 14.2%.
We also nearly doubled free cash flow in 2025 to $1.72 billion from $880 million in 2023. Overall, we established a new launch point for highly profitable future growth. On our January earnings call, we had just closed out 2025 at a $17.6 billion run rate. We upgraded our internal Springboard plan to add $6.5 billion by the end of this year, which would bring us to a $20 billion run rate. We also upgraded our internal plan to add $11 billion by the end of 2028, which would bring us to a $24 billion run rate. That is where we pick up today. In 2027, we are entering a new phase of Springboard with accelerating organic growth.
We are upgrading our Springboard plan and extending it through 2030. First, we're going to need a bigger scale for the upgrade. We are again upgrading our Springboard plan to now reach a $30 billion run rate by the end of 2028. That is a new Springboard of $17 billion, a significant increase from the $11 billion Springboard that we just shared with you all in January. Just by the way, it's more than double our original $8 billion Springboard for this time period. We are also extending our plan. We believe we can become a $40 billion company by the end of 2030. That's a $27 billion Springboard from the start of Springboard. Here's the complete plan. Springboard is entering a phase of accelerating organic growth.
We delivered a CAGR of 15% in the first phase of Springboard. Entering 2027, we expect to grow at a CAGR of 19%. That is a 400 basis point acceleration. That is why we're here today. We're providing a significant upgrade. Let's just take a moment. That's a lot to take in. That's a very big set of numbers and a pretty significant amount of change. We have growth drivers across all of our market access platforms, but we're not gonna dive into everything today. We plan to continue our established Springboard approach of frequent updates for our investors, with deeper dives into individual maps as they hit significant milestones. Today, I'm gonna share some of the macro drivers of the plan, and we are going to dive into our enterprise and Photonics maps.
Ed will share how we think about our high-confidence plan, profitability, and capital allocation. Let's start with some of the key assumptions in our plan. For 2027 to 2030, we incorporated a forward rate of JPY 150 per US dollar to account for the weaker yen. We planned for flat TV, IT, and smartphone end markets, and the impact of higher memory prices. We planned for declining ICE demand offset by increasing Corning auto content. We also plan to capture a larger solar opportunity with an upgraded sales outlook, overcoming near-term ramp challenges. We included new innovations and form factors in Gorilla Glass, and we see accelerating growth in fiber to the home and data center interconnect in our carrier map. With that broad context, let's unpack our Springboard upgrade just a little bit further.
In the broadest terms, this is what we think our company will look like. First, as we just shared in our assumptions, we expect consumer electronics, solar, carrier, auto, and life sciences all to grow. In aggregate, we are planning for a mid-single-digit CAGR for those maps. As I said, today we're gonna do a deeper dive into the technical drivers behind our opportunity for growth in enterprise and Photonics, and you will also hear more about this from Mike in just a moment. What I'd like to do now is address some of those drivers in a more macro way. Our focus today will be on GPU cluster size increasing very rapidly in scale-out, the optical scale-up network beginning, and Corning optics moving inside the box.
Starting in enterprise, we have the opportunity to grow faster than the rate of GPU growth, driven by the technical factors that increase optical in the data center. At the most basic level, assuming no changes to the network, we would grow as GPUs grow. Remember, in these networks, each GPU must be connected to every other GPU, and that is what establishes the neural network. You all will have your own opinion on what the rate of growth of GPUs will be. The insight that we'd like to share today is some of the potential network changes that offer us the opportunity to grow faster than GPUs in our enterprise map to begin, and we will cover the technical drivers, the logic, and the impact of each. The first driver is cluster size growth.
The logic is that cluster sizes greater than 130,000 GPUs will require a third optical layer. As clusters grow, that is good for our content opportunity. When clusters get larger than 130,000 GPUs, a third switch layer is added to connect all of the GPUs to each other. Let's take a deeper look at how this actually works. As shown here, once cluster sizes get above 130,000 GPUs, we exceed the network scale capability that can be achieved with a 512 radix switch with 2 layers. That adds a third layer. Basically, 3 layers divided by 2 layers yields 50% more content.
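To make that layer math concrete, here is a small sketch using the standard non-blocking fat-tree capacity formula. This is an approximation we're assuming for illustration; real deployable limits depend on the exact topology and oversubscription.

```python
def max_gpus(radix: int, layers: int) -> int:
    """Approximate endpoint capacity of a non-blocking fat-tree:
    radix**L // 2**(L - 1) endpoints for L switching layers."""
    return radix ** layers // 2 ** (layers - 1)

# A 512-radix switch caps a 2-layer network at 512**2 / 2 = 131,072 GPUs,
# which is where the roughly 130,000-GPU threshold comes from.
two_layer = max_gpus(512, 2)    # 131,072
three_layer = max_gpus(512, 3)  # far beyond any announced cluster size

# Content impact of the extra layer: 3 layers vs. 2 layers of optical
# links is roughly 50% more fiber content per GPU.
content_multiplier = 3 / 2  # 1.5
```

Crossing the two-layer ceiling forces the third switch layer, and with it the roughly 50% content step-up described above.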
Mike will be up in a moment to talk about how we think about the mix of larger clusters for data center builds in the future based upon our own models. Overall, cluster size growth is a positive impact relative to GPU growth. Let's turn to the second driver. The second driver is bandwidth growth. Historically, GPU and ASIC bandwidth doubles about every two years. We link those together through a combination of lane rate and number of lanes. Typically, this is a neutral to positive impact depending on SerDes cycles. We can increase bandwidth either by increasing the lane rate, or SerDes speed, which has a neutral impact on fiber content, or by increasing the quantity of lanes, which has a positive impact on fiber content.
You can see when we move from Hopper to Blackwell, the SerDes stayed the same at 100 G, but the bandwidth needed to double, thus requiring that we increase the fibers from 8 to 16, doubling the amount of our potential optical connectivity content. As we are moving into the Rubin era of GPU architectures, we see a jump in SerDes to 200 G. Thus, we're able to keep the lane quantity consistent, resulting in a neutral impact on fiber content. Feynman likely won't be the primary system until the 2029-2030 timeframe. There's still a lot that we don't know about it, but we do know that its bandwidth will double, and if it follows past patterns and stays at 200 G, the number of lanes would double as that bandwidth doubles, and that would double fiber again.
If 400G SerDes is available and reliable, the fiber content would be neutral or no change. Likewise, there are always other optical schemes which can be used to try to increase fiber efficiency, such as BiDi, which can also reduce the need to increase optical content. All of this is yet to be adjudicated. We'll know a lot more in a year or so, but the main takeaway is that bandwidth is neutral or very positive for us. The third driver is scale-up. Today, this is 100% copper, but optical is beginning to penetrate the scale-up network, and this adds an entirely new optical network. While the timing of adoption and penetration are very difficult to predict, the size of the opportunity for an increase in optical content is quite large.
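The lane-and-fiber arithmetic behind this driver can be sketched as follows. Note that the per-GPU bandwidth figures for Hopper and Blackwell are inferred here from the 8-to-16 fiber doubling described above, so treat them as illustrative assumptions rather than official specifications:

```python
def fibers_per_gpu(bandwidth_gbps: int, serdes_gbps: int) -> int:
    """Each lane runs at the SerDes rate and needs 2 fibers
    (one transmit, one receive)."""
    lanes = bandwidth_gbps // serdes_gbps
    return lanes * 2

# Hopper -> Blackwell: SerDes holds at 100G while bandwidth doubles,
# so lane count and fiber content double.
hopper = fibers_per_gpu(400, 100)     # 8 fibers (assumed 400G baseline)
blackwell = fibers_per_gpu(800, 100)  # 16 fibers

# Rubin: bandwidth doubles again, but SerDes jumps to 200G,
# so fiber content is neutral.
rubin = fibers_per_gpu(1600, 200)     # 16 fibers

# Feynman (speculative): if bandwidth doubles and SerDes stays at
# 200G, fiber doubles; with 400G SerDes it would be neutral.
feynman_200g = fibers_per_gpu(3200, 200)  # 32 fibers
feynman_400g = fibers_per_gpu(3200, 400)  # 16 fibers
```

The same two-variable trade-off, lane rate versus lane count, is what makes each SerDes generation either neutral or doubling for fiber content.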
First, let's consider together what has been announced regarding optical scale up. Recently, NVIDIA announced a Vera Rubin Ultra configuration which will scale up to 576 GPUs in 8 separate racks. Each rack will have 72 Rubin Ultra GPUs, which are interconnected with copper and then extended rack to rack with direct optical connections. This is a transition step to optical that is effectively a hybrid system, and this hybrid system is what has been announced as an approach to scale up. Optical is now playing a role. The percent of optical ports has not yet been announced publicly. What has been announced is the scale out bandwidth of 1.6 terabits per second and the scale up bandwidth for the individual GPU, which will be 14.4 terabits per second. With those two pieces of data, we can bracket the opportunity.
At the lowest end, we can assume 100% of the scale-up network will be done as it is today, and that's with copper. What this translates to is the same opportunity we have today, which is no fiber in scale-up and 16 fibers per GPU in scale-out. At 200G SerDes, this will translate into 8 lanes for scale-out and 72 lanes for scale-up. Let's compare that to a fully optical scale-up system. We take the same 14.4 terabits per second bandwidth for scale-up and the 1.6 terabits per second bandwidth for scale-out, and we divide them by 200G SerDes. This will translate into 72 lanes and 8 lanes respectively, each requiring 2 fibers.
This results in 144 fibers needed to support the scale-up bandwidth and 16 fibers to support the scale-out bandwidth. When we combine these demands, we get a total fiber content of 160 fibers per GPU, which is 10 times the fiber count of the current scale-out network. What do we know for sure? Well, we know that neither of those two cases will be the hybrid system that was just announced. It will be somewhere in between. To be exact on the opportunity, we would need to know both the percentage of optical ports in the offering and the extent to which these new hybrid optical scale-up nodes penetrate the AI factories of the future.
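Using the announced per-GPU bandwidth figures, the fully optical end of that bracket works out like this:

```python
SERDES_GBPS = 200
SCALE_UP_GBPS = 14_400   # 14.4 Tbps per GPU, scale-up (announced)
SCALE_OUT_GBPS = 1_600   # 1.6 Tbps per GPU, scale-out (announced)

scale_up_lanes = SCALE_UP_GBPS // SERDES_GBPS    # 72 lanes
scale_out_lanes = SCALE_OUT_GBPS // SERDES_GBPS  # 8 lanes

# Each lane requires 2 fibers (transmit + receive).
scale_up_fibers = scale_up_lanes * 2    # 144
scale_out_fibers = scale_out_lanes * 2  # 16
total_fibers = scale_up_fibers + scale_out_fibers  # 160 per GPU

# Versus today's scale-out-only content of 16 fibers per GPU.
multiple = total_fibers // scale_out_fibers  # 10x
```

The hybrid system lands somewhere between the 16-fiber copper case and this 160-fiber fully optical case, depending on the percentage of optical ports.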
Regretfully, I cannot share with you the first because it's confidential. No one knows for sure what the answer is to the second, which is how successful will these be? It is clearly a very large opportunity for us. This is a topic that generates much technical debate. You will be able to get your own point of view by engaging with experts. When I put all of these technical drivers together and focus on the near term, we calculate that the demand for optical content per GPU in our enterprise map will increase by 1.3 to 1.5 times by 2028. As we head into 2030, you can see as I have shared, that number could head much higher.
Much of this is driven by the scale-up opportunity very quickly increasing, which leads us to our next incremental opportunity beyond that enterprise growth: it takes us inside the box. I just walked you through how scale-up creates a significant opportunity for us in our connectivity business and enterprise. Scale-up also supercharges our opportunity to bring our optics expertise inside the box, and that is what our new Photonics map is all about. The Photonics map serves as our platform for serving a new class of customers. We're bringing optics inside the box for a new generation of technology: co-packaged optics and what's called near-package optics. These technologies will get their start in the scale-out network. It is clear that scale-up drives a dramatic increase in size and scale.
Optical scale-up is new tech that will likely have an exponential adoption curve, which is great, but that also leads to significant timing challenges that are very difficult to predict. When this starts, and its rate of penetration, drive a very large range of potential outcomes. Based on our assumptions and our discussions with customers, we believe we have the opportunity for a new $10 billion market access platform by 2030. Essentially, new inside-the-box optical functions create the opportunity for Corning passive photonics to manage light. Historically, we've had no inside-the-box content, and what's happening here, as you'll hear from Mike in a moment, is that because of the potential for improvement in latency, faceplate density, power, and reliability, customers are looking for the opportunity to move away from pluggables and toward co-packaged optics and near-package optics.
As you can see in this diagram, light creation, modulation, and delivery of the encoded optical signal move inside the box at the Silicon photonics optical engine. Everything you see here in yellow is potential Corning content where none existed inside the box before, and this creates an opportunity for Corning to supply these passive photonics required to move and manage the light. Obviously a very exciting time. Let me pass it over to Mike to explain more. Mike.
Thank you, Wendell. Good morning, everyone. I'm excited to be with you today during a moment of extraordinary opportunity in our Optical Communications business. I'll walk you through how we're going to capture these opportunities and deliver our upgraded Springboard plan. Today, we'll focus on our enterprise and Photonics market access platforms that fuel the majority of our near-term growth. We'll cover 3 drivers for our upgrade. First, as AI models grow more complex, they need larger GPU clusters. This growth requires a new scale-out optical layer. Second, inferencing workloads put latency at center stage, and this creates a new optical network called scale-up. Finally, we are bringing Corning content inside the box all the way to the chip with co-packaged optics. This is the foundation for our new Photonics map. Let's jump in.
We'll start with the first driver I mentioned, network scale-out and the rapid increase in GPU cluster size. The development of larger, more complex AI models is driving a 10x increase in parameters each year. This exponential growth in AI workloads is outpacing individual GPU memory and compute capabilities. Training the models faster and inferencing more tokens per second requires massive parallel processing, where immense models and datasets are distributed across multiple GPUs. This parallel processing is what drives scale-out of the network, creating the need for bigger GPU clusters. Per NVIDIA, there are now 4 main scaling laws driving the need for more intelligent AI infrastructure. In this case, intelligence means bigger, smarter, and more efficient clusters built from more capable GPUs. The first two scaling laws are both training related and have been around for some time.
Pre-training, where models are trained on enormous datasets, and post-training, where models are refined for specific tasks and improved reasoning. More recently, inference has started to drive the need for larger GPU clusters. We see this in test-time scaling with longer thinking to generate better results from mixture of experts models, and the latest development, which is agentic scaling, where AI systems communicate with each other directly. Both require larger, low latency domains and massive memory at scale. In response to these scaling laws, leading tech companies are moving from AI data centers containing tens of thousands of GPUs to AI factories with hundreds of thousands of GPUs, and eventually cluster sizes of more than 1 million. Not that long ago, many clusters could fit into a single modestly sized data center.
As cluster growth continues, we are starting to see extremely large campuses purpose-built to house AI factories in a single location. The sampling that you see on the chart of leading clusters over the years shows how quickly cluster size has grown and how projects in the pipeline continue to scale higher. This leads to our scale-out opportunity hypothesis. Not only will we see more and more GPUs deployed, but they will aggregate into increasingly large clusters. If true, we would expect to see more power coming online with evidence that campuses are growing in their ability to support larger clusters. This is exactly what we are seeing. Most sources project that new incremental AI power coming online per year will double from 2025 to 2028. Likewise, the rate of GPUs deployed over this period roughly follows the same trajectory.
This is especially interesting for Corning because as clusters grow faster than the net switch bandwidth, we need to connect those large GPU clusters by adding another layer to the network. Most scale-out networks today have two optical switching layers. A two-layer network starts at the GPU, and every GPU has a fiber connection to the leaf switch at the end of the row. This is called the GPU to leaf link and is layer one. Layer two connects the leaf switch to the spine switch, and this link typically travels down the data hall to aggregate all the rows together in an overhead cable tray system. With the latest switches, a network can remain at two layers up to 130,000 GPUs.
However, when a cluster grows beyond this, it forces a third layer in a non-blocking architecture. When this third layer is needed, it means more Corning, increasing our content by 50% as we go from 2 layers to 3. You often see this third layer connecting many data halls as you stitch together multiple 2-layer networks. In doing so, you create a massive cluster of GPUs. To quantify how much of the market needs an additional switching layer, we analyzed leading data center construction datasets. These show how much power is being concentrated into large campuses that can support bigger GPU clusters. We then built a proprietary model from this data and assumed that campus power could support a single campus GPU cluster. Based on recent giga-campus announcements and the requirements of frontier AI models, we expected this amount to rise in the future.
You can see what we learned. We will continue to use our model to project the real-world growth in cluster size and therefore the growing "more Corning" opportunity. Now, let's turn to our next optical network, scale-up. The optical scale-up network drives significant improvements in latency, and when it comes to AI inferencing, latency is more critical than ever as AI node sizes increase. Inference is the new AI workload, and the industry's understanding of the infrastructure required to support inferencing has rapidly evolved due to 4 recent developments. First, mixture of experts models, which require more memory and bandwidth for complex communication and synchronization. Second, pre-fill and decode, which needs lower latency and higher throughput. Third, reasoning models, which require more GPUs in the low latency domain. And fourth, agentic systems with massive memory and storage requirements.
These new inference workloads must operate in a larger low latency GPU domain with significant amounts of memory to maximize tokens per second. This is why node sizes are increasing. What is a node? A node is a collective of GPUs acting as one large accelerator interconnected by a high bandwidth, low latency, all-to-all scale-up network. Today, a node is confined to a single rack with a maximum of 72 GPUs connected through a copper scale-up network with about 120 nanoseconds of data transfer latency. To create larger nodes, we need to connect more GPUs.
You might ask, "Can I connect more NVL72 racks through the scale-out network to create a bigger node?" Yes, but communication between racks via the scale-out network incurs a greater than 10x latency penalty of 1,500 nanoseconds or even more, which causes data transfer and synchronization delays that reduce GPU utilization and overall cluster performance. This is what limits the current low latency domain to a 72 GPU node. Another approach, the subject of much debate right now, is extending the low-latency copper scale-up network between racks to create bigger nodes. You might ask, "Well, will that work?" It might be feasible for a 2-rack, 144 GPU system, and maybe even a 288 GPU scale-up network.
The latency requirements, data rates, and distances involved for 576 GPUs and eventually 1,152 GPU multi-rack nodes push these links beyond roughly 100 gigabit-meters, and that's when the application crosses what we call the electrical-to-optical divide. At that point, copper simply runs out of gas; we move beyond its practical limits, and the transition to optical becomes inevitable. The new Rubin Ultra NVL576 platform solves these problems by leveraging a new optical scale-up network between racks using the Dragonfly architecture you see in the bottom left. This expands the low latency domain from 72 to 576 GPUs. Latency within this multi-rack node is only 320 nanoseconds. That's a greater than five times improvement compared to the scale-out network.
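Putting the quoted latency figures side by side (numbers as stated here; real deployments will vary):

```python
latency_ns = {
    "copper_in_rack": 120,    # NVL72 copper scale-up, single rack
    "optical_scale_up": 320,  # Rubin Ultra multi-rack Dragonfly node
    "scale_out": 1_500,       # rack-to-rack via scale-out, or more
}

# Crossing racks over the scale-out network costs more than 10x
# the in-rack copper latency.
scale_out_penalty = latency_ns["scale_out"] / latency_ns["copper_in_rack"]  # 12.5

# Optical scale-up recovers most of that gap. With the 1,500 ns
# baseline the ratio is about 4.7x; since scale-out latency can run
# higher than 1,500 ns, the improvement can exceed 5x.
improvement = latency_ns["scale_out"] / latency_ns["optical_scale_up"]
```

This latency gap is the technical argument for why larger nodes favor an optical scale-up network over stitching racks together through scale-out.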
The addition of these optical links to the scale-up network creates an incremental opportunity for Corning. What does that look like? Well, everywhere you see yellow, you see Corning. When we add the optical scale-up network to interconnect all the switches in every GPU rack, well, we add a lot more yellow. As you heard from Wendell, there is much debate across the industry around the timing and implementation of optical scale up, what we call outside the box, especially around the year 2028, which is considered the starting point by many. You'll have to make your own assumptions here, but here is how we are thinking about it. First, customers have to make a basic yes or no decision. Will they adopt any multi-rack optical scale up?
If the answer is yes, they have adopted optical scale-up, and this is where views on timing vary significantly. Our point of view is that adoption begins to accelerate in the 2028 to 2030 time period. By 2030, that yes-or-no decision for optical scale-up will shift more toward yes due to the factors I just covered. Second, we need to consider this. When people talk about optical scale-up, sometimes they're talking about the number of ports that will be optical. The first phase of adoption will likely have hybrid optical and copper in the scale-up switch, but this varies widely by customer. Even with only a portion of the ports being optical on the switch, this adds significantly more opportunity than the third scale-out layer that I just told you about.
You'll likely be hearing a lot more about optical scale up as we expect significant technical developments in the near future. As Wendell shared earlier, these technical drivers behind scale out and scale up create an opportunity for Corning to grow our enterprise segment 1.3 to 1.5 times faster than GPU growth. This is what drives our confidence in upgrading our enterprise Springboard plan today. Let's move to our newest opportunity inside the box. Our newest growth engine is co-packaged or near-package optics, part of a newly formed market access platform that we're calling Photonics. This new Photonics map will leverage Corning's best-in-the-world expertise in fiber, cable, and connectivity to capture emerging GenAI growth opportunities by bringing Corning optics into the box. What is driving the adoption of CPO switches? Latency, power, density, and reliability.
We discussed the importance of latency in the previous section. As you know, power is usually cited as the biggest bottleneck for AI deployments. Space also matters. You don't want to take up valuable rack space with switches that are bigger than they need to be. Reliability is perhaps the most critical. Think of this: if a link stops working, it can make a GPU go idle, which can bring down an entire node of over 500 or 1,000 GPUs; that can impact the utilization and performance per watt of the entire cluster. What's happening? In a traditional switch with pluggable transceivers, light creation, modulation, and the delivery of the encoded optical signal take place outside the box at the pluggable transceiver, where Corning products typically connect today. Historically, we've had no inside-the-box content.
In a co-packaged or near-package optics switch, the functionality of the pluggable transceiver moves inside the box and is performed by a Silicon photonics optical engine. This creates an opportunity for Corning to supply all the passive photonics required to move and manage the light. In this diagram, everything that you see in yellow is Corning content, whereas none existed inside the box before. You'll get a chance to see this technology a little later in our demo room. Now, we are innovating in this space faster than ever before. We are tackling those pieces in the priority that our customers indicate they need, creating an exciting time for our business. We're introducing new products, connecting with new customers, and establishing a new market access platform. Let's look a bit more closely at the potential of this business.
Several variables determine what happens from here. First, it depends on when CPO is launched. Like optical scale-up, there is much debate among market participants, especially in the near term. Most would agree that it will begin in scale-out as early as next year. Second, you must decide how much of the market will need optical scale-up and choose to adopt it. Many think that it begins in earnest in 2028, and that by 2030 co-packaged optics inflects toward becoming the predominant solution for scale-up switches. We will see. Regardless of when, count on us to be ready. We have gone through a few of our assumptions today, but not all of them. In total, this is very difficult to assess with traditional modeling techniques. Ultimately, you'll have to decide your point of view on the timing and speed of adoption.
If we're correct on these two things, we see the inside-the-box opportunity adding an incremental $10 billion of revenue by 2030. I hope you come away today with a clear sense of the magnitude of the opportunity ahead of us. Our Springboard upgrade reflects the size of that opportunity and our confidence that we'll capture this growth faster than the market. Here's why. At the top of our list is our commitment to innovation. As fiber counts grow inside the data center, space comes at a premium. Today, we have the densest inside-plant cables in the industry by 20% to 30%, and over the next year we'll more than double that leadership with new innovations we've been working on for the new links, and we'll do it with our advantaged smaller-diameter fiber and new ribbon technology.
Next is our manufacturing scale and cost leadership. We operate the largest optical fiber factory in the world, and we just broke ground on what will be the largest cable manufacturing facility in the world, both in North Carolina. Most importantly, we serve our customers not only with unmatched product solutions, but with enhanced engineering support and a diverse global supply chain that enables us to deliver custom solutions at scale faster than our competition. More than ever, our customers are depending on us, as you've seen over the last year with announcements from partners such as Meta, as well as the exciting partnership with NVIDIA that we announced this morning. We're honored to deepen our long-lasting trust-based relationships with our customers as we support their growth.
Lastly, as we celebrate 175 years of innovation at Corning, we are entering a period of extraordinary momentum as we do our part to advance the most important technology trend of my lifetime. What can you expect from us along the way as we experience a continued period of growth? We're gonna continue to live our values. We stay humble, and we will focus on continuing to delight our customers while taking care of our people. Above all, we'll keep our eyes on the growth ahead of us, looking to the future as we build understanding of our customers and markets, innovating to solve their future problems, and earning a leadership position in everything that we do. Thank you again for the opportunity to be with you here today. Now, I'll turn it over to Ed.
All right. Thanks, Mike. Good morning, everyone. It's great to be together today and see many familiar faces and welcome some new ones to our story at a really exciting time for the company. What I plan to share with you today is the following. First, we are entering a new phase of accelerating growth. I will provide some context around how we're thinking about that growth from an internal plan perspective and how that translates into our high-confidence plan for the same time periods. Second, I will talk about how we plan to invest to capture all of the growth and what it means for free cash flow. Finally, I will provide some perspective on what this means for our improving financial profile.
When we initially launched Springboard in Q4 of 2023, we set out a very compelling growth plan. We're clearly outperforming those original expectations. We've transformed our financial profile. We increased our annualized sales run rate from $13 billion in Q4 of 2023 to $17.6 billion in Q4 of 2025, achieving our target a year early. We also set a target to improve our operating margin to 20% by the end of 2026. We believed profitability would grow faster than sales, leading to an improving return profile. We achieved our operating margin target a year early. Operating margin expanded by 390 basis points to 20.2%. We grew EPS 85% to $0.72. We expanded ROIC 540 basis points to 14.2%.
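As a quick back-of-the-envelope, the stated improvements imply the following 2023 starting points. This is simple arithmetic on the figures quoted above; the baselines are derived here, not separately disclosed numbers:

```python
# Back-computing the Springboard baseline implied by the stated improvements
# (simple arithmetic on figures from the remarks; derived, not disclosed).

margin_2025 = 20.2                 # % operating margin, achieved a year early
margin_2023 = margin_2025 - 3.90   # 390 bps of expansion -> ~16.3% baseline

eps_2025 = 0.72                    # $ per share
eps_2023 = eps_2025 / 1.85         # 85% growth -> ~$0.39 baseline

roic_2025 = 14.2                   # % return on invested capital
roic_2023 = roic_2025 - 5.40       # 540 bps of expansion -> ~8.8% baseline

print(f"Implied 2023 baseline: margin {margin_2023:.1f}%, "
      f"EPS ${eps_2023:.2f}, ROIC {roic_2023:.1f}%")
```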
We also nearly doubled free cash flow in 2025 to $1.75 billion from $880 million in 2023. Clearly, Springboard has been a tremendous success over the first 2 years of the plan. We are well-positioned to create significant value as we go forward. With that context, let's move to today's Springboard upgrade. Our internal plan is to grow our annualized sales run rate to $30 billion by the end of 2028, and $40 billion by the end of 2030. As a reminder, our internal plans are the output of the strategic planning process we run with each of our market access platforms, our actual business plans. We set our objectives and compensation based upon those plans. When our businesses submit plans to corporate, they factor in a variety of probabilistic outcomes.
They also try to account for the known unknowns. For example, we have included a weaker JPY rate in our planning starting in 2027. We also accounted for flat demand in TVs, smartphones, and IT, and technical substitution from ICE to BEV in the auto market. We also built a high-confidence plan for the same time period. Our high-confidence plan is to grow sales to an annualized run rate of $27 billion by the end of 2028 and $35 billion by the end of 2030. To arrive at our high-confidence plan, we take our internal plans and risk adjust to translate the opportunity into an investable thesis for all of you. At the corporate level, we seek to probabilistically adjust for factors including macroeconomic slowdowns, changes in government policy, timing of multiple secular trends, and the rate of adoption for our related innovations.
As you heard today, one of the most significant areas we're adjusting for is the timing on scale-up of the network. This impacts both enterprise and Photonics. We have significant technical development work to do as optical enters scale-up within the AI network. The overall size of the opportunity is dramatic, but calling the timing is challenging, and we will get smarter about this with each passing month. The timing of when scale-up happens could have a significant impact on our numbers and determine whether we track to the internal plan or the high-confidence plan. As Wendell shared earlier today, if we track to the internal plan, we expect sales to grow at a 19% CAGR, which doubles the company from the end of 2026.
In our high-confidence plan, we took $5 billion out of 2030 sales from our internal plan and $3 billion out for 2028. Those are big adjustments. Even with those adjustments, we still double the company from the end of 2025. Either way, we expect to double the size of the company. To deliver the accelerating growth we expect in Springboard, we need to invest. Typically, when we grow organically, we invest significant capital upfront, which means we take risk before the revenue and free cash flow show up. Then, down the road, we generate strong returns. Now, historically, we've had years where free cash flow doesn't grow as our investment increases.
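The doubling claims can be checked directly with the figures quoted in the remarks, a quick sanity sketch:

```python
# Sanity check on the doubling claims, using only figures from the remarks.

# (a) Internal plan: 19% sales CAGR over the 4 years from the end of 2026
#     to the end of 2030.
internal_multiple = 1.19 ** 4
print(f"1.19^4 = {internal_multiple:.2f}x")   # roughly 2x: the company doubles

# (b) High-confidence plan: $35B annualized run rate by the end of 2030
#     versus the $17.6B run rate reported for Q4 2025.
hc_multiple = 35 / 17.6
print(f"$35B / $17.6B = {hc_multiple:.2f}x")  # also roughly 2x
```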
What we're seeking to do with this new Springboard phase of accelerating growth is to have an improving financial profile, even as we invest to double the size of the company. To accomplish this, one of the things we've done is sign a number of large, long-term customer agreements. What we're doing with these agreements, given our strong technology position, is more appropriately sharing the cost and risk of our required expansions with our customers, to ensure we generate strong returns on our investments and secure our planned cash flow. Our agreements include appropriate measures to ensure the revenue is there, that we get funding for our investments, or some combination of the two.
For longtime followers of Corning, you would recognize the model as similar to our extremely successful Gen 10.5 agreements with our display customers, and most recently, Apple's $2.5 billion commitment to produce 100% of iPhone and Apple Watch cover glass in our Kentucky facility. The results will be attractive, and we expect cash flow to grow through the planning period, even as we invest to grow sales. Let me shift gears a bit and talk about the financial implications of the upgraded plan. Clearly, we are operating from a very strong financial profile. We've gone from a $13 billion sales run rate to a $17.6 billion run rate in 2 years.
We've moved our operating margin above 20%, a target we set just 2 years ago. One of the things we are most proud of is that we've improved our return on invested capital into the mid-teens. As you know, Corning is a capital-intensive company. That is quite an accomplishment. We are generating a lot more free cash flow. Now, we're entering the next phase of Springboard, accelerating growth. As we've been sharing with you over the last few quarters and today, we have a number of really large, long-term customer agreements that underpin that growth. Our plan is to grow sales at a 19% CAGR from the end of 2026 to the end of 2030.
Of course, one of the questions we get from investors is, now that you've delivered on your 20% operating margin target a year early, can you continue to improve? I think the simple answer is yes. As some of you have pointed out, we have a drag from the ramp of our solar business, which is included in our current results. This means we're actually already running above 20% now. We're not prepared to set a new target with you today, but we are highly confident that we will successfully ramp our solar business, and we'll provide you with an update as we progress. Additionally, we're investing to build a new Photonics map, and we'll learn how things play out in that space over time.
As CFO, I would like to see more data in both of those areas before we would set a new target. We'll come back at the end of the year, and we'll give you an update. In the meantime, you can expect us to run at or above a 20% operating margin even as we continue to invest. With respect to EPS, we've been growing faster than sales over the first 2 years of our plan, and we expect to continue to do that. Now, I wanna pause again on ROIC. With a mid-teens ROIC and a 19% sales CAGR, we will create a significant amount of value. We have agreements in place that will help us improve our sales to asset ratio, provide tremendous value to our customers, and allow our shareholders to share in that value.
We will continue to invent to create more value for our customers. We expect to continue improving ROIC into the high teens. We will come back at a later date to provide you with a more detailed update on how we're thinking about ROIC as well. Finally, we delivered $1.7 billion in free cash flow in 2025. Historically, we would not expect cash flow to increase as we invest to capture organic growth. This time, as we invest, we expect our operating cash flow to grow at a rate that exceeds the increased capital spending, and therefore, we expect free cash flow to grow as we grow sales. Overall, we will continue to further enhance our financial profile. That brings me to capital allocation. Our capital allocation philosophy is to prioritize investing in organic growth opportunities that drive significant returns.
Overall, we believe this approach creates the most value for our shareholders over the long term. Our investors have confirmed they see the value in this approach. Clearly, right now, we are investing. We also seek to maintain a strong and efficient balance sheet, and we're in great shape. We have one of the longest debt tenors in the S&P 500. Our current average debt maturity is about 20 years, and we have no significant debt coming due in any given year. Of course, we will continue to return excess cash to shareholders. Now, let me bring it all together before we close and go to Q&A. A little over 2 years ago, we launched Springboard, and it's been extremely successful. We have fundamentally transformed the financial profile of the company.
Today, we've outlined a powerful set of growth opportunities and a compelling upgrade to our Springboard plan to further enhance our financial profile. Excitingly, we are building a much larger company with faster earnings growth, significantly higher returns on invested capital, and substantially more free cash flow. Thank you for coming today. With that, let me turn it back to Wendell.
Thanks, Ed. Thanks, Mike. Well, those were both great. Thank you. Hey, before we shift to Q&A, in case you haven't noticed, we're celebrating a significant milestone this year. It's Corning's 175th birthday. You're not gonna spring into song? I mean, what the hell.
175. Think about that for a moment. Here we are, a 175-year-old company. We're one of the 25 oldest companies in the S&P 500. Here we find ourselves once again at the center of one of the most important technological revolutions in history, not at the periphery watching it happen, but actually helping to create it. I just wanted to take a moment and say, like, how cool is that. I mean, it's just so cool, and it's why I still skip to work every day. Yeah, that's a funny image. For most companies, man, it would be extraordinary. Well, for Corning, well, let's just say it's not our first rodeo.
This morning, as you walked in, if you were paying close attention, not to your phone, but to what was around you, you passed by many of the actual original products of some of our most influential inventions. You would have passed by the very first cathode ray tube for early TVs. You would have gone by the very first low-loss optical fiber and the glass for the first iPhone, to name a few. Actually, when you look at that iPhone, it's, like, hard to remember how clunky that thing was way back then when we thought we were helping change the world. Amid all of these innovations, you may not have noticed this really humble lantern with a bright red lens. That lantern was actually our very first technical invention, and that is how we got our start in life-changing innovation roughly 150 years ago.
At that moment, we really established the sense of purpose and approach that has guided Corning ever since, passed on from generation to generation. At that time, railroads used glass signal lanterns to direct train conductors. Red meant stop, and white meant safety or go. Because these were just ordinary glass lenses, just painted red or kept transparent or white, they caused problems. They could be masked by steam, snow, and dirt, and they were unreliable as temperatures fluctuated. In cold weather, red glass would crack, break, and appear white. Rather than signaling a stop, what conductors saw was go. The results were devastating: collisions, derailments, fatalities, thousands of people. It was a nationwide safety failure rooted in glass.
As we looked at that problem, as we looked at that obstacle, we saw an opportunity, and we set out on an innovation journey. We hired our very first scientist, who was actually not a glass guy, who was actually an expert in human perception, because even back then, what we realized is our first job is to understand the problem deeply from the customer's point of view. We established our very first in-house laboratory for the study of signal lenses and signal colors. We improved the design of lenses on the railroad signal so they would be less susceptible to trapping debris. We analyzed the way light refracted through the series of exterior bevels, allowing beams to spread out in all directions.
Our team replicated the phenomenon with bevels on the inside of the lantern, and this led to Corning's very first patent, for the semaphore lens, in 1877. There it is. Our innovation gave lanterns more luminosity, not only focusing the light, but also avoiding dirt and snow accumulation. Railroad signal engineers actually came to Corning to conduct the field tests, and soon after, the Railway Signal Association, that's those cats right there, right, adopted standard specifications for the colors and test methods that were based on the work that Corning did. When they're all meeting there, that's in Corning, New York. That is actually the origin of what you see today in those red, yellow, and green traffic lights that you still pass.
Just as important, we also brought on our very first head of manufacturing back then to develop low-cost, high-volume manufacturing to bring this new series of innovations to the burgeoning railroad industry. This project led to tremendous benefit, right, because we wanted to operate at very large scale and very low cost and allow the industry to adopt tremendously safer transport. We saved thousands of lives. Interestingly, we continue to feel the echoes of those folks and Corning's work 150 years later through our daily life today. That fundamental model of invent, make, and serve governs Corning at this very moment. We invest in innovation to invent products that enable transformative technologies. We invest in manufacturing platforms to make them at scale, at the lowest cost, and better than anyone else in the world.
We invest to serve our customers, to serve our communities, and our investors by bringing those innovations to life, all while serving our people and providing the type of jobs that they can build lives around, that they can build communities around. We do all of this so that our people can feel a deep sense of mission and purpose. Thank you for being on this journey with us, and we look forward to another 175 years of life-changing innovation. Now let's go to Q&A so you can help us understand how we can serve you better. Please join us. I'll go over here. Now that you've all learned about railroad signal lenses, pretty cool, right?
Okay. For Q&A, we would ask that you raise your hand, and we will bring a microphone over to you. With that, I'll start with Asiya.
Great. Thank you. Great presentation. Learned a lot today again. If I can just ask, you know, you guys announced some pretty significant production capacity increases as part of the press release. Just help us understand, you talked about, you know, obviously fiber opportunities scale up, scale out, and then Photonics further out into 2030. How are you thinking about that CapEx increase that you talked about, if you can help us understand relative to the opportunity that was laid out today? Thank you.
Ed, why don't you tackle that one?
Sure. For this year, we have guided $1.7 billion in capital. We certainly could spend a little bit more than that. I would expect us to ramp from that level as we go into 2027, 2028. I won't go out farther than that, at least for now. I think the most important thing is we expect operating cash flow growth to exceed that capital spending. We expect free cash flow to continue to grow even as our CapEx goes up. Another important thing, which we've talked about a number of times, and you've seen it in some of our announcements, is that we seek to share the risk and the reward of the investments we're making with our customers.
This helps us de-risk the outcome of our investments, so most importantly, it improves the certainty of return on those investments, but it also helps to pay for some of that capital as we're putting that in the ground. We will continue to use those tools. A number of our agreements include that. You know, we won't share things that our customers don't want us to share, but if you think about operating cash flow outpacing CapEx, I think that's a good way to think about it.
Just the 50% increase in fiber production capacity, that is going out through 2030 through the Photonics platform, or is that just near term through 2028?
If you want to take that one.
I'll offer. On the 50% fiber increase in capacity that was announced today, we've been adding fiber capacity to support the growth that we've experienced over the last couple of years related to GenAI, and that continues through the end of the decade as we sign agreements like you saw this morning to support the growth in both scale out and scale up networks.
Just, Wendell, if I could, you talked a lot about the GPU opportunity and the growth in content related to that, so did Mike. You know, what about ASICs? Like, how should I think about the content opportunity as we start to see, you know, custom ASIC? Thank you.
Right. First of all, we tried to take a pretty deep technical set of drivers and make it very understandable. Did we succeed in doing that?
Yes.
Okay, good. The way in which we built it sort of simply is, you are right in that the opportunity per ASIC also grows. In general, if you just step back, fundamentally the bandwidth per GPU NIC, or the GPU bandwidth, doubles every 2 years, as does the bandwidth for ASICs. Those same trends are driven by the same dynamics, and we see the same opportunity in both. What is the right thing to track? It really is the rate, the lane rate. If you want to understand one thing about what gives you sort of a big multiplier, it would be progress in SerDes. SerDes is what we use to take a parallel set of signals, serialize them, and create a faster single signal.
A good way to think about it is if you had, like, 32 channels of 1 gigahertz, you're gonna take that and turn it into 1 channel of 32 gigahertz. By the way, this is analog, so it's challenging. This is non-trivial. The more you increase that rate, the more it tends to make bandwidth growth neutral for us on either ASICs or GPUs. Usually, the bandwidth per ASIC or bandwidth per GPU doubles every 2 years, and SerDes doubles every 4. Therefore, while the bandwidth of the total system is doubling every 2 years, you get these periods where the number of fibers or the number of lanes grows. Like, a big question for us for 2030 is where the SerDes lands for the Feynman class, that's GPUs, and the Tomahawk 7 class, that's ASICs.
Will that use a 200 SerDes or a 400? If it follows past practice, Feynman and Tomahawk 7 will use 200, and that would double our opportunity for Mike's enterprise business, right? That's very significant. We don't know that yet, and so we'll keep a close eye on it, and it's hotly debated. You won't have a problem finding different points of view.
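The cadence Wendell describes can be sketched with a toy model. The numbers below are illustrative placeholders, not Corning or NVIDIA roadmap figures: if total bandwidth per device doubles every 2 years while the SerDes lane rate doubles only every 4, the lane (fiber) count doubles in the generations where the lane rate holds still.

```python
# Illustrative sketch of the cadence described above (hypothetical numbers):
# total bandwidth doubles every 2 years, SerDes lane rate doubles every 4,
# so the required lane count doubles every other generation.

bandwidth_gbps = 3_200   # total bandwidth per device (illustrative)
lane_rate_gbps = 100     # SerDes rate per lane (illustrative)

for year in range(2024, 2032, 2):
    lanes = bandwidth_gbps // lane_rate_gbps
    print(f"{year}: {bandwidth_gbps} Gb/s total at "
          f"{lane_rate_gbps} Gb/s per lane -> {lanes} lanes")
    bandwidth_gbps *= 2                  # bandwidth doubles every 2 years
    if (year - 2024) % 4 == 2:           # lane rate doubles every 4 years
        lane_rate_gbps *= 2
```

Running this, the lane count goes 32, 64, 64, 128: flat in the generations where a faster SerDes arrives, doubling when it does not, which is the multiplier effect being described.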
Thanks, Asiya. Next, let's go to Wamsi.
Thank you. Wamsi Mohan, Bank of America. Great presentation, and great to see you guys at the middle of this innovation cycle again. The question really, Wendell, maybe is, when you think about this opportunity ahead of you, the industry growth rates are very significant and Corning is obviously getting some share of that industry growth rate. As you think about this competitive landscape in this new era of growth, what are some of the underlying assumptions you have with respect to how your growth would be relative to industry growth? There is this element of pricing that seems to be fairly significant when you think about what some of your Japanese competitors are doing.
I'd be curious to think through beyond sort of the great content increase that you've alluded to, how you're thinking about pricing as well.
I'll handle price. I'll let Mike handle how he views how our skills stack up and our ability to grow relative to others with optical capabilities. On price, where we choose to focus our efforts to improve our profitability is by inventing products that lower our customers' costs dramatically, and then we split the value with them. That's the way we like to do it. Part of serving our customers is that we wanna constantly improve their delivered quality and constantly improve their economics. The relative advantage that we create determines the value capture opportunity. That is our ideal way to play, and that's where we focus most of our efforts. We don't focus on raising the price of our commodity product sets.
Over here, you have an annuity, driving significant gains across big numbers and delighting customers. Over there, you have demand-supply exploitation. When you wanna create the type of customer franchises we seek to create, you know, memories are long. That is our approach to that. We will improve our profitability in a way directly linked to our ability to invent, serve, and then make it at a lower cost. How are you gonna do versus your competitor, brother?
Well, I'd like to maybe take that question. Relative to our competition, the way we think about this is we compare ourselves in terms of cost, capacity, and product: are we better, equal to, or worse? Maybe even a new category that's emerged over the last couple of years of how we serve our customers. Let me just briefly talk through this. As you know, we are a vertically integrated passive optical supplier. We make the fiber, the cable, and the connectivity. There's inherent value competitively to being integrated in all three, because we can invent, and we can make, and we can apply our capacity and our people in the areas where most of the demand may sit. From a cost perspective, we strive to be the lowest cost in each of those categories: fiber, cable, and connectivity.
We continue to work on that, and that is one domain of having an advantage: can you make things at an equal or lower cost than your competitors? I would say generally, we feel like we are in a good position there. Capacity: as I mentioned in my presentation, we are the largest U.S. fiber maker and cable maker in the world in terms of our capacity. We continue to invest as we grow our business and add to the key pieces of capacity that continue to give us the most advantage. From a product perspective, you know, our competitors are not idle. They continue to invent, and they have worked on fiber, cables, and connectors, but so have we.
You know, we've been active in that space really over the last four or five years, with a new fiber, a new cable, and a new connector that really focused on density, creating better-performing optical products in a smaller footprint. The reason that's necessary is for what we just shared today: the number of fiber connections inside of a data center or inside of a conduit matters a lot more now than it has ever mattered before. Where we have and where we are creating product differentiation matters a lot right now.
Across those categories, I like where we are positioned, both from a cost, capacity, and a product differentiation perspective, to be able to compete not only in our home turf, but all around the world where we choose to compete and pursue opportunities, whether it's with carrier customers or hyperscalers. The last category I'll just touch briefly on, which has been a bit of a transformation in the Optical Communications business, is how we engage with our customers. We leverage all of the capabilities that we have to serve them better, whether that's in pre-sale, engineering design, to post-sale, to creative ways to shorten and shrink the supply chain, and manage our global supply chain to be able to serve them better than our competition.
I think that's an increasingly important category that we're competing quite well on today, and we will continue to extend that advantage.
Thank you.
Mike likes his hand, is what he's saying.
Next, go to Steve.
Hi, good morning. Steven Fox with Fox Advisors. Two questions. First for Ed. You explained well why you wanna hold steady on the operating margin target for now. There also seems like there's a lot of opportunities to expand margins. Without putting numbers on it, can you give us sort of some hints into whether it's mix, drop-down, OpEx, how margins can expand over time? I had a follow-up.
Sure. We've seen a significant increase in our operating margin over the last couple years. If you remember back to the beginning of Springboard, we had capacity in place. We filled some of that capacity, and that's driven a good hunk of our margin improvement. I would build a little bit on Wendell's answer on price. We've also moved up the value chain. We're selling more solutions, and the price or the margin on some of our new innovations is at a higher level than on the products they replace. That's also been sort of a mix shift up in our operating margin, especially in Optical Communications. Their net income margin has significantly expanded over this time period as well. I think that's kind of where we are today. We still have some businesses where we have capacity that we could fill.
As those businesses grow, that should be an accretive operating margin opportunity for us. I think we will also continue to be able to sell a better mix of things, specifically in enterprise and in Photonics. As we, you know, start to add some of those sales, that should also be a good mix improvement for us. We seek to leverage OpEx, to have sales grow faster than OpEx, so that could also be a leverage point for margins. You know, I think of a target as something we want to deliver. You know, we have a very high conviction that we can deliver it, and we wanna deliver it, versus, you know, how we guide you and how you think about our margins.
I want your takeaway from today to be we're at 20%, we can sustain 20%. We should be above 20%, even with the drag we have in ramping some of our new businesses. Before we go the next step and set a new target, we just wanna have a little bit more data so that we pick the right target, and we feel we can achieve it with high confidence and sustain it for a period of time.
Great. That's very fair. Wendell, you made a good case for why the Photonics forecast could be lumpy, hard to predict right now. What about the enterprise piece? I mean, all across the supply chain right now, you're seeing, like, an inflection point up for the second half of the year because of the generational changes in racks. I know there was a sort of a steady curve there. Can you describe maybe a little bit better how enterprise might grow over these next three years and where the lumpiness could be or the other inflection points? Thanks.
I think, in Mike's piece through 2028 and what I shared, that sort of growth rate per GPU, that range, that sort of 1.3-1.5, I agree with you. That has a tighter range to it, and we can feel pretty comfortable about our ability to grow at that rate above GPU growth. In the near term, I am with you, Steve. It's when you go out to 2030 in enterprise that you've gotta wrestle with what you think happens on the Feynman and Tomahawk 7 class bandwidth generations, 'cause that will be the next edge of the platform. For that, there's just some open questions, and it gives you a little wider range to deal with. Finally, how big do clusters get?
We're happy to continue to share on those items. We wanted to give you the pieces so that you could develop a point of view. As Ed said, like, we're going to just keep getting smarter every month, and we'll be really open with you, Steve.
Okay. Thanks. Mehdi?
Yeah, great. Maybe two questions. Mike, maybe for you, just how you're thinking about the carrier opportunity. You know, that's traditionally been lower margin. You guys have endless kind of demand right now on the enterprise side. Does that factor into kind of how you think about allocating resources between enterprise and carrier? Then maybe just a clarifying point. I assume no, Wendell, just given the amount of times you've said Tomahawk, but does the NVIDIA relationship, like, give them a right of first access to product that limits kind of your ability to work on CPO projects with other vendors? Thanks.
Yeah. Maybe I'll start with carrier, your question about carrier. We're actually feeling very good about our carrier business right now. I'll start with that because a couple of things have happened. Our home base and where most of our carrier business happens is in North America, and we're positioned with the 2 largest fiber to the home builders and have been historically, and they've recently announced their desire to pass another 50 million homes between now and the end of the decade, which creates nice continued steady growth for fiber to the home in our carrier business, coupled with the fact that BEAD has finally actually happened. We've got our first orders. We've actually shipped our first products for BEAD, which has been long awaited in the industry, and we're pleased to see that.
We are well positioned for continued growth with broadband connectivity, particularly here in North America. As a result, with regard to your question around margins and carrier and how we're allocating things, you know, carriers have been a long part of our Optical Communications business for many, many years. We are not making choices to win with one customer and abandon the rest. We are working with all of our customers, as we have for decades, to ensure that we both enable the build-out of GenAI and also get the unconnected connected with broadband service.
One other comment. I just wanna make sure. We include data center interconnect in our carrier business as well. That'll clearly be a growth driver, and it's really important for us to support that. You know, even though it ties to a different secular trend, just wanna make sure folks understand that we track it in our carrier business.
Yeah.
Great. All of our customer technical relationships are confidential, so I can't share those details. What I can say is Hock Tan and I are old friends, and you don't really have to worry much about him getting what he needs.
Okay. Samik?
Hi. Thank you for the great presentations today, and Wendell, thanks for the story about the lamp. I'll remember that, definitely.
Thank you. You get two questions.
Thank you. I'll ask both at the same time if you don't mind. Still trying to flesh out maybe a bit more detail around the NVIDIA announcement you had this morning. My impression was the choices in terms of fiber are made by the hyperscalers. In addition to maybe NVIDIA sort of investing in Corning, are there any other ramifications in terms of maybe early visibility into their product ramp, et cetera? Like, what else is included as part of this engagement, this partnership that you're announcing? Is it still fair to think that hyperscalers eventually make the decision in terms of the fiber?
Secondarily, maybe this is more for Mike, the inside the box opportunity that you outlined, can you just sort of overall bucket that bit more in terms of I'm assuming you're talking about the Fiber Array Units, the polarizing fiber, how to think about that in terms of maybe content per GPU? And do you expect sort of all the pieces to be adopted pretty much as a solution, or do you expect more sort of phased adoption of the different components? Thank you.
Sure. Let me take a whack. I'll probably do both in some way, shape, or form. I wanna make sure I understand the first part of your question. In the commercial and technical partnership and the equity relationship that we're entering into with NVIDIA, did I hear you say that a chunk of your focus was on what's happening at our hyperscale customers and connectivity? Did it cover that? Or photonics?
Yeah.
Just be a little more specific with it.
Yeah, I can clarify. Does the technical engagement with NVIDIA change your visibility into their roadmap, relative to this just being a capital commitment from them in terms of your capacity build?
Yes. Yes. The way to think about NVIDIA here is it really underpins our Photonics map. As you understand what it is you're putting inside those boxes to interact with either the switch ASICs or the GPUs, that gives you deep insight as to what has to happen to the overall system to deliver the light between those pieces. Yes, you can expect us to be working to fundamentally reinvent the optical systems here as we go forward through the coming generations of product. Did that address your question? I wanna make sure I got it. Okay. As far as the content for Photonics, what we're gonna do in there, et cetera, I think the demo is gonna help you a lot with that.
We're gonna actually pop the top off of a switch and show you the various things in there. We're deliberately not doing a per-GPU number here, because then we'd have to identify which parts of that whole system we are going to do, right? It'll be different switch architectures and different ones of our innovations. We pretty deliberately walk by that one and instead say, "When we put it all together qualitatively and quantitatively, we think we can build a $10 billion map in 2030."
Okay. John?
Thanks. John Roberts, Mizuho. I always thought Corning's color was blue, but yellow seems to fit you pretty well.
Well said.
This is an optical meeting, but maybe you could give us some comments on the solar business, how it evolves beyond the $2.5 billion out to 2030. Kinda what's the roadmap there?
Let me do it?
Sure.
Yeah. John, a few things. First and foremost, we actually see the demand being really strong, and we would expect to be able to go over $3 billion of sales over that timeframe, probably sooner than that. You know, we may make decisions around whether or not we wanna do more than that through capacity, but those decisions have not been made. Right now we've got three components in that business: we have polysilicon, we have modules, and we have wafers, and we have capacity adds and, you know, things to get done to be able to ramp those businesses to where they need to be to support that sales growth. In polysilicon and modules, we've made excellent progress. They're essentially running at or above the corporate-average operating margin target we set.
Wafers are probably the most complex of the things we need to get done. We made our first wafers back in the third quarter, I think September or so was the timeframe, and we've actually ramped significantly to making, you know, hundreds of thousands of wafers a day. We gotta get to more than a million, several million per day, over time. We're making good progress, but we expect that to continue to take some time. That's what's causing our P&L drag, and that'll resolve itself as we go forward. You know, I think we're pretty excited about, call it the market environment, the GenAI environment, which really underpins the success of this business. I think that is something that has actually continued to improve over the last year or so. Matt?
Hey, thanks so much. Thanks for the presentation. It's Matthew Niknam from Truist. Two questions. One quick follow-up for Ed, just on the last question: for solar, is the timeframe to exceed $3 billion 2030, in line with the rest of the Springboard plan? Bigger-picture question: are there any supply chain headwinds or data center delays that you or your customers are experiencing today that you've baked into the new Springboard plan? Thanks.
On the first one, yes, for sure within the window of growth that we've shown you here, we expect to be able to get over $3 billion in solar. Wanna do the data center one?
Yeah. With regard to the data center question and the supply chain: of course, many of the components that build a data center are in high demand right now, including our own gear, if you will. The one that gets talked about the most is certainly power. I would just tell you, from the projects that we are engaged in with our customers all across building these large AI campuses, delays happen weekly, monthly, all the time, but they've largely been overcome and the construction continues. We see that through the demand for our products, and we are continuing to build and ship as much as we can make to keep the construction cycles going.
If you want a little more insight into that, maybe when we go to the demo room, look for a tall redhead with a beard. We've got a man, Aaron Stark. He tracks this really closely for us, so you should try to pepper him with questions.
Okay.
Yeah.
Thanks. We'll do one last question, in the back. Josh?
Mm-hmm. Yes. Which, the back row or the other back row?
Yeah. Hi, Josh Spector with UBS. I'll try to squeeze in two questions here if I can. They're unrelated, but I'll ask them at the same time. The first is on the Photonics map that you laid out, very helpful with the range of scenarios, but can you help us understand what your base case is, so we can judge whether that's conservative or aggressive? Whether you wanna talk about that as penetration rates or whatever the easiest way is to communicate it. Separately, on the yen-dollar assumption, are you assuming that display earnings can maintain, meaning you get pricing to offset that from a bottom-line perspective, or is it too soon for us to be talking about that part of the equation?
Let's do the second one. You're actually in luck today. Maybe you two could link up right after this. You have in the room John Zong, who runs our overall glass innovations piece, and you ought to have a good discussion with him. You gotta take advantage of him being here and talk about display. On the first one, we're being deliberately vague about that, mainly because we don't want to share confidential information of our customers. If we give you more of the factors, we will end up disclosing it inadvertently; you could back into what architectures are being used. We're just gonna be really cautious. That's why you saw me, and Mike, base everything on what's been announced, and then how you can think about what's been announced.
We're gonna let our customers lead in talking about that piece. As we grow up in this business a little bit more, they'll be a little more open, and then we can be a little more open. What you can count on us to do is turn their publicly disclosed information into an easy rubric for you to understand what it means for us. I apologize for the vagueness today, Josh.
Okay. Well, that concludes our Q&A session. For those of you in the room, I encourage you to make your way back to the demo area, where you'll be able to connect with more of the Corning leaders. Look forward to seeing you out there. Thanks, everyone.
Great. Thank you.