Good afternoon, everyone. Welcome to BizLink's Q2 2025 Earnings English Conference call. This is Mike Wang, Senior IR Manager. I am joined by Roger Liang, our Chairman, Florian Hettich, our COO, and Charles Tsai, our CFO. Our results were just released and are available on our website, where you can download the latest earnings materials as well as access them from MOPS. We'll begin with Charles for financial highlights before we switch to Florian for operational highlights, and then end with Roger for corporate highlights. We will then conclude with Q&A. You may type in your questions now in the public or in the private chat, and we will answer as many of them as we can. There will be no quantitative forward-looking comments.
Before we continue, please kindly be reminded that today's discussions may contain qualitative forward-looking statements based on our current expectations, which are subject to significant risks and uncertainties and may cause actual results to differ materially from those contained in these qualitative forward-looking statements. We are not obliged to update these statements, which are to be used for information purposes only. Please refer to the safe harbor notice in our earnings deck for more details. I would like to remind everyone that today's call is being recorded. This recording and these prepared remarks will be uploaded onto our IR website within 24 hours after the conclusion of this call. We sincerely appreciate SinoPac Securities for hosting today's call. With that, I will turn the call over to our CFO, Charles.
Thank you, Mike. Let me begin with some thoughts on sales transformation. Over the past few years, we have undergone a significant shift in both our sales composition and the nature of our customer engagement. In 2022, high-performance computing and capital equipment together accounted for just under a quarter of our total sales. They now represent around the mid-40% range of our first half 2025 revenue. This is not just a shift in end market focus. It reflects a deeper change in who we are as a company and the type of value we're creating. Our growth is increasingly being driven by segments that are specification-rich, mission-critical, and multi-year in nature. These businesses are structurally more resilient, necessitate stronger customer relationships, require greater and earlier design engagement, and yield stronger per-unit content and profitability. We're no longer scaling purely on volume. We're scaling on customer intimacy and engineering intensity.
This shift is clearly reflected in our top 10 customer list, which looks very different from just two or three years ago. Where we once served more transactional, high-volume OEM customers, we're now increasingly engaged with global hyperscalers, semiconductor toolmakers, and leading-edge platform developers. These customers are not just looking for cables or parts. They're looking for reliable design partners who can co-develop infrastructure that meets extremely demanding technical reliability and power performance specifications in advance of their monetization. These programs are complex, but they also offer enduring partnership and visibility, giving us ongoing opportunities to help our customers grow. The ability to solve problems collaboratively during NPI and design validation stage builds trust. That trust leads to share gain, higher content persistence, and repeat wins across generations. This foundation is exactly what Florian and Roger will elaborate on shortly.
That is: how we're leveraging this transformation to become a system-level enabler across next-generation AI infrastructure, from data to power integration. We're seeing an important divergence this quarter. HPC demand remains strong, while capital equipment activity continues to expand at a more measured quarter-on-quarter pace. This reflects what we believe to be the later stage of the current semiconductor equipment investment cycle. Customers are still utilizing N5/N4 toolsets at high capacity, validated by TSMC's raised full-year USD sales guidance. But with CapEx now flat, the near-term outlook for new tool installations appears more muted. For us, the implication is twofold. First, SPE demand is not disappearing, but may be normalizing. Customers are continuing to outsource long-lead subsystems like electrical distribution systems and fluid distribution systems ahead of the expected node transition in 2026.
However, box build and system integration activity now accounts for over 40% of our capital equipment sales, reflecting not just delayed tool orders, but a structural shift towards system-level outsourcing. That said, with more fabs already built or underway, we could see a more gradual CapEx cadence through 2025 and 2026 before a new node-driven upcycle begins. Secondly, HPC deployments, by contrast, are entering a new full execution phase. GB200 rack builds are accelerating. New enterprise deployments are layering in, and interconnect orders are closely tracking real-time infrastructure buildups. AI-related revenue now comprises over 50% of our HPC segment, underscoring how this demand is not just persistent. It is deepening across customers and programs. These divergences highlight a key structural dynamic. AI infrastructure buildups are unfolding in multi-year, multi-phase waves, not synchronized CapEx cycles.
That activity may now be entering a digestion phase, but AI rack deployment and the associated data and power infrastructure are surging forward. We believe this staggered cadence across HPC and SPE is not cyclical noise, but reflective of how AI infrastructure is maturing into a long-cycle mission-critical asset class. We remain confident in our long-standing engagement across both domains. We are also prudent about CapEx timing assumptions, especially for front-end equipment. Meanwhile, HPC continues to track real-world compute deployment, not just roadmap aspirations. On gross margin improvement, while our gross margin continues to improve gradually, our operating margin has reached a new high this quarter, highlighting the scalability of our model and the efficiency gains that we're unlocking across the organization. This reflects earlier period investments that now allow us to grow without a proportional increase in overhead.
Over time, we expect gross margin to benefit from a richer mix driven by growth in high-margin platforms, such as AI infrastructure, semiconductor systems, humanoid robotics, electrical mobility, and healthcare systems. However, in the near term, the launch of several strategically important AI-centric products is creating modest gross margin pressure. We're leaning into these programs proactively, accepting temporary compression to secure early design wins that set the stage for highly accretive growth. These offerings carry higher upfront engineering content, more complex materials, and lower initial volumes. These are typical of new platforms undergoing customer qualification and ramp. We view this as a natural phase of the product lifecycle. These are high-value programs that strengthen our position with key customers and are expected to become margin-accretive as they scale.
That said, the trajectory of gross margin will also depend on how favorably these new platforms ramp and, more broadly, on the evolution of our overall product mix. As such, while near-term gross margin upside may become more gradual, earnings growth will continue to be driven by higher-value mix, tighter execution, and disciplined scaling. Gross margin remains a resilient foundation, while operating margin is where the structural leverage of our platform is increasingly visible. On operating margin improvement, over the past few years, we made deliberate investments to accelerate our growth trajectory, resulting in temporarily elevated OpEx to sales. Those investments are now yielding results. This quarter, our OpEx ratio settled at approximately 14%. This is a reflection not just of cost control, but of the operational leverage we have built through scalable infrastructure and smarter execution.
We have strengthened our global design support, expanded cleanroom-enabled manufacturing, and institutionalized program capabilities across regions. This foundation now allows us to scale both standardized and highly customized solutions efficiently, supporting long-cycle programs in AI infrastructure, semiconductor systems, humanoid robotics, electrical mobility, and healthcare systems without structurally expanding overhead. Looking ahead, we'll continue to strive for an optimal balance, investing where it counts, while gradually optimizing our OpEx ratio toward that of our model peers by fine-tuning operations. As we scale further, we'll also see opportunities to enhance administrative efficiency through technology, doing more with less and compounding the benefit of earlier investments. Operating margin reached a record high this quarter, even as gross margin continued to expand. This demonstrates that we're not just growing, but doing so profitably and sustainably. We're earning more content per system, engaging earlier in customer roadmaps, and capturing synergy across platforms and programs.
This is a phase of self-reinforcing growth. Each new program contributes more to revenue and margin than it does to fixed costs. The 14% OpEx ratio marks an important milestone, reflecting a level we're comfortable with and believe is sustainable. At the same time, we will continue to invest selectively in future growth opportunities, while our ethos remains to maintain financial discipline, strengthen customer intimacy, and preserve the capital efficiency that underpins long-term value creation. On cash flow and capital discipline, as we finance our growth and scale only high-value programs, we're also reinforcing the fiscal discipline that underpins our model. Over the past five quarters, our cash conversion cycle has held steady at approximately 100-110 days, a clear reflection of strong operational control, prudent supplier and customer management, and an ability to efficiently convert earnings into reinvestable cash.
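For readers less familiar with the metric, the cash conversion cycle combines three working-capital measures. The sketch below uses purely hypothetical component values (BizLink did not disclose its DIO/DSO/DPO breakdown on this call); only the ~100-110 day range comes from the remarks.

```python
# Cash conversion cycle (CCC): the number of days between paying
# suppliers and collecting cash from customers. All inputs below are
# hypothetical illustrations, not BizLink's disclosed figures.

def cash_conversion_cycle(dio: float, dso: float, dpo: float) -> float:
    """CCC = days inventory outstanding (DIO)
           + days sales outstanding (DSO)
           - days payables outstanding (DPO)."""
    return dio + dso - dpo

# Hypothetical quarter: 75 days of inventory, 70 days of receivables,
# and 40 days of payables give a CCC of 105 days, inside the
# approximately 100-110 day band cited in the prepared remarks.
ccc = cash_conversion_cycle(75, 70, 40)
print(ccc)  # -> 105
```

A lower CCC means earnings turn back into deployable cash sooner, which is why a stable 100-110 day cycle supports self-funded growth.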
This consistency enables us to self-fund growth while preserving liquidity and financial agility. Our rising CapEx and working capital reflect intentional investment into long-cycle programs with high visibility, particularly in AI infrastructure and capital equipment, rather than short-term volatility. Importantly, we're not reacting to external pressure or funding gaps. We're acting from a position of strength. To support this next phase of multi-year growth, we're evaluating a range of financing options, including a potential convertible bond issuance in the second half of 2025. This may be our largest to date, matching the scale of the opportunity ahead, but it remains just one of several tools available to us. Thanks to our healthy cash flow profile and strong balance sheet, we retain full flexibility in how and when we choose to raise funding.
We have already demonstrated this discipline through our ability to complete multiple tuck-in acquisitions with minimal external funding while expanding our engineering and manufacturing base in parallel. Going forward, any financing decision, whether through debt, equity-linked instruments, or internal reinvestment, will be made selectively, responsibly, and with the same long-term mindset to deepen customer intimacy, grow share in high-return verticals, and compound earning power without compromising financial resilience. It is also worth noting that our approach to M&A continues to reinforce, not compromise, our financial discipline. We typically execute acquisitions that are margin-accretive, cash-generative, and tightly aligned with our strategic growth platforms. We integrate quickly, preserve talent, and maintain a decentralized structure that allows acquired teams to stay close to customers while benefiting from our global scale. We do not pursue distressed turnarounds.
Instead, we focus on well-run businesses with strong management teams, complementary capabilities, and the potential to accelerate mutual growth through platform synergies. Most of our M&A leads originate from our business units, ensuring strong internal sponsorship and operational fit. That is why you're seeing OpEx efficiency improve as we grow, organically and inorganically. This approach will continue to guide us as we evaluate future opportunities with a proactive yet selective mindset that values both financial discipline and long-term technological alignment. In sum, this quarter reflects a business that is not only growing but compounding its ability to grow. We have built a scalable, capital-efficient platform that translates upstream design complexity into long-cycle earning power. Our sales mix continues to shift towards specification-rich multi-year platforms in AI infrastructure and semiconductor systems, fueled by deeper customer intimacy, earlier engagement, and stronger per-unit economics. Operating leverage is now embedded, not incidental.
The investments made in recent years in engineering, cleanroom infrastructure, and program support are yielding results without materially expanding overhead. This is visible in both our record operating margin and our ability to scale complex programs with fewer incremental resources. At the same time, our cash discipline remains intact with a consistently stable cash conversion cycle and a strong balance sheet. We're able to self-fund growth while preserving flexibility. Whether through selective M&A, reinvestment into high-confidence platforms, or financing tools like ECB6, we're approaching capital deployment with confidence and control from a position of strength, not necessity. This phase of our transformation is defined by quality, not just quantity, of growth. From rack-to-rack AECs and high-voltage DC power systems to subsystem integration and design-stage co-development, we're participating in some of the most technically demanding infrastructure programs globally, and yet, this is not a peak.
It is a platform. The most exciting opportunities still lie ahead. Florian will now provide updates on our latest quarterly operational takeaways.
Thank you, Charles. So let's start with our view on the high-performance computing market. We continue to see steady momentum in next-generation data and power systems as AI cluster architectures evolve. On the data side, active electrical cables, AECs, are increasingly being adopted in place of legacy DACs, helping delay the need for more expensive optical cables in many high-performance environments. Customers recognize AECs as offering a practical balance of reach, latency, power efficiency, and cost, especially as inference clusters scale in size and complexity. Our 800G AEC shipments are underway and align with a broad ramp in the switch and router ecosystem. Market leaders, including both branded and white-box suppliers, are beginning to show more consistent data center demand, particularly in hyperscale-oriented 800G platforms.
ODMs and integrators, many of whom support the switch platforms our AECs connect into, are also seeing stronger traction. This points to a synchronized architecture-wide shift toward next-generation interconnect and power infrastructure, suggesting we are still in an early stage of the AI infrastructure buildout. Our role in this evolution is to work closely with our customers and ecosystem partners to develop practical and scalable solutions. We have collaborated with hyperscalers and other partners to extend copper's usable reach while maintaining high signal integrity and thermal stability. This positions us to support the transition from in-rack to rack-to-rack topologies, particularly as GB200-class platforms change the physical layout and coordination needs of AI systems. While we are not defining the industry's overall architecture, our contribution helps enable these changes. Multi-sourcing remains standard. Most hyperscalers qualify multiple vendors for critical components. Our partner-focused approach helps us maintain and grow share.
We are increasingly recognized as a reliable and capable enabler in system-level designs, with similar co-design expertise applied in upstream areas such as electrical and fluid subsystems for advanced packaging and EUV tools. Our differentiation comes from long-reach performance, integration capability, and cross-domain engineering expertise that are more difficult to replicate than component specifications alone. As platform volumes scale, some customers have expanded our share. We have secured meaningful new design-ins across enterprise, hyperscaler, and emerging AI accounts, including two hyperscalers we expect to grow with over time. On the power side, capacity provision is becoming the limiting factor for many deployments. Hyperscale data centers are increasingly designed around available power rather than rack count. This elevates the importance of our high-amp AC power whips and HVDC busbar systems. In some programs, we are co-specified with major power subsystem vendors, with certain new architectures expected to ship from 2027.
Our position as a neutral embedded infrastructure partner helps drive alignment across the supply chain. We are also seeing compressed design-to-deploy timelines, with customers, especially in GB200, xAI-type, or sovereign initiatives, locking in infrastructure partners ahead of silicon availability. Our ability to address both high-speed data and high-density power requirements is a differentiator, though we remain mindful that scaling production, as we have seen in Malaysia, comes with operational challenges we continue to address. Now, moving from high-performance computing to semiconductor equipment manufacturing and our view on that. Upstream in the AI silicon supply chain, we are starting to see early activity as chip makers prepare for process transitions from N5/N4 towards N3/N2. The overall CapEx environment for 2025 remains measured, with TSMC's outlook flat and ASML's EUV tool volumes subdued.
Long-lead subsystems outsourcing in areas like electrical distribution systems (EDS) and fluid distribution systems (FDS) is gradually building. This activity is less about near-term order momentum and more about ensuring readiness for future ramps. EDS and FDS platforms are where complexity is rising fastest, creating opportunities for us to contribute meaningfully even when top-line CapEx is steady. Lam Research and KLA's recent results and guidance, both ahead of the expectations, suggest that underlying subsystems demand is holding up well, supported by investment in memory, advanced packaging, and foundry logic. Subsystems activity typically precedes tool delivery by several quarters, providing us with early cycle visibility. This allows us to stay closely aligned with customer roadmaps and adjust our capacity investments and program timing accordingly. Today, over 40% of our capital equipment sales are in box build and system integration, reflecting a shift toward more platform-level outsourcing.
As AI adoption growth and node transitions increase the demands on power, thermal, and fluid management, these subsystems will become even more critical for tool performance, fab readiness, and yield. While tool orders may remain soft in the second half of 2025, our early and consistent participation in integration programs positions us to support customers as they prepare for the anticipated N3/N2 ramps in 2026. Well, let's come to my conclusion. We are contributing to the next wave of AI infrastructure by moving from component delivery toward deeper participation in architecture-level collaboration. Whether it's rack-to-rack AECs, long-reach HVDC systems, or integrated thermal, electrical, and mechanical solutions, our teams work closely with hyperscalers and other partners to address the practical challenges of high-density data center deployment: signal integrity, thermal headroom, and power provisioning. These are not abstract considerations.
They are the physical limits that influence how quickly, efficiently, and reliably AI infrastructure can scale. Increasingly, we are being engaged earlier in the design process, allowing us to provide solutions that align with evolving infrastructure requirements. Our value comes from both what we build and how we work, with neutrality, flexibility, and cross-domain engineering expertise. As GB200-class systems become more common and AI clusters push toward higher rack densities and faster speeds, infrastructure requirements will demand more than standard off-the-shelf solutions. They will require capable partners who can coordinate across data and power domains. This is the role we aim to fulfill, supporting customers in bringing their designs to reality.
From hyperscalers and, eventually, sovereign AI initiatives, our participation is expanding, and based on current design activity and multi-year planning cycles, we see a sustained role for BizLink in enabling the physical buildout of AI and other advanced infrastructure. As customer requirements rise and external conditions evolve, from technology transitions to shifting trade dynamics, we are adapting in ways that reinforce our ability to support long-term growth. Our manufacturing footprint, our EDS, and supply chain engagement are being aligned with the needs of next-generation platforms. While our results are the clearest reflection of this execution, our direction remains steady: to work alongside our customers as these structural shifts take shape. Roger will now provide updates on our latest quarterly corporate takeaways. Roger, please.
Thank you, Florian. Let me first start with where we expect to head on the HPC side.
So our HPC business is expanding alongside the rollout of next-generation AI platforms to a broader set of customers. We believe infrastructure design is entering a phase where scale considerations are built in from the outset, driving interest in rack-to-rack interconnects, major power, and persistent compute availability. GB200-class systems are contributing to this architectural change, and our AECs and power solutions are being incorporated into some of these layouts. While our role is as a contributor rather than a market architect, early engagement in these projects supports our ability to grow with customer needs. Proof-of-concept deployments are showing that pairing GB200 with well-planned data and power layouts can improve data center performance in multiple dimensions: speed, energy efficiency, and thermal balance. Customers know that these improvements can boost profitability and free up capital for reinvestment, which can in turn drive the next wave of projects.
Although interconnect demand can fluctuate quarter to quarter, the trend toward inference-centric, low-latency networks remains clear. Power infrastructure continues to be a stable component of HPC demand, with many customers over-provisioning capacity to accommodate future growth. This provides better forward visibility. Recent design wins in both existing and new accounts, hyperscaler and otherwise, reflect a shift to architectures requiring new capabilities, not simple legacy refreshes. Our expertise across thermal, electrical, and mechanical integration remains a differentiator, especially in projects with five-plus-year investment horizons. Beyond hyperscalers, sovereign and regional AI programs are generating inbound interest. These customers often move faster and rely more heavily on external integration partners. As AI workloads become more complex, extending into multimodal and agentic use cases, compute intensity rises, and the supporting infrastructure must evolve. Some large-scale programs are already securing power and silicon capacity through 2028, indicating long-term commitment.
While AI is an important growth driver for us, it is only part of the picture. Our broader business portfolio, which includes automotive, industrial, and capital equipment, provides resilience and diversification. This balance helps ensure that when one market segment slows, others can help support performance. Finally, power architecture is undergoing structural change, with customers incorporating on-site generation and storage to overcome grid constraints. We are engaged in delivering parts of this transition, from hybrid DC integration to part-level design. This rollout is foundational, although we position ourselves as a collaborative enabler rather than a sole driver of this change. Now, let me turn to where we expect to head on the SPE side. As AI compute platforms advance toward 3nm and 2nm, technical requirements in power, thermal, and fluid systems are expected to become more demanding, particularly for EUV and advanced packaging equipment.
Although we do not expect a broad CapEx upswing in the second half of 2025, customers are already securing long-lead subsystems like EDS and FDS ahead of the next node ramp. This reflects a more front-loaded approach to tool readiness, pulling subsystem activity earlier in the cycle. Lam Research and KLA have both noted resilient demand in their segments, with Lam citing NAND and GAA-related upgrades, and KLA pointing to DRAM, back-end packaging, and logic programs tied to TSMC's roadmap. These updates indicate that even with mixed signals at the overall wafer fab equipment level, the subsystem layer remains active and strategically important. EDS volume tends to follow broader tool demand, providing a consistent base across most wafer fab equipment categories. FDS, by contrast, while more concentrated, is expanding in both scale and value due to higher process density and greater fluid handling needs at advanced nodes.
This shift aligns with our capabilities and allows us to participate in long-cycle engagements that are less exposed to short-term trade and order timing. Our early involvement in key 3nm and 2nm chip programs gives us forward visibility and positions us to scale participation as we accelerate into 2026. While we are not relying on a broad tool recovery this year, our embedded role in customer retooling plans continues to deepen. At the same time, we remain mindful that scaling production and integration capacity for these complex subsystems carries its own execution challenges, which we are addressing through close coordination with our customers and supply partners. In conclusion, independent industry research recently ranked BizLink as the world's number nine interconnect solution supplier. For us, this is less a milestone to celebrate than a marker of how far our transformation has taken us.
It reflects the trust our customers place in us and reinforces our responsibility to keep executing with discipline as we support the next era of infrastructure growth. As the AI infrastructure buildout progresses, we are seeing more than simply higher volume. It is a broad rethinking of how hyperscalers and tool makers plan for data movement, power delivery, and reliability at scale. GB200-based proof-of-concept deployments, from early adopters like xAI to major platforms at Meta, Google, and Microsoft, are demonstrating tangible improvements in data center performance and power efficiency. These gains are supporting continued adoption and enabling further system-level enhancements. In this environment, our role is to participate early and constructively in solving complex infrastructure challenges, spanning high-speed AECs, HVDC power integration, and the integration of thermal, electrical, and mechanical systems at the rack level.
By engaging alongside customers and ecosystem partners from the outset, we help ensure that solutions address critical constraints holistically rather than in isolation. These engagements often carry multi-year visibility and span both data-centric compute platforms and the upstream manufacturing ecosystem. The growth ahead will be driven not by a single product or customer, but by coordinated system-level planning, long-term investment horizons, and improved execution. Our neutrality, collaborative approach, and cross-domain engineering capabilities allow us to contribute meaningfully to some of the most advanced infrastructure programs in the world. Based on the breadth of current design activity, the visibility provided by customer involvement, and our improved ability to execute, we see a sustained and constructive role for BizLink as an important enabler of AI and other advanced infrastructure buildouts. Now, let me turn the call over to Mike.
Thank you, Roger, Florian, and Charles. This concludes our prepared statement section.
Now, let us begin the Q&A section. Please type in your questions, and then we will answer as many of them as possible in the time remaining. While we were going through the prepared remarks, I see that actually there are quite a lot of questions about the AEC side of our business within HPC. So I'll just do everyone a favor and just read very quickly through some of these. What is our view on the competitive landscape in the AEC market, especially as more companies have announced plans to enter this space? How do we forecast the potential market size of the AEC market over the next two to three years? Beyond AEC, what other major data transmission opportunities do you see that we could thrive over in the next one to two years? Next question is in Chinese, so I won't read over that one.
But lastly, any potential opportunities we have observed recently that could further enlarge AEC's total addressable market? How are we seeing the competitive landscape for AEC so far? So in summary, just what we're seeing in AEC and what we expect to see in AEC is kind of the main summary of the questions that we've received so far. For this one, I'd like to hand it over to Roger.
Wow, there are so many questions focused on the AEC. Okay, let me try to answer as much as I can. We view the AEC market as a structural growth driver in the AI data center buildout. While tray-to-tray connections will continue to expand, the next leg of growth clearly shifts to rack-to-rack deployments, where AECs are becoming the preferred solution before customers opt for longer-reach options. This transition significantly expands content value.
And although more companies are announcing plans to enter, the barriers to high-speed AEC design, manufacturing, and qualification remain very high. So only a handful of credible suppliers are positioned to win share. That said, we see it as inevitable that other suppliers will eventually join the ecosystem, which should help expand the overall market in a virtuous cycle of growth. At the same time, we continue to invest in DAC development. We expect healthy growth here as well, with DACs making a comeback in short-reach tray-to-tray connectivity, where they remain the most cost- and power-efficient option. In parallel, we also see opportunities in adjacent areas such as CPO and CPC uptake and advanced high-power busbars, which broaden our role beyond cables into full system-level solutions. In short, AEC is moving from a tray-to-tray story to a rack-to-rack growth engine.
DACs continue to provide a steady contribution, and our broader role may position us to capture opportunities across both data and power as AI architectures evolve.
Thank you, Roger, and thank you to those that have asked about the AEC business so far. Taking things in sort of a bigger picture, I see a couple of questions to talk about our HPC business overall. Let me just read some of the questions. For everyone's benefit, please discuss the competitive landscape for rack busbars. Do you view PSUs, so power supply unit vendors, as potential competitors, given some of them have demonstrated such capabilities? Does management expect market share uptrend for power offerings for AI server racks in the coming years? What are the key drivers? Something else also AI-related. We're about to see the next generation of GPU and ASIC servers next year.
Could you help us summarize your power and data cable opportunities and potential changes in content value and profit margins? Also, if you could comment on the competition: will specification upgrades further strengthen our competitive advantages? And then finally, how should we expect demand for power cables and connectors to increase from the HPC and BBU, so battery backup unit, trends? Overall, just again, the bigger picture for our HPC business, including a bit of power in there as well. So for this particular question, I'd like to hand it over to Florian.
Okay. Thanks, Mike. Also here, a lot of questions. Well, first of all, I think the market for busbars and power distribution, and the whole landscape around it, is evolving very quickly. AI racks are now scaling from tens of kilowatts toward hundreds of kilowatts, so this is a significant change.
We also see PSU vendors experimenting with integrated systems, but in our view and in practice, their role is usually more complementary than directly competitive to us. Designing and manufacturing high-current busbars at scale requires specialized materials, thermal management know-how, and mechanical integration expertise, and we see that this is somewhat different from PSU core competencies today. For us, the bigger picture is that power, as I already said in the prepared statement, is becoming a critical bottleneck in AI infrastructure, and our ability to deliver safe, efficient, and scalable interconnect solutions positions us very well. As racks migrate to 400 volts, and eventually to 800-volt HVDC, demand for more advanced busbars, power whips, and rack-level systems will increase significantly.
Very importantly, our ability to offer high-speed data solutions alongside power solutions not only simplifies the customer's supply chain but also increases our share of wallet. It also keeps us closely aligned with the innovation cycles driving next-generation AI architectures. So I think this link between the data side and the power side is important. Being closer to the action allows us to continuously strengthen our competitive advantage and better position us for the long term. Looking ahead to the next generation of GPU and ASIC servers, which I think was also a question here, we expect the content value per rack to rise materially on both the power side and the data side. We will see spec upgrades, and these will increase technical barriers.
This will in turn reinforce our advantages in engineering, qualification processes, and system-level integration, which will help us support healthy margins and give us new opportunities. We therefore expect our share of power content in AI racks to rise over time, also supported by higher rack power density. Early engagement with hyperscalers on architecture design is also very important here, not only for the next generation but also for the generation after that. And the unique advantage, as I said, of combining data and power expertise is one of our key differentiators going forward. I hope that answered those questions. Back to you, Mike.
Thank you, Florian, for the insight, as well as to those in the audience who posed those questions.
Looking at something else besides HPC and AI for the moment, looking at spending: what's the CapEx intensity outlook through 2027? And then, in a similar vein on capacity and CapEx, your AI-related business seems to be growing faster than expected so far. How would we allocate our capacity, through in-house expansion or by seeking acquisition opportunities? Is our free cash flow sufficient to meet customers' rising demand in the next two years? So for this particular question, I'd like to hand it over to Charles.
Okay. Thank you, Mike. As for CapEx intensity, historically we've been in the single digits as a percentage of revenue, but I believe that will trend higher through 2027. Why? Because we will scale alongside the AI buildout. But we view this as strategic investment rather than simple capacity addition.
The priority is not just to add floor space, but to align our global footprint, automation, and technical capabilities with the needs of next-generation platforms, and we will continue to invest in growing our R&D and various support functions, ensuring our engineering, manufacturing, and supply chain organizations can all scale together. Funding will come through a balanced approach: primarily our solid cash generation, supplemented where appropriate by other sources. Our past use of ECB and DAP helped create the BizLink that exists today, and taking a long-term view is essential to how we envision the company's future. This effort has also attracted new stakeholder attention from investors, customers, and talent, reinforcing our ability to grow and compete at the highest levels.
In short, we'll keep investing to stay ahead of the curve, ensuring that we can support rising AI demand, capture greater content value in both power and data, and strengthen our competitive positioning for the long-term cycle ahead.
Thank you, Charles, for the color, as well as to the two who posed these capacity/CapEx-related questions. Let me see. I do want to have a chance to share with everyone about the non-HPC side of the business. What is our order outlook for the non-SPE, so semiconductor production equipment, industrial, and auto sectors? Have shipments bottomed? Does this mean there has been some recovery in the Europe business, i.e., factory automation and auto? Also talking about auto and FA, so factory automation: how is the trend looking so far? Any expected timeline on the inflection point? I guess it's asking about the timing of the inflection point.
How will automation contribute to earnings as we get closer to the inflection point? So in general, these are questions about our industrial business and, of course, auto as well. So for this question, I'd like to hand it over to Florian.
Yes. Thanks, Mike. A somewhat more difficult question, on the weaker side of the business. Well, let's take it in two parts: one is industrial, and the other is automotive. Let's start with the industrial part. The industrial business is the part of our portfolio that is non-HPC and non-SPE. We have seen a low-demand environment, but we can also see a staggered recovery, especially, for example, in healthcare and factory automation, where sales are increasing quarter on quarter. And this is the first time since the peak in the first quarter of 2023.
So we see some signs of recovery in Europe as well, though Europe remains weaker than other regions of the world. We may be transitioning from a prolonged bottoming phase toward more of an early-stage recovery, but it will not be a sharp one. We still see limited visibility, and in some areas inventory digestion is ongoing in parts of the channel. What helps us is external indicators such as the manufacturing PMIs of the big regions. The PMIs of the United States, China, and the Eurozone have remained broadly flattish, but if they show any sustained pickup, especially above the 50-point mark, this would historically signal expanding factory activity, factory investment, and improved OEM CapEx. These tend to precede a demand recovery by one or two quarters, usually.
So at the moment we continue to manage our working capital and production conservatively, and we are staying in close contact with our key accounts to monitor stock and supply chain behavior. It's also important to note that given the high-mix, low-volume nature of these businesses, any sustained demand recovery typically translates into stronger earnings conversion and improved cash dynamics. So we are cautiously optimistic now for a slight pickup in this area as well. Now, moving from industrial to automotive: the automotive market is currently, let's say, being flooded with new EV solutions, but we do not yet see a clear killer application that is pulling through large-scale demand in the way smartphones did for prior technology cycles. So we don't see that at the moment.
So while we remain cautious in our outlook, we do believe business conditions will not materially worsen from here. At the same time, we hesitate to definitively call a bottom, especially given the competitive intensity we see at the moment and the ongoing adjustments across supply chains. So our approach here remains disciplined: staying close to core programs, engaging selectively in new opportunities, but also ensuring that we are positioned to capture upside once demand comes back and visibly improves. So that's basically my comment on automotive. Back to you, Mike.
Thank you, Florian. And thank you to those in the audience today for asking about the non-HPC side of our business. We are running a little short on time, and I see that there are still some questions in the chat.
My IR colleague Jimmy and I will address these separately. But thank you, Roger and Florian. This concludes our Q&A session. A replay of today's conference call will be available on our IR website within 24 hours. If you have any further questions, please feel free to reach out to the BizLink Investor Relations team. We thank you very much for joining today's call. You may now disconnect.