BizLink Holding Inc. (TPE:3665)

Earnings Call: Q4 2025

Mar 6, 2026

Derrick Yang
Equity Analyst, Morgan Stanley

Good afternoon, everyone. Welcome to BizLink's 4Q 2025 earnings call. My name is Derrick Yang, and I'm the coverage analyst for BizLink here at Morgan Stanley. It's our honor to have BizLink's top management with us today to discuss the 4Q 2025 financial numbers as well as the outlook and industry dynamics. Without further ado, let me pass it to BizLink's Investor Relations Manager, Mike Wang, for a brief rundown of this call.

Mike Wang
Senior Investor Relations Manager, BizLink

Thank you, Derrick, and thank you to Morgan Stanley for hosting today's earnings call. Good afternoon, everyone, and welcome to our fourth quarter 2025 earnings conference call. My name is Mike Wang, Senior Investor Relations Manager. Joining me today are Roger Liang, our Chairman; Felix Teng, our Chief Executive Officer; Florian Hettich, our Chief Operating Officer; and Charles Tsai, our Chief Financial Officer. Our earnings results were released earlier today and are available on our investor relations website, where you can download the latest earnings materials, and through MOPS. Today's call will begin with Charles, who will review our financial highlights, followed by Florian for operational highlights. Felix will conclude with corporate updates before we move to the Q&A session.

You may submit your questions at any time through the public or private chat function, and we will address as many as time permits. Before we begin, please note that today's discussion may contain qualitative forward-looking statements based on our current expectations, which are, of course, subject to risks and uncertainties. Actual results may differ materially. Please refer to the safe harbor notice in our earnings materials for further details. This call is being recorded and will be available on our investor relations website within 24 hours. With that, I would like to turn the call over to Charles.

Charles Tsai
CFO, BizLink

Thank you, Mike. Before reviewing the quarterly numbers, I would like to briefly step back and frame how we think about BizLink financially as the company continues to scale and evolve. Over the past several years, we have been transforming BizLink from a collection of strong operating businesses into an integrated platform with formal capital governance designed to allocate capital prudently, manage risk, and convert growth into durable financial outcomes. As our mix shifts toward more complex system-level engagements, particularly across high-performance computing, infrastructure, and capital equipment, we believe it is increasingly important to anchor the financial discussion in earnings quality, cash conversion, and capital discipline rather than short-term volume dynamics. Our objective is not simply growth, but growth that is self-funded, resilient across cycles, and aligned with long-term shareholder value creation.

Our global teams executed strongly, achieving our eighth consecutive quarter-on-quarter increase in sales since bottoming two years ago. Quarter-on-quarter sales growth was primarily driven by HPC, which rose 16%. Capital equipment sales remained flat quarter-on-quarter, staying at the level we saw in 2Q 2025. Electric vehicle and factory automation sales continued to recover gradually after bottoming out near the start of last year. Healthcare sales remained at 3Q 2025 highs. The quarter reflected the early ramp phase of several new high-complexity programs, where production maturity, procurement optimization, and yield learning curves have not yet fully normalized; this resulted in our gross margin falling from 2Q 2025 levels. As these programs scale over the coming quarters, we expect meaningful cost absorption and margin improvement consistent with historical ramp dynamics.

OpEx was roughly stable quarter-on-quarter and stayed at year-to-date levels, showing that our cost efficiency efforts are flowing through into our P&L. During the second quarter of last year, we completed a review of certain co-invested production assets and recognized a non-operating, non-cash impairment to align carrying value with updated long-term cash flow assumptions. In the fourth quarter, we continued to apply the same conservative framework to reassess asset recoverability as market conditions evolve. The resulting adjustments were program-specific, non-cash, and accounting-driven, and do not reflect a deterioration in the underlying competitiveness or strategic relevance of the business. Our effective tax rate this quarter reflects a year-end adjustment that captures the full-year impact and therefore should not be interpreted as indicative of our ongoing tax rate. Operating leverage further boosted earnings, with our EPS continuing to grow quarter-on-quarter.

On a full-year basis, our total sales were stronger than expected. Sales were driven by HPC, which more than doubled year-on-year, and by capital equipment, which was up more than 50% year-on-year and more than double 2023 levels. Industrial sales recorded their second annual increase. Finally, we closed our XFS Communications acquisition on January 5, 2026 and began consolidating its financials on the same day. From a portfolio perspective, our industrial segment continues to provide a stable earnings base supported by high-mix, project-driven demand across automation and equipment applications. Over recent quarters, customer procurement behavior has begun transitioning from broad inventory correction toward normalized ordering patterns, with visibility increasingly tied to new program activity rather than restocking.

Our consumer-linked segments are being managed conservatively. While our automotive and electrical appliance businesses are not expected to be primary near-term growth drivers, they continue to play an important stabilizing role within the portfolio, where prior adjustments and disciplined planning help support operational resilience and limit volatility. In parallel, demand related to AI infrastructure remains structurally strong, and engagement levels continue to increase. We are managing this business with appropriate discipline given its step-function deployment profile. This diversified mix allows us to maintain financial stability while retaining exposure to long-term infrastructure investment cycles. What is changing? Growth across the portfolio is increasingly driven by higher system content, deeper engineering engagement, and earlier qualification cycles rather than pricing actions. As customer programs increasingly follow platform-based deployment cycles, a portion of our capability investment occurs earlier in the development phase to establish engineering and manufacturing readiness.

Importantly, these investments are reusable across multiple customer programs and generations, allowing revenue and content to scale once platforms enter volume deployment without requiring proportional increases in incremental capital deployment. Across multiple segments, customers are also engaging earlier in platform design and qualification cycles, modestly extending planning visibility while increasing execution requirements. As a result, growth increasingly reflects platform scaling rather than incremental capacity expansion, supporting more efficient capital deployment over time. At the same time, earnings quality continues to improve, with cash conversion and return on equity trending upward as operating leverage improves and working capital intensity declines. Growth is being absorbed efficiently rather than funded through balance sheet expansion. As systems scale, progress is rarely linear. Platform transitions and deployment schedules do not move in a straight line, even when long-term demand is clear. Our focus is therefore on execution reliability rather than maximizing short-term throughput.

This discipline is reflected in our capital allocation approach. We maintain tight working capital management, steady CapEx intensity, and selective M&A focused on capability enhancement rather than growth for its own sake. During the first quarter of 2026, we completed the acquisition of XFS Communications, Inc., adding optical interconnect capability consistent with our disciplined, capability-enhancing M&A approach. Consistent with our framework, integration is being executed methodically within our existing financial discipline. What does this mean going forward? NPI activity has become an increasingly important forward indicator, with many programs aligned to customer platform refresh cycles extending well into 2026 and even into 2027. During the quarter, BizLink achieved several milestones reflecting increased institutional maturity, including index inclusion and joining the Russell 2000. These steps reinforce governance, transparency, and balanced discipline as the company scales.

Taken together, we believe BizLink is positioned to continue scaling with financial resilience as customer architectures evolve. Florian will now provide an update on our latest quarterly operational takeaways.

Florian Hettich
COO, BizLink

Thank you, Charles. Yes, from an operational point of view, customer engagement is shifting from component execution toward more system-level collaboration as architectures become more complex. Power delivery, data connectivity, and mechanical integration increasingly interact across customer platforms. Higher performance requirements and tighter deployment schedules raise the cost of execution errors and increase the value of early engineering engagement. This increasingly includes both electrical and optical connectivity domains, where integration and manufacturability requirements are converging as system architectures evolve. What we are seeing is not a collection of isolated programs, but rather a repeatable operating pattern across multiple customers and industries. From an operational perspective, we frame our business not around individual products, but around how systems function as a whole. As customer architectures evolve, power delivery, data connectivity, mechanical engineering, and manufacturability increasingly interact with one another. Customers are no longer optimizing isolated components.

Instead, they are designing platforms that must operate reliably under higher density, tighter tolerances, and faster deployment timelines. In this environment, execution reliability and integration discipline become as important as technical performance. This shift changes how customers engage suppliers. Rather than sourcing individual parts late in development cycles, customers increasingly seek partners capable of contributing earlier in design and supporting global execution once platforms scale. Our operating model is built around supporting this transition, enabling us to participate where system-level constraints are emerging rather than where specifications are already fixed. As compute density rises, the challenge is no longer meeting individual specifications in isolation. Validation becomes multilayered, failure tolerance tightens, and execution consistency across regions becomes critical. In AI infrastructure, higher rack-level integration is driving redesigns in both power delivery and data connectivity.

Similar dynamics are emerging in semiconductor equipment, where subsystem complexity and engineering constraints are accelerating outsourcing toward integrated partners. As requirements increase, the universe of suppliers capable of supporting these platforms at scale naturally narrows. Our response has been to deepen system-level engagement through expanded NPI capability and tighter integration between engineering, box build, and system integration. Increasingly, customers engage us not only for components, but at the system level, where power, data, and mechanical constraints intersect. Once architectures are validated, continuity and execution reliability become critical. At the rack level, power delivery is becoming a primary architectural constraint as densities exceed 100 kW. Increasingly, we are being engaged earlier in rack architecture discussions, particularly around power distribution geometry and serviceability considerations. Across capital equipment platforms, we see a similar pattern.

OEMs facing engineering and cleanroom constraints are outsourcing more integrated subsystems, rewarding suppliers capable of multidisciplinary execution at scale. What does that mean going forward? As integration increases, manufacturability and repeatability become as important as design performance. Our focus is therefore on building execution capability tied directly to customer programs rather than expanding capacity ahead of demand. We believe this disciplined approach positions BizLink to scale alongside increasing system complexity without increasing operational risk. Felix will now provide updates on our latest quarterly corporate takeaways.

Felix Teng
CEO, BizLink

All right, thank you, Florian. Yeah, BizLink was founded on solving customer problems through engineering engagement and execution discipline rather than focusing on any single product or market. That operating philosophy continues to guide the company today. As systems scale, the constraints our customers face are changing. Power density, data movement, and integration complexity have become binding factors across data centers, semiconductor manufacturing, and emerging automation environments. We view this not as a short-term technology cycle, but as a long-term industrial transformation. Importantly, these constraints are not emerging only within AI infrastructure. Similar shifts toward higher system integration and earlier engineering collaboration are appearing across semiconductor equipment, industrial automation, healthcare, and advanced equipment platforms. This reinforces our view that the changes underway represent a broader industrial evolution. BizLink has never been organized around a single product category or technology cycle.

Instead, we frame our role around solving physical-layer constraints that emerge as systems scale. Across industries, periods of technical advancement eventually shift bottlenecks away from compute or core intellectual property and toward integration, power delivery, reliability, and manufacturability. These constraints determine how efficiently systems can be developed at scale. Our strategy is not to predict a single technological end state, but to position BizLink at the layer where these constraints consistently appear. By focusing on execution capability, engineering engagement, and global manufacturability, we aim to remain relevant across successive generations of platform transitions rather than any single technology wave. What is changing? Value creation is increasingly moving away from discrete components toward system-level integration and execution reliability. Suppliers capable of engaging early, qualifying globally, and executing consistently become more critical as architectures evolve.

BizLink's role remains consistent: solving physical-layer constraints as systems scale. How are we executing? NPI has become a strategic capability, enabling earlier participation in architectural decisions and positioning BizLink ahead of volume ramps across HPC and capital equipment platforms. We view AI infrastructure development similarly to the evolution of a modern city. Early phases prioritize scale, while later phases optimize efficiency and coordination within the existing footprint. Our role is to ensure the physical foundations of power, data, and connectivity support whatever technologies follow. As integration increases, we are preparing for higher levels of optical adoption over time by expanding our interconnect capability across both electrical and optical domains. We view this evolution not as a binary transition, but as a convergence of electrical and optical interconnect technologies addressing different system constraints as architectures scale.

Electrical and optical interconnects address different constraints and are expected to coexist for an extended period as architectures evolve. Our focus is therefore on building manufacturing and execution capability rather than betting on a single technological outcome. AI infrastructure represents a long-term industrial upgrade cycle rather than a single demand wave. The same interconnect disciplines underpin emerging applications such as robotics, autonomous systems, and intelligent manufacturing. We see this area as a long-dated extension built on top of today's infrastructure foundation, reinforcing the importance of disciplined execution across current platform transitions. Now let me turn the call over to Mike.

Mike Wang
Senior Investor Relations Manager, BizLink

Thank you, Felix, Florian, and Charles. This concludes our prepared statements. Now let us begin the Q&A session. Please type in your questions, and we will answer as many of them as possible in the remaining time. I want to remind everyone that there will be no forward-looking quantitative comments. Looking at the questions, let me see. Morgan Stanley's Derrick, Daiwa's Helen [uncertain], and BofA's Doris have all asked about our outlook, growth trajectory, and quarterly pacing. It seems there are still a lot of questions about how sensitive our HPC business is to a potential moderation in AI CapEx spending, and there have been continued concerns about an AI bubble.

For this more mid- to longer-term outlook, especially in regards to AI CapEx sustainability and the industry transition, I'll hand it over to Felix.

Felix Teng
CEO, BizLink

All right. Yeah, we view the current discussion around AI capital intensity as a natural progression of the infrastructure build cycle rather than a signal of weakening demand. You can think of this as a transition similar to building out a transportation network. Early investments focus on adding more vehicles, while later investments shift toward highways, traffic management, and efficiency, so the system can operate reliably at scale. Earlier phases of AI investment focused primarily on accelerating compute development. As clusters scale, the industry's priority increasingly shifts toward deployability, efficiency, and long-term operation of installed capacity. This changes spending composition more than spending direction. Operationally, we are seeing greater emphasis on rack-level architecture, power delivery, thermal coordination, and reliable high-speed connectivity, areas that become more critical as power density rises.

Even if the rate of CapEx growth normalizes over time, infrastructure content per rack continues to increase because higher-density systems require more sophisticated integration. As a result, our engagement is less tied to short-term procurement cycles and more aligned with multi-generation platform architectures. From our perspective, the key driver is not simply total CapEx, but how AI infrastructure is being built, and that trend is constructive.

Mike Wang
Senior Investor Relations Manager, BizLink

Thank you, Felix, for that colorful outlook, and thanks as well to the analysts who posted their questions. The next set of questions is about rack-level power and how that's becoming central. This is coming from, well, actually kind of the same group: Nomura's Kenny, BofA's Doris, Morgan Stanley's Derrick, and Goldman's Billy. Why is rack-level power becoming more important, and how does this expand the business opportunity? Since this one is more operational, I'd like to hand it over to Florian.

Florian Hettich
COO, BizLink

All right, thanks. Yes, I mean, power is indeed one of the major topics for future development. As AI workloads scale beyond, let's say, traditional server environments, rack power density is increasing significantly, in many cases now exceeding several hundred kW per rack. At those levels, power delivery is really becoming a system-level reliability constraint rather than, as I said earlier on, only a component decision. At lower densities, power design is similar to wiring individual homes within a city. At AI-scale rack densities, it becomes closer to, let's say, managing power distribution for an entire city block, where stability and coordination matter more than any single device.

Inside individual servers, power paths are relatively localized and standardized. At the rack level, however, operators must manage significantly higher currents and higher thermal loads. Redundancy design is important, and serviceability across the entire cluster segment is an important topic as well. A fault at this level can affect a large portion of the deployed compute simultaneously, which elevates validation and reliability requirements tremendously. Hyperscalers therefore increasingly define rack power architectures largely independently of individual PSU vendors, in order to maintain flexibility and avoid single-vendor dependence.

This shifts engineering responsibility toward integrated rack solutions, where, as I said, electrical, thermal, and operational considerations must be validated together. As architectures mature, qualified designs typically remain fixed for the platform life cycle. This dynamic increases the importance of early engineering engagement. That is the key, I think: early engineering engagement, and then consistent execution afterwards, which structurally raises the content in infrastructure as we scale AI systems. Also, as power density increases, infrastructure reliability increasingly determines how much of your compute power is usable and brought into performance.

This actually shifts value creation toward integrated rack-level solutions, where engineering execution and validation become, as I said, more important than individual component specifications.

Mike Wang
Senior Investor Relations Manager, BizLink

Thank you, Florian, for addressing those analysts' questions. For the next set of questions, again thanks to Morgan Stanley's Derrick, BofA's Doris, and Daiwa's Helen, the topic is our HPC strategy and positioning. How should investors, and of course analysts, understand our HPC strategy within AI infrastructure? This is a very big-picture sort of view, so I'd like to hand it over to our Chairman, Roger.

Roger Liang
Founder and Former CEO, BizLink

Well, our HPC strategy is built around enabling the continuous scaling of AI infrastructure. As AI deployments evolve from individual servers toward increasingly large accelerated clusters, the industry's primary challenges are shifting away from compute performance alone toward the physical infrastructure required to support system-level expansion.

In practical terms, the limiting factors increasingly become how power is delivered into dense racks and how data moves reliably and efficiently across growing clusters. Accordingly, our focus is on two infrastructure layers that directly determine how fast AI systems can scale: rack-level power delivery and high-speed data connectivity. In many ways, AI infrastructure is evolving from standalone computers toward something closer to an electrical grid, where power delivery and data connectivity determine how much compute can actually be utilized. On the power side, AI workloads are driving a significant increase in rack power density. As racks move toward much higher power levels, traditional data center power approaches require redesign to improve efficiency, thermal performance, and operational reliability.

This transition highlights the importance of high-current power connectivity, including power feeds, busbar systems, and integrated rack distribution solutions. Our long-standing experience in high-power interconnect and system integration positions us naturally within this architectural shift, where power delivery is becoming a defining element of next-generation AI infrastructure rather than a standardized supporting component. On the data connectivity side, the majority of current industry investment is focused on scale-out architectures, where thousands of accelerators must communicate across racks within large clusters. These deployments require solutions that balance bandwidth, power consumption, signal integrity, serviceability, and high speed at hyperscale. Our portfolio of direct attach and active electrical cable solutions supports these requirements by extending reach and maintaining performance efficiency across real-world data center environments.

As clusters continue to expand, reliable short- to mid-reach connectivity remains essential to enabling practical deployment at scale. We also participate in scale-up connectivity environments, including PCI-related solutions, which broadens our exposure across different system architectures and provides diversification as compute platforms continue to evolve. While scale-out deployments currently represent the largest area of infrastructure expansion, participation across multiple connectivity layers allows us to support customers throughout the system architecture. From a financial perspective, this positioning also supports greater durability in our operating model. Engagement increasingly occurs earlier in customer platform design cycles and extends across multi-generation architectures rather than single-product deployments. As system complexity increases, qualification requirements, validation cycles, and execution continuity become more important, which tends to lengthen program life cycles and improve visibility.

As a result, growth is increasingly driven by expanding system content and deeper engineering participation rather than short-term volume fluctuations, supporting more stable cash conversion and disciplined capital deployment over time. As interconnect requirements advance, the industry is exploring multiple technology approaches to address performance and efficiency needs across different distances and use cases. Our strategy is not centered on any single transmission medium, but rather on supporting customers' evolving connectivity architectures. Today, our disclosed activities are primarily focused on electrical interconnect solutions, where we have deep engineering and manufacturing capability. Recent developments have expanded our capability into adjacent areas of next-generation interconnect technology, including optical connectivity solutions that complement our established electrical expertise. This capability allows us to participate more broadly in evolving system architectures as customers balance performance, efficiency, and manufacturability across both electrical and optical domains.

Importantly, we view electrical and optical technologies as complementary rather than mutually exclusive. AI infrastructure increasingly applies different solutions to different portions of the system depending on distance, power efficiency, and operational considerations. By focusing on the underlying infrastructure challenges of delivering higher power density and enabling reliable high-speed data connectivity, we believe our opportunities remain relevant across technology transitions and future architectural changes. In summary, our HPC strategy is centered on supporting the physical infrastructure layers that enable AI clusters to scale. By addressing both rack-level power and high-speed data connectivity, we participate in areas that become more critical as AI deployments grow larger, denser, and more complex, while maintaining flexibility to evolve alongside future technology developments.

Mike Wang
Senior Investor Relations Manager, BizLink

Thank you, Roger. That's our company's big-picture view of HPC, and of course thanks to the analysts who posed those questions. The next one is going to be a little bit more financial in nature. Again, thanks to BofA Securities' Alice, Morgan Stanley's Derrick, and Bank of America's Doris. This is about our earnings growth mechanics: generally, what drives earnings expansion as the infrastructure scales? For this one, I'd like to hand it over to our Chief Financial Officer, Charles.

Charles Tsai
CFO, BizLink

Yeah, that's a good question. If we look closely, our earnings expansion is increasingly driven by structural content growth and system complexity rather than cyclical shipment recovery. As AI infrastructure scales, higher power density, tighter performance requirements, and deeper system integration are increasing the amount of data interconnect, power delivery, and engineering content required per deployment. Earlier computing cycles were largely unit-driven, where growth depended on the number of servers deployed. In contrast, AI architectures require significantly higher power density, tighter signal integrity requirements, and more integrated subsystem design. This increases the amount of data interconnect, power delivery, and engineering content required per rack. You can think of this transition as moving from simply adding more identical machines toward upgrading the electrical and structural infrastructure inside a building.

Even if the number of buildings grows steadily, each one requires substantially more internal infrastructure to operate at higher performance levels. As a result, our growth is influenced by three structural factors. First, content per deployment increases as power and data connectivity requirements scale. Second, product mix shifts toward higher-value system-level solutions, where reliability and engineering capability become more important than commoditized components. Third, earlier engagement in platform design allows us to participate across multiple generations of a program, improving operating leverage over time. While deployment volumes remain important, we believe our earnings progression increasingly reflects the growing technical intensity of AI infrastructure rather than pure cyclical shipment growth. I hope that answers the question.
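To make the content-per-rack arithmetic concrete, here is a minimal toy model. All figures (the flat rack count and the per-rack content values) are hypothetical illustrations, not BizLink guidance or actuals: even with unit volume held flat, revenue grows as interconnect and power content per rack rises across platform generations.

```python
# Toy model of content-driven growth. All numbers are hypothetical
# illustrations, not BizLink figures: unit volume is held flat while
# interconnect and power content per rack rises across generations.

RACKS_DEPLOYED = 10_000  # assumed constant number of racks per year

content_per_rack_usd = {  # assumed content value per rack, by generation
    "Gen 1": 2_000,
    "Gen 2": 3_500,
    "Gen 3": 6_000,
}

for generation, content in content_per_rack_usd.items():
    revenue = RACKS_DEPLOYED * content
    print(f"{generation}: {RACKS_DEPLOYED:,} racks x ${content:,}/rack = ${revenue:,}")
```

With deployments flat, revenue in this sketch still triples from Gen 1 to Gen 3 purely on content per rack, which is the unit-driven versus content-driven distinction Charles draws.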

Mike Wang
Senior Investor Relations Manager, BizLink

Yeah, thank you, Charles. That was our first set of financial-related questions; of course, thank you to the analysts who shared them. Well, the next one is also for you. This is from UBS' Terry, Morgan Stanley's Derrick, and of course Merrill Lynch's Alice. The question is about our operating leverage from AI infrastructure scaling. I think the summary is going to be something like: how much further can operating leverage improve as AI revenue grows? This is another financial-related question. Charles, please, if you may.

Charles Tsai
CFO, BizLink

Yeah, thank you. You know, as AI infrastructure evolves, growth dynamics are shifting away from traditional volume-driven expansion toward architecture-driven scale. Increasing compute density requires substantially higher power delivery, thermal management coordination, and system integration complexity at the rack level. This evolution increases the amount of data interconnect and power content required per system deployment. Importantly, much of the incremental content leverages existing engineering platforms, manufacturing processes, and qualification frameworks already developed for hyperscale customers.

In practical terms, once a platform architecture is engineered and validated, incremental deployments resemble adding additional floors to an already-designed structure rather than redesigning the foundation each time. As a result, revenue growth increasingly benefits from rising content per deployment rather than proportional increases in operating expenses. Engineering investments made in earlier platform generations now support multiple successive architectures, allowing incremental revenue to scale more efficiently. This creates structural operating leverage, where operating expenses grow at a slower rate than revenue as AI infrastructure adoption expands. From a financial perspective, this leverage reflects platform maturation and architectural progression rather than short-term pricing or cyclical utilization changes. Over time, we expect improvements in operating efficiency to be driven primarily by deeper system participation and increasing platform reuse across customer deployments.

Yeah, I hope that also answered the question.
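As a rough numeric sketch of the structural operating leverage described above: when revenue compounds faster than operating expenses, operating margin widens mechanically. The growth rates, starting values, and gross margin below are assumed for illustration only, not company guidance.

```python
# Hypothetical operating-leverage illustration (assumed values, not
# BizLink guidance): revenue compounds at 30% a year while OpEx compounds
# at 10%, so operating margin expands even at a flat gross margin.

revenue, opex = 100.0, 20.0  # assumed starting values (indexed units)
GROSS_MARGIN = 0.25          # assumed flat gross margin

for year in range(4):
    operating_income = revenue * GROSS_MARGIN - opex
    print(f"Year {year}: revenue {revenue:6.1f}, opex {opex:5.1f}, "
          f"operating margin {operating_income / revenue:6.1%}")
    revenue *= 1.30  # assumed revenue growth rate
    opex *= 1.10     # assumed OpEx growth rate
```

Under these assumptions, operating margin expands from 5% to about 13% over three years with no pricing or utilization change, which is the sense in which expenses growing slower than revenue creates structural leverage.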

Mike Wang
Senior Investor Relations Manager, BizLink

Thanks again, Charles, for taking the second one right after the first, and also to the analysts for those questions. Let me see, the next set of questions is from both Doris and Kenny, and again one is from Derrick. This is going to be on HPC value capture within the AI stack. In fact, we have received lots of questions around AI. As AI systems scale, I guess what the analysts are trying to ask is: where is the value increasingly being created within the infrastructure stack? This is another bigger-picture question, so I'd like to hand the floor over to Roger.

Roger Liang
Founder and Former CEO, BizLink

All right. As AI systems scale, value creation within the infrastructure stack increasingly shifts toward components that enable reliable operation in high-density compute environments. While advances in semiconductors drive computational capability, practical deployment of these systems depends on the power delivery, signal integrity, and system integration layers that ensure stable performance at scale. Higher compute density increases electrical and thermal sensitivity and data connectivity complexity, which elevates the importance of infrastructure solutions that manage power distribution and data interconnection reliability across entire rack architectures.

In many ways, this is similar to a large transportation network, where long-term value ultimately concentrates in the infrastructure that keeps the system operating reliably, such as highways, traffic coordination, and energy distribution, rather than in any individual vehicle moving through the neighborhood. BizLink participates in this portion of the value chain through solutions that support both power and data connectivity at the system level. As architectures evolve toward higher density and greater integration, the relative importance of these enabling layers increases alongside compute performance. Rather than depending on unit shipment growth, value expansion increasingly comes from higher content intensity per system and deeper integration into platform architectures. This positions the company to benefit from the continuous scaling of AI infrastructure, even as system designs evolve across multiple technology generations.

In this context, our role increasingly aligns with enabling infrastructure scalability rather than simply supplying discrete components.

Mike Wang
Senior Investor Relations Manager, BizLink

Yep, thank you, Roger. Another bigger-picture answer for the analysts and for the audience today. For the next one, there have been lots of questions around copper and optics lately. Of course, thank you to Ebiz's Ellen, Kenny, J.P. Morgan's Bill, Saturn Pax's Vicky, and Terry [uncertain]. Putting a lot of these questions together, the topic is copper and optics architecture expansion: how should everyone think about copper versus optical interconnects over the longer term? We did touch on this in Florian's prepared remarks, so I'll hand it over to Florian.

Florian Hettich
COO, BizLink

Yes, thanks. I mean, this has been an ongoing discussion for several years, and it has also been discussed quite controversially in the community over the last couple of weeks and months. We do not actually view this as a binary transition. In our view, it is less like replacing one technology with another. You can imagine a transportation system: you have streets, you have railways, you have airplanes, and you need all of them because you have different distances to bridge. Each of those transportation modes is optimized for different distances, efficiency requirements, and so on. The same is true as AI infrastructure scales.

Optical adoption is increasingly driven by system architecture requirements rather than isolated component substitution, which expands overall interconnect complexity rather than reducing electrical connectivity. Electrical and optical interconnects address different engineering constraints within the AI system architecture, particularly distance and power efficiency, but also latency and serviceability. Copper-based active electrical solutions remain highly efficient, especially for shorter-reach, high-density environments where reliability, integration, and power efficiency are the more critical factors. On the other side, optical technologies become increasingly important as bandwidth requirements and transmission distances go up. What we are observing is not so much a substitution as an architectural expansion.

This architectural expansion also introduces additional connectivity and distribution layers within the rack environment to support, as I said, serviceability, modular replacement, and operational reliability as system scale increases. As systems scale, the overall complexity of the interconnect system increases because designers optimize different parts of the system for different performance requirements. Having said that, as integration moves closer to high-power ASICs, manufacturability, thermal coordination, and yield management become even more critical challenges. Our focus is therefore on building manufacturing and engineering readiness that allows us to support evolving architectures rather than relying on a single technology outcome. That's also what we heard earlier on. I hope that answers the questions, and back to you, Mike.

Mike Wang
Senior Investor Relations Manager, BizLink

All right, thank you for that. Again, the whole debate on copper and optics is something that we believe is not a binary transition. For the next set of questions, and we did touch on this, high-voltage DC has drawn quite a few questions; again, thanks to Derrick, Doris, Kenny, and Billy. I think I also see some questions from the audience, so this will capture some of that as well. Why is the industry moving toward HVDC, and what does that mean for copper interconnects? This is going to be another operational topic. Florian, if you don't mind.

Florian Hettich
COO, BizLink

Yes, thanks. Well, one is the optics-versus-copper discussion, and the other one is the HVDC discussion. Yes, the transition toward higher-voltage power architectures is primarily driven by physics, not so much by technology preference. It's just, well, physics. We are now reaching power densities that traditional low-voltage data centers were not designed for and were never intended to support. As GPU clusters scale, rack power requirements are moving from roughly 10 kW historically toward 100 kW per rack, and this is really a tremendous jump.

At these levels, electrical efficiency and, as I already said, thermal management become the primary constraints on scaling, rather than compute performance or networking bandwidth. Delivering power at low voltage is similar to trying to move large volumes of water through many small pipes: the flow becomes inefficient, heat losses increase, and so on. Higher voltage allows the same energy to move more efficiently at smaller currents, improving scalability. Physics says that power losses increase with the square of current, meaning that delivering large amounts of power at low voltage requires extremely high current. High current introduces excessive heat generation, larger conductors, reduced efficiency, and increased reliability risks.

When we look at the scale of today's AI systems, the industry is therefore moving, or needs to move, toward higher-voltage power distribution architectures. In practice, some cloud service providers are adopting ±400-volt HVDC (high-voltage direct current) designs, while others are moving toward 800-volt HVDC, reflecting different platform architectures. Both approaches significantly reduce current levels, achieving the goal: they improve efficiency, lower thermal stress, and enable safer and more scalable power delivery across dense AI deployments.
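As a quick back-of-envelope check of the square-of-current point above, the sketch below assumes a hypothetical 100 kW rack load and a 5 milliohm end-to-end distribution resistance; both are illustrative values, not figures from the call. Moving the feed from 48 V to 800 V cuts current by a factor of about 17 and resistive loss by a factor of nearly 280.

```python
# Back-of-envelope check of the square-of-current claim (P_loss = I^2 * R).
# The 100 kW load and 5 mOhm path resistance are assumed illustrative values.

POWER_W = 100_000            # hypothetical rack power draw
PATH_RESISTANCE_OHM = 0.005  # hypothetical end-to-end conductor resistance

for voltage_v in (48, 400, 800):
    current_a = POWER_W / voltage_v                # I = P / V
    loss_w = current_a ** 2 * PATH_RESISTANCE_OHM  # loss grows with I squared
    print(f"{voltage_v:>4} V feed: {current_a:7.1f} A, "
          f"resistive loss {loss_w / 1000:6.2f} kW")
```

This is why halving the current (by doubling the voltage) quarters the conductor loss, and why ±400 V and 800 V architectures scale so much better than legacy low-voltage distribution.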

In this sense, higher-voltage HVDC distribution is not simply an upgrade; it is becoming a physical requirement for the continued scaling of AI infrastructure. A common market assumption, by the way, is that higher voltage automatically accelerates optical adoption and reduces copper usage. In practice, these solve different engineering problems: HVDC addresses power delivery efficiency, while interconnect technology choices are determined primarily by distance, latency, as I said, power consumption, and serviceability requirements. In fact, improved power architecture can even enhance operating conditions for electrical interconnects. Higher-voltage distribution improves, for example, grounding stability, reduces electrical noise, and lowers thermal stress within the rack environment.

These are all positive effects that improve signal integrity, allowing short-reach electrical connections to remain efficient and reliable even as data rates increase. AI system design is increasingly concentrating compute into tightly integrated rack environments where connection distances are extremely short. Within these ultra-short reaches, electrical interconnects will remain highly attractive because they provide, as I said already, lower latency, lower power consumption, and simpler serviceability compared with optical solutions. As a result, HVDC tends to push optical solutions outward toward longer-distance connectivity while preserving, and in some cases even reinforcing, copper's role inside racks and in near-rack domains.

Structurally, this reflects a broader industry transition from what was a compute-limited environment toward an energy-limited one. The constraint is energy, and as power delivery improves, system density increases, which drives more localized connectivity and greater overall interconnect complexity. The likely outcome is not replacement but a hybrid architecture, that's how we see it, in which both electrical and optical technologies coexist, each optimized, as I said, for different distances and efficiency requirements. In that environment, electrical interconnects continue to play a critical role in enabling scalable and energy-efficient AI deployment. I hope this answers the questions. Back to you, Mike.

Mike Wang
Senior Investor Relations Manager, BizLink

Thank you, Florian. We still have a lot of questions, but I think we'll try to address one on AEC. Our largest AEC customer guided to 50% year-on-year growth for its fiscal 2027. Do we have enough capacity to meet the majority of this increase in demand? I'd like to hand this over to Felix. This is the last question we will have time for today.

Felix Teng
CEO, BizLink

Yeah. Actually, you know, we are seeing quite a lot of pull and increasing demand from our customers. For the past year or two, we have actually been preparing to meet our customers' requirements, so this is not a surprise to us. There are a couple of locations where capacity increases are already in progress. Yes, we do have enough capacity to meet the increasing demand, and not just in AEC but in multiple sectors.

Mike Wang
Senior Investor Relations Manager, BizLink

Thank you, Felix, for that last question, and also to the investor who posted it. Again, thank you Roger, Felix, Florian, and Charles. This concludes our Q&A session. A replay of today's conference call will be available on our IR website within 24 hours. If you have any further questions, please feel free to reach out to BizLink's Investor Relations team; that would be, of course, me as well as Jimmy. We thank you very much for joining today's call. You may now disconnect.
