Good afternoon and good morning, everyone. Welcome to BizLink's first quarter 2026 earnings call, hosted by UBS. I'm Ally Chen, covering the industrial sector in Taiwan. It's our pleasure to have BizLink management joining the call today. Let me hand the call over to Mike Wang, Senior IR Manager, to introduce the management team and host the earnings call. Mike, please.
Thank you, Ally. Good afternoon, everyone, and welcome to BizLink's first quarter 2026 earnings conference call. My name is Mike Wang, Senior Investor Relations Manager. Joining me today are Roger Liang, our Chairman; Felix Teng, our CEO; Florian Hettich, our COO; and Charles Tsai, our CFO. Our earnings results, released earlier today, are available on our website, where you can download the latest earnings materials and access the call through MLPS. Today's call will begin with Charles, who will review our financial highlights, followed by Florian for operational highlights. Felix will conclude with corporate updates before we move to the Q&A session. You may submit your questions at any time through the public or private chat function; we will address as many as time permits.
Before we begin, please note that today's discussion may contain forward-looking statements that are based on our current expectations and subject to risks and uncertainties. Actual results may differ materially. Please refer to the Safe Harbor notice in our earnings materials for further details. This call is being recorded and will be available on our investor relations website within 24 hours. With that, I would now like to turn the call over to Charles.
Hi. Thank you, Mike. This quarter reflected a combination of product mix transition effects, early-stage infrastructure program ramps, and varying recovery conditions across portions of the portfolio. Within HPC, next-generation platforms continued ramping while portions of earlier-generation products normalized, creating temporary mix, utilization, and operating leverage effects during the transition period. In addition, new platforms typically begin with lower initial volumes and higher upfront engineering and operational costs before scaling efficiencies improve over time. Outside of HPC, industrial and automotive conditions continued improving gradually, supported by selective customer activity and pilot program engagement, while electrical appliances remained competitive across selected product categories due to industry overcapacity and customer competition. At the same time, we continue expanding operational readiness across several manufacturing sites to support future customer development, deployment requirements, and increasing infrastructure complexity.
While these efforts may temporarily impact near-term efficiency as operations move up the learning curve, they are intended to support long-term capability expansion and execution flexibility across the portfolio. This quarter also reflects an ongoing evolution across the AI infrastructure ecosystem. As deployment cycles become more synchronized and rack-level complexity continues to rise, customers are placing greater emphasis on execution capability, interoperability, and deployment reliability. Platform transitions are also becoming less sequential and more overlapping. Earlier infrastructure upgrade cycles often followed relatively clean transition paths; today, deployments increasingly coexist across multiple infrastructure generations and architectures simultaneously. Deployment timing is no longer determined only by silicon availability, but also by rack-level validation, power infrastructure readiness, thermal optimization, and interoperability requirements. Supply chains are operating with less buffer inventory and fewer expedited orders than in prior periods.
While this creates greater visibility into deployment timing differences during transitional phases, we believe it also reflects a healthier and more operationally disciplined infrastructure environment over the long term. We believe these trends favor companies capable of supporting increasingly complex infrastructure environments across multiple deployment layers and evolving architectures, and that BizLink is positioned well as these infrastructure requirements continue to scale. How do we frame the business? From a portfolio perspective, our business remains balanced between long-cycle infrastructure growth areas and more stable cash-generating businesses. Our HPC and capital equipment businesses continue to represent the primary long-term growth drivers of the company, supported by increasing system complexity, deeper customer engagement, and expanding infrastructure requirements. At the same time, our industrial, automotive, and electrical appliances businesses continue to provide portfolio diversification across broader end markets and customer bases.
This balanced portfolio structure supports both long-term growth participation and operational resilience across varying industry cycles. We are managing these businesses conservatively while continuing to focus on engineering differentiation and customer diversification. Overall, our portfolio continues to provide both durability and exposure to structural infrastructure investment cycles. Several important industry shifts are becoming visible. First, platform transitions are becoming less sequential and more overlapping. Earlier AI infrastructure cycles often followed relatively clean upgrade paths. Today, deployments coexist across multiple infrastructure generations and architectures. This matters because deployment timing is no longer determined only by silicon availability; it is influenced by rack-level validation, power infrastructure readiness, thermal optimization, and interoperability requirements. Second, AI infrastructure architectures themselves are becoming more diverse across interconnect technologies. The industry continues to evaluate multiple pathways simultaneously, including both copper and optical approaches, depending on deployment requirements, performance targets, and system architectures.
At the same time, rack-level power architectures are also continuing to evolve as power density rises, including different approaches to power distribution and system-level infrastructure design. We view this as positive for BizLink. As architectures diversify, the importance of connectivity, integration, and interoperability increases. Customers require partners capable of supporting evolving infrastructure environments rather than depending on a single technology pathway. Third, we are seeing a continued increase in raw material costs, particularly in copper and PVC. This has led to ongoing pricing discussions across portions of the supply chain. We are selectively adjusting pricing where appropriate while continuing to support customer ramp requirements. Finally, we are continuing to invest in capacity and operational readiness across multiple sites, including Batam, Hainan, Penang, Johor, and Vietnam. These investments are intended to support long-term structural growth and strengthen our ability to execute complex customer programs globally.
Operationally, we are continuing to move up the learning curve across several strategically important programs and manufacturing ramps. In HPC infrastructure, customer requirements continue shifting toward higher current density, tighter integration, and more complex rack-level deployment requirements. This includes both power and data infrastructure. As systems scale, engineering complexity is increasing faster than unit volume. This is an important distinction. The challenge is no longer simply shipping more units; it involves validation depth, interoperability, thermal reliability, safety, and deployment execution across dense systems. Within capital equipment, we continue to see healthy long-term customer engagement despite a more measured near-term investment cadence in certain areas of the semiconductor ecosystem. Demand visibility remains relatively healthy, particularly for more complex box-build and subsystem integration projects. Across our global operations, we continue strengthening execution discipline, manufacturing readiness, and engineering collaboration.
While these investments may create temporary inefficiencies during ramp phases, they position us well for future scaling opportunities. Looking ahead, we believe the broader AI infrastructure opportunity remains substantial, but it is increasingly infrastructure-constrained rather than silicon-constrained. As rack-level complexity rises, deployment execution becomes a key differentiator. This benefits companies with engineering depth, global operational capability, interoperability experience, validation capability, and system-level infrastructure participation. We believe BizLink is positioned well in these areas. Importantly, we continue approaching the current environment with financial discipline. We are preparing our capability and capacity for long-term infrastructure scaling rather than reacting to short-term deployment fluctuations. While quarterly timing variability may continue, we remain focused on strengthening our long-term positioning across AI infrastructure, semiconductor system automation, healthcare, and next-generation mobility applications. Florian will now provide an update on our latest quarterly operational takeaways.
Thank you, Charles. Across our operations, the biggest change we are seeing is that infrastructure complexity is now increasing faster than deployment volumes. This is true across HPC infrastructure, semiconductor systems, and within automation and advanced industrial applications as well. The GPU may attract the headlines, but the infrastructure surrounding it determines how quickly and reliably systems can scale. That includes power delivery, thermal management, interconnect routing, serviceability, and validation reliability. These requirements are becoming more demanding quarter after quarter. We view BizLink as an infrastructure execution and integration company rather than purely a component supplier. Across multiple businesses, customers are engaging with us earlier in their development cycles, and in many cases we are participating during qualification, validation, routing optimization, manufacturability reviews, and deployment preparation phases. This is especially visible in HPC infrastructure and semiconductor systems, where customers' requirements now extend beyond standalone components toward system-level execution capabilities.
Our global footprint also continues to become more strategically important. Customers prioritize geographic flexibility, manufacturing redundancy, and operational resilience as part of their supplier selection process. One of the clearest shifts is that AI infrastructure is becoming rack-centric. Historically, much of the industry's focus was on server-level optimization. Today, power distribution, cooling, and interconnect architectures are being designed at the rack level. As rack power continues to rise, the industry is evaluating multiple evolving architectures simultaneously, including centralized power distribution, large rack busbars, liquid-cooled power infrastructure, and modular power delivery systems. At these power levels, the challenge is no longer simply delivering compute. The challenge is to deliver the infrastructure around the compute. In many cases, infrastructure readiness is now becoming just as important as compute availability itself. This is an important industry transition. We are also seeing increasing architectural fragmentation across optics and copper interconnect technologies.
Multiple optical approaches continue to coexist, and standards are evolving. Rather than betting on a single pathway, our focus remains on supporting the connectivity and integration layers across evolving architectures. This includes high-speed copper, optical interconnect infrastructure, structured connectivity, routing optimization, and power delivery integration. Operationally, our teams continue focusing on execution readiness and capability expansion. Within HPC infrastructure, we continue scaling our participation across both power and data connectivity layers. This includes continued customer engagement related to higher-current power delivery systems and next-generation interconnect solutions. Within semiconductor systems, customer engagement remains active across subsystem integration, EDS and FDS-related opportunities, and complex box-build environments. We continue to see customers seeking greater outsourcing capability and global manufacturing support. In Penang, our primary challenge remains operational scaling and labor capability development rather than demand generation. Similar operational learning curves continue across other expansion sites as well.
Within factory automation and robotics, activity levels are continuing to improve gradually. We continue seeing increasing customer interest in applications requiring higher durability, higher data transmission capability, and more complex motion environments. Importantly, many of these applications require engineering capabilities similar to those seen in HPC infrastructure, including signal integrity, EMI management, and higher-density connectivity environments. Looking forward, we believe the industry remains in the early stages of a much broader infrastructure scaling cycle. The key difference today is that infrastructure scaling is becoming cumulative rather than purely generational. Platform transitions overlap rather than replace one another. This extends deployment cycles while also increasing infrastructure complexity. We believe this supports longer-duration opportunities across power infrastructure, interconnect infrastructure, subsystem integration, and structured connectivity. At the same time, execution requirements are becoming more demanding. As architectures evolve, validation depth and interoperability become the gating factors for deployment.
We believe our engineering culture, operational footprint, and cross-domain experience position us well for this environment. Felix will now provide updates on our latest quarterly corporate takeaways.
All right. Thank you, Florian. We believe the AI infrastructure industry is entering a new phase. The first phase focused primarily on compute acceleration and deployment speed. The next phase focuses on infrastructure efficiency, scalability, and interoperability. This is a natural evolution. As AI systems become larger and more distributed, infrastructure coordination becomes more important. Expanding AI infrastructure resembles expanding a city's power grid and transportation network rather than simply adding more compute boxes. That transition creates both complexity and opportunity. BizLink's role within this ecosystem continues to evolve. Historically, many viewed connectivity primarily as a supporting component. Today, connectivity acts as a coordination layer across compute, power, cooling, and deployment infrastructure. This is especially important because the industry is no longer scaling through a single architecture pathway. Multiple compute, optical, and power architectures continue to coexist and evolve simultaneously. We believe this increases the value of interoperability.
Our objective is not to depend on a single architecture winner. Our objective is to help customers scale reliably across evolving infrastructure environments. One of the most important industry shifts is that infrastructure deployment itself is becoming an engineering challenge. As rack power rises and architectures become more complex, customers require deployment reliability, operational resilience, faster validation cycles, tighter system integration, and scalable manufacturing capabilities. At the same time, AI infrastructure deployment is becoming global. Customers are balancing performance, supply chain resilience, geopolitical flexibility, and operational redundancy. This increases the importance of companies capable of supporting customers across regions and across multiple infrastructure layers. We are also seeing increasing convergence between AI infrastructure and other advanced application domains, including automation, robotics, autonomous systems, and advanced industrial systems. While these areas remain early, many are beginning to require similar characteristics: high power density, low latency, high reliability, compact integration, and continuous operational durability.
Our focus remains execution-driven and long-term oriented. We continue strengthening our engineering capability, operational footprint, and customer engagement depth across key growth areas. Within optics, we continue expanding our infrastructure and integration capabilities while remaining disciplined regarding where we participate within the ecosystem. Within power infrastructure, we continue preparing for higher-density architectures and more complex deployment environments. Within automation and advanced industrial applications, we continue positioning ourselves around higher-value, engineering-intensive solutions rather than purely volume-driven opportunities. Importantly, we continue approaching growth with financial discipline and operational pragmatism. As we have discussed previously, our objective is not simply to grow faster. Our objective is to build durable long-term capabilities that can scale across multiple infrastructure cycles and customer generations. Looking ahead, we believe AI infrastructure growth is becoming both cumulative and content-driven.
As deployment architectures become more complex, infrastructure content persistently rises, interoperability requirements increase, engineering depth becomes more important, and execution capability becomes more valuable. We believe these trends align well with BizLink's long-term direction. Importantly, we do not believe the industry will converge quickly toward a single architecture pathway.
Multiple approaches are likely to coexist for years across optics, power distribution, and deployment topologies. This creates complexity for the ecosystem, but also opportunities for companies capable of bridging multiple environments effectively. Our role is to help customers deploy complex infrastructure environments more reliably, efficiently, and at greater scale. We remain focused on execution, customer intimacy, operational resilience, engineering capability, and disciplined long-term growth. We believe this foundation positions BizLink well for the next phase of infrastructure scaling. Now let me turn the call over to Mike.
Thank you, Felix, Florian, and Charles. This concludes our prepared statements. Now let us begin the Q&A section. Please type in your questions, and we will answer as many of them as possible in the time remaining. I want to remind everyone that there will be no forward-looking quantitative comments. For the first question, we have questions from MasterLink Alice, Morningstar Eric, KJ Terra, and, of course, BPA Doris. These all have to do with our margins in the first quarter. The summary of the question is: margins were weaker than expected. What specifically drove the compression, and how should we think about margin recovery? For this one, I'd like to hand it over to Charles.
Okay. Thank you, Mike. Maybe let me take a step back and address this holistically, because there are several moving parts, but they all tie back to one core dynamic. What we are seeing is primarily a transition between product generations, with a more normalized and increasingly synchronized supply chain, rather than a structural change in demand or profitability. On the product side, our legacy solutions are declining as expected, while next-generation programs are still in the early stages of ramp. In that phase, volumes are not yet at full scale, utilization is still building, and the mix has not yet fully shifted to higher-value content. That naturally creates temporary pressure on margins and operating leverage during this transition period.
One important difference versus prior upgrade cycles is that these transitions are now becoming increasingly compressed and overlapping. In earlier generations, products typically experienced a long ramp and monetization period before the next major transition began. Today, next-generation platforms are being introduced earlier, while existing platforms are still scaling. As a result, multiple generations can coexist simultaneously, creating a more complex operating environment from both a mix and an operational standpoint. During this overlap period, we might see a combination of legacy pricing normalization, early-stage ramp inefficiency for newer programs, and elevated operational complexity as multiple product generations are supported in parallel. At the same time, the supply chain itself has become more disciplined, with significantly less buffer inventory and fewer expedited orders compared to prior periods.
Demand is now much more closely aligned with actual deployment schedules, which means timing differences are more visible in near-term developments. Importantly, we do not view this as a structural demand issue. In many cases, it reflects customers accelerating infrastructure planning cycles and preparing earlier for next-generation deployments as system complexity and performance requirements continue to increase. As new programs continue to scale, we would expect the benefits to come through over time, including improving utilization, better operating leverage, and a more complete shift toward higher-value content. From our perspective, the current margin profile is best understood as part of this transition and overlap phase within a broader structural infrastructure growth cycle, rather than as a change in the underlying long-term trajectory of the business. I hope that answers the question. Back to you, Mike.
Thank you, Charles. Let me see. The next couple of questions are still about HPC: How should investors think about the business positioning within AI infrastructure as architectures continue evolving? Thanks to Doris, Kenny, Derek, and, of course, Helen. Since this is a bigger-picture sort of question, I'd like to hand it over to Roger.
Okay. Our HPC strategy is centered around enabling the physical infrastructure required to scale increasingly dense AI deployments.
The early phase of the industry focused primarily on compute acceleration itself. Today, the challenge is increasingly shifting toward how these systems are powered, interconnected, validated, and deployed reliably at scale. As AI infrastructure becomes more rack-centric, the importance of power delivery, thermal coordination, high-speed connectivity, and interoperability continues to increase. In many cases, infrastructure readiness is becoming just as important as compute availability itself. Our positioning reflects this evolution. We participate across three key infrastructure layers: copper interconnect, optical connectivity, and power delivery, which allows us to support customers across multiple deployment architectures and evolving technology pathways. Importantly, our strategy is not dependent on any single architecture outcome. Multiple power, optical, and interconnect approaches are likely to coexist for years, depending on deployment requirements, performance targets, and operational considerations.
As a result, we believe value creation increasingly shifts toward companies capable of supporting interoperability, validation, integration, and execution across increasingly complex infrastructure environments. We believe BizLink is positioned well in this area, given our engineering depth, manufacturing footprint, and long-standing customer engagement across multiple infrastructure layers. Thank you.
Thank you for that, Roger. That's the big-picture view, and a question we get asked pretty much every quarter. Now, for the next set of questions, a lot of people are still quite curious about the optics side of the business, especially given the deal that we did earlier in the quarter, XFS. How should we think about the optical business opportunity and our ramp trajectory? Thanks to Ally, Amber, Helen, Vicky, Doris, and Kevin. For this one, I'd like to hand it over to Felix.
All right. Yeah, talking about XFS, or Sin Fu Shan. We consolidated the business starting early January. The first quarter this year largely reflects a baseline run rate that is broadly consistent with 2025. What we are seeing more recently, particularly into April, is a meaningful pickup in activity, which suggests that we are beginning to move beyond that initial baseline and into an early scaling phase. That said, we would still characterize this as an early stage of development rather than a fully established ramp, and we would prefer to see a few more quarters of consistency before drawing firm conclusions on the trajectory. Stepping back, the strategic rationale for XFS continues to strengthen as AI infrastructure scales. Bandwidth requirements, fiber density, and interconnect complexity are all increasing meaningfully across next-generation systems.
At the same time, the industry is increasingly recognizing that optical connectivity infrastructure itself is becoming an important scaling layer within AI deployments. While multiple architectures are still evolving across optics and interconnect technologies, we believe the long-term direction is clear: higher-performance systems will require significantly greater optical density, more sophisticated fiber management, and increasingly scalable optical integration capabilities. As fiber density rises, routing, managing, and integrating those fibers becomes operationally and mechanically more complex. This creates a growing need for high-density fiber assemblies, precision optical integration, and scalable manufacturing capability. Importantly, scaling this ecosystem is not only about the optical engines themselves. As optical density rises, the industry also needs supporting infrastructure across fiber supply, connector ecosystems, routing architectures, assembly capacity, and qualification processes to scale together.
We believe these ecosystem synchronization requirements may increasingly become an important consideration as next-generation AI infrastructure deployments continue expanding. XFS strengthens our capabilities in exactly these areas, particularly in fiber assembly and optical integration, which positions us to support the next phase of optical interconnect infrastructure development. While near-term revenue contribution will still depend on the pace and timing of customer deployments, strategically we view XFS as an important long-term capability expansion within our broader HPC infrastructure portfolio. More broadly, this also strengthens our overall HPC positioning, where we now participate across three key infrastructure layers: copper interconnect, optical connectivity, and power delivery, allowing us to support increasingly complex AI infrastructure environments as system architectures continue to evolve. Now back to you, Mike.
Thank you, Felix, and thank you to those who submitted a similar set of questions. We have been talking a lot about infrastructure complexity, so this is something we do need to stress. For the next set of questions, again, thank you to Kenny, Eric, Callie, and Billy: Why are power delivery, interoperability, and validation becoming more important? Since this concerns operations, I'd like to hand it over to Florian.
All right. Thanks, Mike. As we already laid out, I think the biggest industry change is that AI infrastructure is increasingly becoming more rack-centric rather than server-centric. As rack power and system density rise, the challenge is no longer simply deploying more compute. The challenge is reliably operating increasingly complex infrastructure environments around the computing systems. At the power levels we see currently, customers must simultaneously manage many things: power distribution, thermal management, signal integrity, redundancy, serviceability, and, as I said already, interoperability. All of this has to be managed across dense rack environments.
You can imagine that a failure at the infrastructure layer can affect a large portion of the deployed compute simultaneously, which in turn elevates validation and reliability requirements materially. At the same time, multiple architectures continue to evolve simultaneously across different technologies such as optics, copper interconnects, and power distribution. Customers, therefore, in our view, increasingly prioritize interoperability and execution capability rather than dependence on a single technology pathway. As a result, qualification cycles are becoming much deeper, and deployment readiness is becoming more operationally intensive throughout the process. In many cases, infrastructure readiness is now becoming just as important as compute availability itself.
We believe these trends favor companies with engineering depth, validation capabilities, and operational execution experience across multiple infrastructure layers, and we believe that we are well-positioned in that respect. I hope this answers the question. Back to you, Mike.
Thank you, Florian, and thank you to those who submitted questions. The next topic ties back to our margins as well, and to the whole evolution of the AI infrastructure cycle. The question is: Where are we currently in the AI infrastructure investment cycle? Thanks to Derek, Billy, Helen, and, of course, Vicky. For this one, I would like to hand it over to Felix.
All right. Yeah. We believe the industry is transitioning from a phase focused primarily on accelerating compute deployment toward a phase increasingly centered around scalable infrastructure operations. Earlier deployment cycles were more sequential, where one platform generation largely replaced the previous generation before the next major transition began. Today, platform cycles are becoming increasingly overlapping and cumulative. Customers are scaling existing deployments while simultaneously preparing for next-generation architectures. This creates a more complex operating environment but also extends the overall infrastructure investment cycle. Importantly, infrastructure scaling today involves much more than semiconductor availability alone. Deployment timing increasingly depends on rack-level validation, power infrastructure readiness, thermal optimization, interoperability, and operational deployment capabilities. As a result, we believe the industry remains in the early stages of a much broader infrastructure scaling cycle, one that is becoming increasingly infrastructure-intensive and operationally complex over time. To sum up, I think we are, again, at a very early stage of the overall AI infrastructure investment cycle. Back to you, Mike.
Thank you, Felix, and thanks, of course, to those who asked about that particular topic. Now, I see there is a question posted in the chat. This also has to do with operating leverage and long-term profitability; thanks to Eden. The question is: How should investors think about operating leverage given higher engineering intensity and optics investments? In addition to Eden, thanks to the panelists who presented this question, including Kenny, Terry, and Derek. I'd like to hand this one over to Charles.
Okay. Thank you, Mike. I think we can look at it this way: we continue to believe that the long-term operating leverage opportunity remains intact, although the path may not be perfectly linear quarter to quarter, due to ongoing platform transitions and earlier-stage program ramps, as we just mentioned. As AI infrastructure scales, value creation increasingly shifts toward higher-content and more engineering-intensive solutions across power delivery, high-speed connectivity, subsystem integration, and interoperability layers. Importantly, many of these newer programs leverage engineering platforms, manufacturing processes, and qualification frameworks that we have already developed over multiple customer generations. During early-stage ramps, we typically experience temporary inefficiencies related to utilization, engineering support, operating readiness, and qualification activity.
However, as volumes scale and architectures mature, we would expect operating leverage to improve through greater platform reuse, better utilization, and a more complete shift toward higher-value content. Structurally, we continue to believe the business is moving toward higher operating efficiency over time, although near-term variability can occur during transitional periods such as the one we are currently navigating. One thing to note is that, at the operating profit level this quarter, we still demonstrated strong OpEx discipline even as we invest in these capabilities. I hope that answers the question. Back to you, Mike.
Thank you, Charles, and thank you to Eden and the analysts who posted those questions. The next one is something we want to address, given that this narrative has been present since early this year: copper versus optics. We believe in coexistence rather than replacement. The question is: How should investors think about copper versus optical alternatives in the longer term? Is optics replacing copper? For this one, thanks again to Kenny, Ally Chen, Kevin Norris, Vicky, and Amber. I'd like to hand it over to Florian.
Right. Thanks, Mike. This is probably one of the most discussed questions these days. As we have said before, we do not view this as a binary transition where one technology simply replaces another. As AI infrastructure scales, different interconnect technologies are increasingly being optimized for different applications and requirements: in terms of distance, power envelope, latency, and operational considerations, depending on the deployment architecture. The two technologies coexist. For example, copper-based electrical interconnects remain highly efficient for shorter-reach, high-density environments, where power efficiency, latency, serviceability, and integration simplicity are critical requirements.
Optical technologies become more important as bandwidth requirements grow and transmission distances lengthen. What we see is not so much a substitution as increasing architectural diversification and interconnect complexity, as we noted in our prepared remarks. In practice, AI systems are becoming hybrid environments where different technologies coexist across different portions of the infrastructure stack. Importantly, as optical density increases, the challenge extends beyond the optical engines themselves to routing and fiber management, connector ecosystems, assembly scalability, qualification processes, and operational serviceability.
All of these are becoming increasingly important, which is one reason the optical infrastructure layer itself is becoming strategically more important as deployments scale. Our approach, as already pointed out, is therefore not centered on predicting a single architectural winner. Our focus remains on supporting evolving infrastructure environments across all technologies: copper interconnects, optical connectivity, structured routing, and power delivery integration. In a nutshell, we expect that, at least for the foreseeable future, both technologies will coexist in the infrastructure. Thanks, back to you, Mike.
Thank you, Florian, and of course thank you to the analysts who posted those questions. I see quite a few questions around CapEx. This one asks: How should we think about CapEx and manufacturing expansion over the next few years, given that much of what we have been sharing speaks to longer-term capability building for BizLink? Thanks again to Kevin Norris, Taryn, and Kenny. I'll hand this over to Charles.
Okay. Thank you, Mike. That is a very important question given our growth trajectory. Our overall approach remains disciplined and visibility-driven rather than speculative. The investments we are making today are primarily intended to strengthen operational readiness, engineering capability, and development flexibility across multiple long-term infrastructure growth phases. This includes expansion activities across sites such as Batam, Penang, Johor, and Vietnam, where we are preparing for increasing system complexity, higher infrastructure density, and broader customer deployment requirements. More importantly, we manage our footprint as an integrated global operating system rather than as isolated factories. Different sites support different roles across engineering, new product introduction, high-mix execution, and scale manufacturing.
In sum, we continue pacing investments alongside customer program visibility and operational maturity rather than purely against short-term demand fluctuations. As a result, we believe our current expansion strategy remains financially disciplined while still supporting long-term infrastructure scaling opportunities. I hope that answers your question. Back to you, Mike.
Thank you, Charles, and of course to the analysts who shared those questions. For the next question, it seems that every quarter or two we are asked about M&A. Thanks again to the analysts who asked how we should think about our long-term M&A strategy. This is a bigger-picture question, so I'll hand it over to Roger.
Our long-term M&A strategy remains unchanged. Our M&A philosophy remains capability-driven and strategically selective. Historically, we have focused on acquisitions that strengthen our engineering, customer engagement capabilities, manufacturing footprint, or participation in structurally attractive infrastructure markets. Importantly, our objective is not simply adding revenue scale. We prioritize opportunities that enhance our long-term position within evolving infrastructure ecosystems, where comprehensive interoperability and execution capabilities are becoming increasingly important. We also continue to maintain financial discipline regarding valuation, integration capability, and long-term strategic fit.
As the industry continues evolving across AI infrastructure, semiconductor systems, automation, and next-generation connectivity, we believe there may continue to be selective opportunities to strengthen our broader capability portfolio over time. I hope this answers your question. Thank you very much.
Thank you, Roger, and of course to those who continue to be curious about our long-term M&A strategy. There are some questions about order visibility, which I believe relate to our HPC business: How should we think about visibility compared to prior hardware cycles? For this one, I'd like to hand it over to Felix.
All right. We believe AI infrastructure differs from many prior hardware cycles because deployment planning increasingly occurs much earlier and involves significantly deeper architectural coordination across the ecosystem. Power architecture, thermal coordination, interoperability, rack-level validation, and deployment readiness often need to be defined well before systems enter large-scale production. As a result, supplier engagement increasingly begins earlier and extends across multiple deployment generations rather than being tied purely to short-term cycles. While quarterly deployment timing can still vary depending on customer schedules and infrastructure readiness, the underlying infrastructure build-out itself appears increasingly multi-year and cumulative in nature. For this reason, we view the industry less as a traditional cyclical hardware market and more as a longer-duration infrastructure scaling cycle with overlapping deployment phases and continuing efficiency upgrades over time. Back to you, Mike.
Thank you, Felix. Given the time, let's take a look at the questions posted in the public chat as well. There is a question: Can you share the technical differences between Power Whip and busbar solutions in traditional 48 V or 54 V systems versus 800 V HVDC systems, and specifically, how much would that technical gap increase the ASP? There were also some questions posted earlier about HVDC, thanks to Doris, Derek, Kenny, and Billy.
This answer also addresses that broader question: Why is the industry moving toward HVDC, and what does this mean for BizLink? This will be the final question for the call. I'd like to hand it over to Florian.
All right. Thanks. The transition toward higher-voltage power architectures is driven primarily by physics and scaling efficiency rather than by a particular customer technology preference. As infrastructure scales up, rack power density increases significantly. At these power levels, delivering large amounts of power efficiently using traditional low-voltage approaches becomes increasingly difficult due to high current levels, thermal load, and efficiency limitations. Higher-voltage architectures significantly reduce the current required.
This in turn improves efficiency, lowers thermal stress on the system, and therefore enables more scalable power distribution across dense rack deployments. Importantly, different customers are still evaluating different approaches, including various HVDC and rack-level distribution architectures at different current levels. We therefore expect multiple pathways to coexist, at least for the near-term future, depending on each customer's deployment philosophy and system design. From our perspective, this transition increases the importance of high-current connectivity, power distribution integration, validation capability, and rack-level infrastructure engineering.
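To make the physics behind this argument concrete, here is a minimal sketch, using hypothetical numbers (not BizLink or customer figures), of why raising distribution voltage lowers the current and the resistive conduction loss for the same delivered power:

```python
# Illustrative only: for a fixed delivered power P, current scales as I = P / V,
# and resistive loss in the distribution path scales as P_loss = I^2 * R.
# The 100 kW rack power and 1 milliohm path resistance below are assumptions.

def current_amps(power_w: float, voltage_v: float) -> float:
    """Current required to deliver power_w at voltage_v (I = P / V)."""
    return power_w / voltage_v

def conduction_loss_w(current_a: float, resistance_ohm: float) -> float:
    """Resistive loss in the distribution path (P_loss = I^2 * R)."""
    return current_a ** 2 * resistance_ohm

RACK_POWER_W = 100_000        # hypothetical 100 kW rack
PATH_RESISTANCE_OHM = 0.001   # hypothetical 1 milliohm busbar/cable path

for voltage in (48, 800):
    i = current_amps(RACK_POWER_W, voltage)
    loss = conduction_loss_w(i, PATH_RESISTANCE_OHM)
    print(f"{voltage:>3} V: {i:8.1f} A, conduction loss {loss:8.1f} W")
```

For the same conductor, moving from 48 V to 800 V cuts the current by a factor of 800/48 (about 17x) and the I²R loss by the square of that (nearly 280x), which is the efficiency and thermal argument described above in simplified form.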
We believe these trends align very well with our long-term positioning in power infrastructure. At the same time, as a last remark, I would like to emphasize that HVDC is not replacing data connectivity; these technologies address different infrastructure requirements. In fact, improving power stability and thermal efficiency can also enhance operating conditions for high-speed electrical interconnects inside very dense rack environments. I hope that addresses the question in the short time available. Back to you, Mike.
Thank you, Florian, and thank you to the investor who posted that question. Thank you again, Roger, Felix, Florian, and Charles. This concludes our Q&A session. A replay of today's conference call will be available on our IR website within 24 hours. If you have any further questions, please feel free to reach out to the BizLink investor relations team. Thank you very much for joining today's call. You may now disconnect.