BizLink Holding Inc. (TPE:3665)

Earnings Call: Q3 2025

Nov 11, 2025

Mike Wang
Senior IR Manager, BizLink

Good afternoon, everyone. Welcome to BizLink's Third Quarter 2025 Earnings English Conference Call. This is Mike Wang, Senior IR Manager. I am joined by Roger Liang, our Chairman; Felix Teng, our CEO; Florian Hettich, our COO; and Charles Tsai, our CFO. Our results were just released and are available on our IR website, where you can download the latest earnings materials as well as access them from MOPS. This one-hour call will begin with Charles for financial highlights before we switch to Florian for operational highlights, and then end with Felix for corporate highlights. We will then conclude with Q&A. You may type in your questions now in the public or in the private chat, and we will answer as many of them as we can.

There will be no quantitative forward-looking comments. Before we continue, please kindly be reminded that today's discussions may contain qualitative forward-looking statements based on our current expectations, which are subject to significant risks and uncertainties and may cause actual results to differ materially from those contained in these qualitative forward-looking statements. We are not obliged to update these statements, which are to be used for information purposes only. Please refer to the Safe Harbor Notice in our Earnings Deck for more details. I would like to remind everyone that today's call is being recorded. This recording and these prepared remarks will be uploaded onto our IR website within 24 hours after the conclusion of this call. We sincerely appreciate Morgan Stanley for hosting today's call. With that, I will turn the call over to our CFO, Charles.

Charles Tsai
CFO, BizLink

Thank you, Mike. This quarter's results again demonstrated disciplined execution amid an evolving AI investment cycle. Sales and margin both reached new record highs. We achieved continued improvement in our sales mix, with our HPC and capital equipment sales mix rising from 32% in the third quarter 2024 to 51% in the third quarter 2025. We improved efficiency and productivity through our various excellence initiatives, and we successfully executed ECP VI to build our cash buffer to fund our strategic growth efforts. Finally, we have announced a new M&A deal. Our HPC and capital equipment (semiconductor production equipment) businesses, which together accounted for 51% of total sales in the third quarter 2025, continued to drive the bulk of our growth during the quarter, rising 31% QoQ and 179% YoY, and 1% QoQ and 36% YoY, respectively.

The combined sales in our other areas rose 6% QoQ and were flat YoY. This was the first time since the first quarter 2024 that we have seen a sequential increase in sales for those other areas, showing that we're beginning to switch on more of our growth cylinders. We caught a bottom in factory automation in the first quarter 2025, and sales here have risen for the past two quarters. One other category within our industrial segment drove growth during the quarter: healthcare. Sales here rose 11% QoQ and 18% YoY. In fact, after we caught a bottom in our industrial business in the fourth quarter 2023, sales have risen in six of the past seven quarters, rising by a total of 30%. We're now calling a bottom in auto as well, though we were initially hesitant to do so in the first quarter of 2025.

Sales here have risen for the past two quarters. Finally, our electrical appliances segment continues to go through a challenging market environment, and the outlook is hazy. Our portfolio strategy is designed to deliver both performance and resilience. Our HPC and SPE businesses are scaling in tandem and increasingly enabling each other. The rapid expansion of AI compute is driving demand for advanced semiconductor capacity, while the next generation of chips is unlocking even more powerful data center architectures. We see this as a reinforcing cycle: as HPC grows, SPE grows, and as SPE advances, HPC accelerates further. Within this structural flywheel, our high-power and high-speed solutions continue to gain content in AI infrastructure, while our semiconductor equipment subsystems benefit from the foundry and HBM investments that support that growth. At the same time, our diversified industrial business continues to provide stable cash generation and operating leverage.

This combination gives us a clear path to stronger earnings quality, margin expansion, and self-funded growth across market cycles. BizLink sits at the intersection of HPC power and data connectivity, where physical deployment continues to scale rapidly. We're also positioned at the convergence of AI infrastructure with emerging application domains such as humanoid robotics, autonomous driving, and edge computing. These sectors remain early, but they represent the next wave of this AI super cycle as value shifts from infrastructure build-out to real-world deployment. Our diversified exposure across this intersection, from rack power and data to system integration for next-generation applications, positions us to benefit both from today's infrastructure acceleration and tomorrow's application diffusion. The AI compute landscape is also entering a phase of platform diversification.

NVIDIA remains the clear performance leader, but hyperscalers are increasingly investing in alternatives, among them AMD's MI300 and MI400, Intel's Gaudi and Lunar Lake, and in-house silicon such as Amazon's Trainium and Microsoft's Maia, to ensure supply continuity and to optimize for total cost of ownership. This broadening of the compute base creates more designs, more SKUs, and greater interconnect complexity, exactly where BizLink adds value. As system architectures proliferate, interoperability becomes the gating factor. Our ability to design both power and data interconnects to work across GPUs, CPUs, and ASICs without compromise is what enables customers to scale comfortably across platforms. Our true moat lies not in neutrality, but in our unmatched ability to bridge every platform with agility, precision, and speed. The shift toward flexible scaling across clusters, zones, and clouds is increasingly driving demand for vendor-agnostic, high-performance interconnects. This includes AECs, HVDC power systems, and rack-to-rack connectivity.

While the first wave of AI spending was concentrated among a few hyperscalers, the next phase is defined by workload proliferation as enterprises and AI startups transition from training to deployment. Our role as a system enabler embedded across multiple leading cloud ecosystems gives us expanding leverage as customers scale across diverse architectures and multi-region footprints. We recognize the recent focus on circular financing across the ecosystem, but what we're seeing is not speculative leverage. It is ecosystem acceleration. A helpful way to understand this is through a working capital lens. Imagine receiving a large purchase order, but not yet having the cash on hand to fulfill it. You can borrow from the bank, build cash organically over time, or temporarily rely on partners to help finance the ramp. If you receive multiple large orders at once, you will likely need to do all three.

That is essentially what is happening across AI infrastructure today. Hyperscalers and suppliers are co-investing, directly and indirectly, to pull forward the deployment of increasingly large and complex systems that would otherwise take years to materialize. This capital partnership allocates risk efficiently and shortens innovation cycles, which is clearest in NVIDIA's accelerated product cadence. These are real systems, real deployments, and real cash flows, and BizLink is paid to physically deliver the power and data that make AI work. As the industry moves toward synchronized large-scale deployment, BizLink's value proposition only grows. Every node requires power and data to be physically delivered reliably and efficiently. That is where we win: real engineering delivery tied to tangible deployment velocity.

While the ecosystem today may feel centrally aligned behind a few major models, we view this as a natural consolidation phase, one that will give way to deeper specialization and broader application-layer diversification. As AI matures into multiple compute and deployment paradigms, demand does not diminish; it redistributes. One element remains constant across every architecture: the interconnect. That is exactly where BizLink delivers proven resilience. Recent hyperscaler outreach only reinforces this. The challenge in AI infrastructure is no longer simply scale. It is resilience, redundancy, and uptime. As power density rises and compute becomes more distributed, reliability in interconnect and power delivery moves to the forefront. These are precisely the system-critical domains where BizLink excels. Every architectural shift expands our content per rack and per node, increasing recurring revenue visibility.

Finally, while underlying demand remains strong, we're closely watching pacing signals, such as moderation in hyperscaler leasing activity, which suggests a shift toward utilization discipline and deployment efficiency. BizLink is well-positioned in either scenario, whether the cycle emphasizes volume and speed or operational optimization. Our role in high-density power and signal-integrity-critical interconnects remains indispensable. Our diversified exposure and operational agility give us resilience through both the acceleration and digestion phases of the AI cycle. We aim to be the most coherent voice in the complex AI landscape, and that clarity is earning long-term investor confidence. Our commitment remains unchanged: financial discipline with real-world delivery. That balance continues to reinforce BizLink as a trusted system-critical enabler in a multi-decade transformation. Florian will now provide updates on our latest quarterly operational takeaways.

Florian Hettich
COO, BizLink

Thank you, Charles. Let me start with our structural growth drivers.

AI infrastructure is evolving through distinct architectural phases, from data to power and soon to optics. BizLink is one of the few suppliers enabling every step of that journey. As AI infrastructure scales from prototype clusters to global utility-grade deployments, we are observing a clear rotation of architectural bottlenecks, moving from data to power and back to data again. Each phase of this cycle presents new opportunities for differentiated suppliers to play a foundational role. The narrative around AI spending is often oversimplified as linear GPU demand. In reality, the true constraints keep evolving. BizLink is uniquely positioned to solve each phase of this architectural cycle. This is what gives us long-term visibility and makes us a structural enabler in the AI build-out, not just a tactical beneficiary. Now let's talk about the platform and architecture evolution.

Over the past 18 months, the industry has focused on overcoming the scale-out data bottleneck, particularly within training clusters. This has driven widespread adoption of active electrical cables (AECs) to replace passive DACs and AOCs, first in tray-to-tray and now in rack-to-rack Ethernet configurations. Now, as we move deeper into high-power platforms such as Blackwell and Rubin, the constraint is shifting toward power, with next-generation racks exceeding 100 kW and requiring new delivery systems. These trends are pushing power infrastructure to the forefront of hyperscaler planning. As white and gray space converge in AI data centers, the industry is realizing that power and data can no longer be engineered in isolation. Power density dictates data topology, while data throughput drives power distribution choices. Building next-generation AI clusters is like building a modern city. In early times, roads, which can be seen as data paths, worked when traffic was light.

As skyscrapers, which can be seen as power-dense racks, go up, the grid must be rebuilt to prevent congestion and outages. Once the city sprawls to multiple districts, which would be the pods, you need high-speed metro links to keep everything flowing. These metro links would be optical interconnects. We are the engineers designing the roads, power lines, and transit systems together, not one piece at a time. This brings me to BizLink's strategic differentiation. At BizLink, we sit precisely at this intersection between power and data, enabling co-design solutions that ensure rack-level performance, reliability, and scalability. This dual capability of power and data domain knowledge is what makes us a unique structural enabler of AI infrastructure. We are encouraged to see the capital markets beginning to recognize what we have been building toward: HPC power as both a critical bottleneck and a long-term investment theme.

For BizLink, this is not a future aspiration. It is happening now. With Blackwell and soon Rubin racks, our HPC sales mix looks set to shift decisively toward high-power interconnects. This positions BizLink as one of the few suppliers with scale in both data and power interconnects, reinforcing our structural role in AI infrastructure growth. Looking ahead, we view co-packaged optics (CPO) as a potential long-term evolution in AI system design rather than an imminent volume transition. While CPO represents a compelling approach to address bandwidth, power, and thermal challenges at extreme performance densities, its adoption path remains dependent on ecosystem alignment, cost scalability, and manufacturing maturity. BizLink's role at this stage is to build readiness, not to predict timing.

Through our partnerships with SENCO and ficonTEC, we are developing and validating automation, packaging, and testing solutions that would enable CPO and silicon photonics manufacturing at scale once the technology inflection arrives. This early collaboration ensures that BizLink is technically positioned when the industry begins to move without overcommitting to timelines still being shaped by customer roadmaps and standards convergence. In parallel, we continue to advance our low-loss optical termination and hybrid copper optical interconnect programs, technologies that provide meaningful near and medium-term value while maintaining optionality for longer-term optical transition. This pragmatic approach allows BizLink to participate in today's accelerating AI deployments while preparing for the next era of interconnect architectures with measured confidence. Let's have a look at our positioning for the next AI wave. While co-packaged optics represent an exciting evolution in chip-to-chip data movement, it is a highly integrated and fixed solution.

Cables, both electrical and optical, will continue to play a vital role where flexibility, modularity, and scalability are required, particularly across trays, racks, and rows. We view CPO not as a replacement, but as a complementary technology that expands the optical boundary closer to the compute die, creating additional interconnect demand elsewhere in the system. BizLink's strategy is to position itself at the intersection of both worlds: the flexible copper and fiber solutions that tie the ecosystem together, supported by best-of-breed partnerships that balance innovation, scalability, and execution discipline. In the current power-centric phase, we are one of the few suppliers developing both higher-voltage busbars and higher-amperage power whips, supporting customers through high-reliability, high-density deployments. We were honored to be named one of NVIDIA's 800 V HVDC ecosystem partners at OCP 2025.

On the data side, our AEC portfolio continues to scale alongside hyperscaler requirements, and we continue to push the limits of copper data connectivity. That same week, NVIDIA also announced that Meta Platforms and Oracle will adopt its Spectrum-X Ethernet platform. Finally, our early investments in optical interconnect automation and packaging through partnerships like SENCO and ficonTEC further strengthen our position for the photonic era of scale-out compute. These are not merely product adjacencies, but strategic extensions of our role as a system-level enabler. Just as compute density has evolved, so too has the interconnect. What began as a data problem became a power challenge and is quickly becoming both again. Our role is not simply to keep up, but to architect ahead.

That is why the most forward-looking AI customers are turning to BizLink not just as a supplier, but as a design partner as they build the next generation of global compute infrastructure. As AI infrastructure evolves, it is no longer defined by static bottlenecks or one-time upgrade cycles. Instead, we are seeing a phased progression of architectural priorities, each phase unlocking the next. Early deployments emphasized data movement within the rack, driving adoption of active copper interconnects. As clusters scaled, the focus shifted toward power infrastructure. Hyperscalers began pre-provisioning long-reach HVDC and high-current connectivity at the campus level, making power density a primary design constraint. The next phase is just around the corner: scale-up interconnect, where advanced protocols like PCIe Gen 6 and near-die connectivity become essential to sustaining performance density. Customer engagement shows rising demand for architectural foresight, not just components.

As AI workloads intensify, each phase of the build-out requires a distinct set of constraints to be solved. That is where we come in. Whether it is in-rack signaling, pod-level HVDC, or next-generation PCIe and CPO connectivity, we bring systems thinking to infrastructure at scale. We believe that the AI infrastructure cycle is progressing through distinct architectural phases, each unlocking new constraints to solve. Growth in this environment is not about chasing SKUs, but about helping customers design systems that scale. From in-rack AEC adoption to pod-level HVDC distribution, PCIe co-design, and eventually CPO integration, our engineering capabilities are increasingly embedded across the architectural stack. That is why we are seeing multi-year opportunities emerge, not just in one product line, but across the entire interconnect layer. Allow me some last comments on the proof of durability. We are in the early stage of a multi-year AI infrastructure transformation.

While the pace of platform rollouts may temporarily moderate, the physical build-out of data center capacity remains firmly intact, with global data center capacity now projected to exceed 50 gigawatts within the next several years, up from just 20 GW earlier this year. This sequencing, volume first followed by content and share, is a normal and healthy progression in any large-scale system upgrade. Importantly, design-in velocity remains strong, and our engagement across new platform architectures positions us well as deployments resume and scale. Now, Felix will provide some updates on our latest quarterly corporate takeaways.

Felix Teng
CEO, BizLink

All right. Thank you, Florian. BizLink has evolved from a component maker into a system enabler, helping customers design the very infrastructure that powers the AI era.

We have noticed that the capital markets are beginning to frame AI infrastructure around systems thinking, where procurement is increasingly led by engineering teams that favor end-to-end solution providers capable of guaranteeing performance and reliability at the system level. This is exactly what we have been building toward and what we emphasized in our last results call. BizLink has moved far beyond being viewed as a component supplier. Our role today is as an embedded infrastructure partner, one that co-designs and integrates both high-speed data and high-power interconnects into cohesive rack-level systems. Unlike PSU vendors or cooling system providers, our differentiation lies in being the connective tissue between domains. Our interconnect solutions are not standalone products. They are engineered in lockstep with customer architectures to ensure that data and power delivery work seamlessly together across Blackwell pod and soon Rubin pod deployments.

We believe this ability to bridge domains, to participate at the architecture level rather than the component level, is what creates enduring customer intimacy and makes BizLink unique in the AI supply chain. OCP 2025 marked BizLink's debut as a full system enabler for AI racks, blending copper and optics for data and power under open-standard architectures. This milestone strengthens our strategic credibility, broadens investor confidence in our AI infrastructure narrative, and positions us for inclusion in the next-generation OCP reference designs that will underpin hyperscaler deployments through 2028. We do not compete with our customers. We empower them. That is the position we claimed early, one we have communicated consistently in our results calls, and one we continue to build upon today.

About the platform and architecture: delivering effective AI infrastructure requires a comprehensive view of the full architecture, from grid to chip and from signal launch to signal landing. As AI platforms become more power-hungry and compute-dense, the challenge is no longer just supplying more watts or removing more heat. It is ensuring that every watt delivered and every bit transmitted arrives reliably, efficiently, and in sync. This convergence of high-density power delivery with low-latency, high-speed data transfer is becoming increasingly critical as GPUs, CPUs, and accelerators scale in complexity. The emerging 800 V high-voltage DC specification, liquid cooling systems, and the transition to AECs and PCIe Gen 6-7 are reshaping power and data architectures alike. Only by addressing both domains cohesively can AI clusters scale safely and efficiently. Power is the lifeblood of AI infrastructure. Without it, nothing turns on. Data is the nervous system, coordinating every movement with precision.

The two must function as one. That is the role we play in AI infrastructure, ensuring that the power and data lifelines are designed as a single system that keeps the whole organism performing and evolving safely. About our differentiation: while power supply integrators are expanding toward full-stack power and cooling solutions, BizLink is carving out an equally critical role, enabling the integration of high-power and high-speed data interconnects at the rack level. This system-level, dual-domain capability is what makes us unique and will drive enduring recognition of BizLink as a structural enabler of AI infrastructure. As racks grow in number and density, our opportunity grows with them. Few in our industry can bridge both data and power domains with comparable depth of engineering, positioning us distinctively as system architectures evolve toward greater efficiency and integration.

Our neutrality remains a strategic strength, allowing us to collaborate across compute ecosystems without being tied to any single platform or standard. Our measured, partnership-driven approach is a differentiator and, in effect, a deliberate strategic posture for long-term stakeholders. For the next AI wave, this same system-level engineering know-how, combining data, power, and control at scale, extends naturally into new frontiers. The capabilities that make AI data centers more efficient are also enabling the rise of humanoid robotics, autonomous driving platforms, and intelligent factories. These too are complex systems that demand dense, reliable, and safe connectivity, areas where we already possess proven expertise through our work in factory automation and electric vehicles. Over time, we expect similar architectures to emerge in healthcare equipment and edge computing, where performance, miniaturization, and reliability converge.

The profitability we are generating in HPC and SPE gives us the financial flexibility to invest in these new growth engines. We are scaling capabilities, deepening partnerships, and building the foundations for long-term, system-driven growth. Importantly, these same dynamics are also beginning to play out in our capital equipment business. Semiconductor toolmakers are increasingly outsourcing not just components, but entire subsystems and integrated platforms, from electrical distribution systems to fluid distribution systems. While the pace of change is slower than in HPC, the underlying driver is identical. Customers seek partners who can simplify complexity, ensure subsystem cohesion, and guarantee reliability across tool generations. Here, too, BizLink is being engaged earlier in the design cycle and entrusted with higher-value integration roles, expanding our content per system and deepening long-term visibility. Whether in AI data center racks or semiconductor production tools, the message is consistent.

The industry is moving beyond component sourcing toward system-level procurement. BizLink stands among the very few companies positioned at this intersection, enabling the convergence of data and power in HPC and providing integrated subsystem solutions in SPE. As this shift becomes more widely understood, we believe investors will increasingly recognize BizLink not only as a data and power supplier, but as a structural enabler of system-level integration across both of our highest-growth businesses. As for durability, these capabilities are made possible by the qualities that define BizLink's culture: our flexibility, speed, resilience, and relentless drive to innovate. They reflect our Silicon Valley DNA, which keeps us agile even as the industry undergoes profound change. As technology accelerates and geopolitics evolve, our customer relationships remain our core strength. They are not only our greatest assets, but our enablers, guiding how we co-create, adapt, and scale alongside them.

This combination of innovation and partnership ensures that no matter how the landscape evolves, BizLink will continue to move forward with purpose, discipline, and speed. This is how we drive sustained value creation: through structural positioning, not cyclical exposure. We move fast, stay flexible, and partner deeply, qualities that let us thrive in an industry defined by constant technological and geopolitical change. As AI infrastructure evolves, our role remains the same. We help define what comes next. Now, let me turn the call over to Mike.

Mike Wang
Senior IR Manager, BizLink

Thank you, Felix, Florian, and Charles. This concludes our prepared statement section. Now, let us begin the Q&A section. Please feel free to type in your questions, and then we will answer as many of them as possible in the time remaining. We've already seen a couple of questions.

These are primarily on our high-performance computing growth drivers, the outlook, and fiscal year 2026 visibility. What else? Data versus power mix, timing of HVDC contributions, sustainability of AI demand. Quite a few questions have come in on the HPC business, and we'll try to answer as many as we can, lumping together the ones that are similar. Now, for this first grouping of questions, I'd like to hand the floor over to Florian.

Florian Hettich
COO, BizLink

All right, thanks, Mike. We see that AI infrastructure demand is actually scaling faster than the physical world can keep up, and that is, in our view, creating structural growth for the HPC business. This is, in our view, not cyclical growth, because each generation of accelerators draws more power, moves more data, and packs more compute into the same physical footprint.

This drives architectural change, as you can see, from 54-volt to 800-volt power systems, for example, from passive to active electrical cables, and from conventional racks to co-designed high-density platforms. This is what we are currently seeing. All of these transitions actually expand our content value in both areas, in power as well as in data. Our role, as I explained in the remarks, begins early, in co-design with the hyperscalers and the GPU makers. That gives us visibility well beyond a single cycle. Looking forward to 2026, we see growth driven by continued AEC adoption, but also by the new 1.6T platform. We will probably see the initial wave of HVDC deployments.

It is important to view this as a multi-year infrastructure upgrade rather than a year-to-year forecast, in my view. Back to you, Mike.

Mike Wang
Senior IR Manager, BizLink

Thank you, Florian, for providing the color on that, and thanks to the audience for your questions. Now, we did talk quite a bit about power during the remarks, and there has been a lot of interest in that side of the business. For this next grouping on power architecture, HVDC, and ASP content value: what is the impact of HVDC on ASPs, content growth, market share, and the timeline to contribution? Still very much along HPC lines. Florian, I am going to have to ask you to step forward again.

Florian Hettich
COO, BizLink

Okay, thanks a lot. We consider HVDC one of the most complex and consequential transitions underway in AI infrastructure.

The move from 54 volts to 400 volts as a first step, and eventually to 800 volts, dramatically reduces current. On the other side, it also increases design difficulty when it comes to, for example, electrical, thermal, or safety topics. This increases complexity, but it also raises the entry barriers and therefore reduces the number of capable suppliers. A lot of challenges, but at the same time, also a lot of opportunities. As I have said, we have been engaged since the early engineering phases with NVIDIA and also other hyperscalers. This gives us, in our opinion, a first-mover advantage in qualification and also in the area of system integration. We expect to see 400-volt adoption maybe from late 2026, and 800-volt scaling will come much later.
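The current reduction described here follows directly from I = P / V: at a fixed power draw, raising the feed voltage cuts the current proportionally. A back-of-envelope sketch, using a hypothetical 100 kW rack figure (not a company number):

```python
# Illustrative sketch only: how moving a rack's power feed from 54 V to
# 400 V or 800 V cuts the current the feed must carry. Figures are
# hypothetical, chosen to match the ~100 kW rack class discussed on the call.

def feed_current_amps(power_kw: float, voltage_v: float) -> float:
    """Current in amps needed to deliver power_kw at voltage_v (I = P / V)."""
    return power_kw * 1000.0 / voltage_v

rack_kw = 100.0  # hypothetical next-generation AI rack power draw

for volts in (54, 400, 800):
    amps = feed_current_amps(rack_kw, volts)
    print(f"{volts:>4} V feed: {amps:7.1f} A")
```

Since resistive losses scale with the square of current (I squared times R), the roughly 15x current reduction from 54 V to 800 V is what changes the requirements for busbars, connectors, and power whips.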

The contribution from those technologies will build up over the next several years. The content value per rack is, in our view, rising because HVDC requires different types of busbars and new connectors, and the power whips need to be redesigned. All of these are areas where we have already proven capabilities. I think that probably answers the question. With that, back to you, Mike.

Mike Wang
Senior IR Manager, BizLink

Thank you once again, Florian. Of course, to those in the audience who asked about the power side of our HPC business, there are still a lot of questions about the AEC side for data within our high-performance computing business. Let me see. The AEC business: competition, capacity, the move from 400G AECs to 800G and later to 1.6 terabits per second, ASP differentiation, these kinds of topics. For this one, I would like to hand the floor over to Felix.

Felix Teng
CEO, BizLink

All right. Yeah. AEC remains one of our strongest growth engines. The 800G AEC ramp continues, and the 1.6T AEC is now under development for the next generations of GPU platforms. We collaborate closely with both silicon providers like Credo and hyperscalers to align on architecture and test standards. We excel in system reliability and speed of iteration. That means we design, qualify, and scale new specs faster than others, which gives customers trust. New entrants face steep learning curves; the gap is not about receiving specs, but about mastering process control, firmware tuning, and thermal performance. That is my take.

Mike Wang
Senior IR Manager, BizLink

Thank you, Felix, for sharing your insights. Again, for those in the audience who asked about the AEC business, we are trying to cover as much as we can. Apologies if we do not cover everything about HVDC, AEC, and the HPC outlook. Moving a little bit to other topics. The next grouping, sort of the fourth one, is, let me see, around the manufacturing footprint, CapEx, and capacity allocation, mainly regional capacity plans, 2025-2027 CapEx, allocation priorities, and localization. Quite a few questions on that. For this, on the operational side, I'd like to hand it over to Florian.

Florian Hettich
COO, BizLink

All right, thanks, Mike. Yeah, this is probably one of the focus areas and challenges which we face currently. We continue, for sure, to expand our global manufacturing footprint in step with customer localization trends, but also with the multi-year AI infrastructure demand. The challenge is to set up our long-term footprint in alignment with long-term customer demand and customer wishes for supply chain localization. For example, in Malaysia, we have been aggressively expanding our Penang footprint.

This is serving as a key hub for high-precision and high-value manufacturing. At the same time, our other two new sites, one in Batam, Indonesia, and the other in Tainan, Taiwan, recently had their grand openings, marking another milestone in our regional diversification strategy. Batam will enhance our capacity, for example, for high-speed interconnect and cable assemblies. In Tainan, our capabilities will be closer to semiconductor and system integration customers. If we look ahead, I think we can also say that we are evaluating further expansion in North America in order to support potential local manufacturing, for example, for power whips or related high-value assemblies. We are also in the early stages of assessing additional countries for future capacity increases.

This is because customers want us to have a regionally blended supply chain, and we will align with customer needs in the long term. Overall, I think we can say that across all regions, our approach remains disciplined. We are focusing on capacity that is close to the customer and on technology depth. On the other side, we will also maintain strong utilization and keep looking at operational efficiencies. We think that these investments position us well for sustained growth in both areas, the HPC business but also capital equipment in the semiconductor business, through the next several years. I hope this gives you a flavor of the manufacturing footprint. With that, back to you, Mike.

Mike Wang
Senior IR Manager, BizLink

Thank you, Florian, for providing quite a bit of input on this set of questions. For the next one, looking outwards, the fifth group of questions is really around our competitive landscape: consolidation and moats, competitive dynamics in AEC and power, supply chain reshaping, consolidation. For this one, bigger picture-wise, I would like to turn the mic to Felix.

Felix Teng
CEO, BizLink

All right. Yeah, as I mentioned earlier, and actually I can just emphasize it again, our advantage lies in system integration, not just price competition. We bridge power and data domains, designing busbars, whips, and AEC solutions as one ecosystem. Competitors often specialize in one dimension. Very few can handle both high-current and high-speed reliability simultaneously. Industry consolidation is likely, but that tends to benefit the suppliers already embedded in hyperscaler qualification lists. We continue to invest in R&D and co-design capability to stay at the front of each architecture transition. That's how we succeed.

Mike Wang
Senior IR Manager, BizLink

Thank you, Felix. Thank you to the ones in the audience for sending over those questions. There continues to be a lot of interest in future data interconnects for us, even though, as we mentioned a little bit earlier, we have a long-term view on this. Now, this is going to be on CPO, CPC, and the copper-to-optical transition. Specifically, it is CPO timing, coexistence with CPC, and the shift from copper to optical. Again, during the remarks, we did highlight this a bit. With that, I'd like to hand it over to Florian.

Florian Hettich
COO, BizLink

Yeah, thanks, Mike. This is, I think, a tough question. In our view, we see CPC and CPO more as complementary evolutions and not so much as replacements for each other. As you might know, copper has cost, latency, and power advantages, more towards short- or mid-reach distances.

Optics becomes viable when we need to increase the reach and the bandwidth; that is where CPO comes into play. Our roadmap, as you know, includes both domains, leveraging our AEC expertise in copper today. This is one part of it, but the other part is that we are also co-developing optical interconnect solutions to address future hybrid rack demands. We expect that both technologies will coexist for several product generations; CPC and CPO will coexist for quite a while. This is our view on the CPO and CPC topic. Back to you, Mike.

Mike Wang
Senior IR Manager, BizLink

Thank you once again, Florian. Not to gloss over some questions about numbers, but there is a grouping on margins, operating expenditures, and operating leverage, asking about gross margin trends, OpEx stability, gross margin and operating margin targets, and pricing power. For this one, I'd like to hand it over to our CFO, Charles.

Charles Tsai
CFO, BizLink

Okay, thank you, Mike. Actually, if we really look at our margin, we have been experiencing a multi-year margin expansion. This expansion, I would describe as value-driven, coming mostly from mix improvement as we shift to high-value sectors with higher engineering value content. It has not really come from pricing and not so much from cost reduction; I would call it value-driven. As HPC scales, more of our revenue comes from co-design and higher-complexity assemblies, such as active cables and HVDC busbars. In this process, operating leverage improves naturally as qualification, tooling, and learning-curve costs are shared across multiple programs. To keep this momentum going, we will maintain cost discipline.

We will ensure OpEx grows slower than revenue while continuing to invest in innovation and regional expansion. We are going to take this balanced approach to ensure this stays the case. This is not just numbers; it also reflects our continued improvement in productivity and competitiveness. Thank you.

Mike Wang
Senior IR Manager, BizLink

Thank you, Charles, for answering the first set of questions around numbers. Now, not to ignore the SPE side, our capital equipment business: this is mainly about portfolio balance, our outlook for next year, diversification, synergy with HPC, high-performance computing, and of course potential M&A opportunities. For this, I would like to hand it back over to Felix again.

Felix Teng
CEO, BizLink

All right. Our capital equipment and HPC businesses actually reinforce each other. As AI drives demand for advanced semiconductor capacity, our electrical and fluid distribution systems for toolmakers also scale. Conversely, learning from SPE integration helps us in HPC rack-level assembly. We continue to evaluate targeted M&A that accelerates technology depth rather than pure size, particularly in areas adjacent to power, data, and fluid systems. Those are the three areas I am actually going to look at.

Mike Wang
Senior IR Manager, BizLink

Thank you, Felix, for providing color on that, as well as to those in the audience who sent over those questions. We still have quite a few questions on M&A, so let me see if I can read a little bit. Do you expect any consolidation in the cable and connector supply chain? How do we view our position and share-gain potential in the overall AI trend? And which applications or markets could we further enhance through M&A? Given that we only very briefly touched on M&A, I'd like to hand this back over to Felix as well.

Felix Teng
CEO, BizLink

All right. Yeah. Several quarters ago, we noted that our industry is consolidating and has continued to consolidate. We have seen several major and minor deals since then. We expect this trend to persist, and AI may even accelerate it. The bigger and quicker players will get more of the pie. We will play our role in this, and we are preparing for it. We are constantly on the lookout for new M&A opportunities, sourced both externally and internally. However, as in the past, we will stay reasonable and disciplined. We continue to be interested in various technologies and capabilities, markets, customers, and capacity additions.

Mike Wang
Senior IR Manager, BizLink

Thank you, Felix, for the brief follow-up on M&A. I see, yeah, there are still quite a few questions about the numbers side of things. Let me see. Given accelerating revenue growth, how have we kept our OpEx relatively stable? Do we expect this trend to continue? A follow-up on that concerns OpEx and tax rate targets in the midterm, let's say a one-to-three-year horizon. For that, again, this is a numbers thing, so I'd like to hand it over to Charles, our CFO.

Charles Tsai
CFO, BizLink

Yeah, thank you, Mike. Actually, over the past two to three years, we have made consistent progress in improving our efficiency and productivity through our various excellence initiatives. These programs leverage our processes, people, and technology to enhance OpEx productivity, allowing us to do more with the same resources even as the business scales.

If I look at this, I would center it around three key drivers. First, our excellence programs and technology adoption: we use digital tools, we automate, and we share best practices across sites to continue to lift productivity. Second, there is the scale effect of revenue growth: our cost base scales more effectively because many of our core functions and systems are already in place. Third, at the same time, we still make ongoing investments. It is not just that we save; we also spend. We continue to invest in R&D and process capabilities, so our efficiency can support both growth and innovation. If our scale expands substantially, we will see OpEx rise in absolute terms, but our OpEx-to-sales ratio will remain at a comparable level, reflecting our commitment to disciplined growth and continuous productivity improvement.

Of course, there is also a tax question. On the tax front, we remain focused on optimizing our group structure and streamlining intercompany arrangements to enhance overall efficiency. At the same time, we will also ensure full compliance with evolving global tax frameworks under [audio distortion] as we progressively align our structure with new requirements. I hope that answered the question.

Mike Wang
Senior IR Manager, BizLink

Thank you, Charles, for that final color on the numbers questions from the audience. It is 3:30 P.M. right now. I want to thank Felix, Florian, and Charles. This concludes our Q&A session. A replay of today's conference call will be available on our website within 24 hours from now. If you have any further questions, and I see there are still quite a few, please feel free to reach out to Jimmy and me. Thank you very much for joining today's call. You may disconnect.
