Hello, and welcome to the Napatech Q4 2025 IMS Conference Call. My name is Alex, and I'll be coordinating today's call. If you'd like to ask a question at the end of the presentation, you may press star followed by one on your telephone keypad. I'll now hand it over to Klaus Skovrup, CFO, to begin. Please go ahead.
Good morning. I'm Klaus Skovrup, CFO of Napatech. I'm pleased to welcome you all to Napatech's presentation for the fourth quarter and full year of 2025. Joining me today is our CEO, Kartik Srinivasan. Our fourth quarter and financial year 2025 report was released earlier this morning on the Oslo Stock Exchange. It's also available on the Investor Relations section of the Napatech website. For your information, a recording of this webcast will be available later today. There will be a question and answer session following the presentation. During and after these prepared remarks, you may submit your questions via text on the webcast page, or we can take your questions on the phone. If you would like to ask a question, please follow the instructions on the slide. Please note that this presentation contains forward-looking statements that are subject to risks and uncertainties.
Our actual results may differ from those discussed in forward-looking statements. For further information on risk factors, please see the company announcement and the slides prepared for this presentation. With that, over to you, Kartik.
Thank you very much, Klaus, and good day, everyone. Here is the agenda for today's update. First, I'll briefly cover the leadership transition and how we've strengthened the executive team. I'll walk through the market opportunity and the customer segments we are focused on. We'll review our product portfolio and how our solutions are aligned to those opportunities. Finally, Klaus will cover our financial performance and our outlook. As the new CEO of Napatech, I'm pleased to step into this role at a time when our core business is growing steadily and the market opportunity ahead of us is expanding meaningfully, particularly in AI infrastructure. Napatech has built a strong networking business grounded in deep technical expertise, and we're now extending that foundation into what we see as the next phase of infrastructure growth driven by AI workloads.
I'd like to sincerely thank our former CEO, Lars Boilesen, who has transitioned into the role of chairman of our board. His leadership during a critical phase of Napatech's transformation provided stability, focus, and strategic direction as the company evolved its market positioning and product strategy. I'm grateful to Lars and the board for their confidence in me to lead this next chapter, and I look forward to applying my experience to accelerate execution and customer impact. I'm also pleased to welcome Klaus Skovrup, our new Chief Financial Officer. Klaus brings strong experience in finance and investor relations and will play a key role in strengthening our operational and financial discipline. Over the remainder of this presentation, we'll walk through how we are framing our target markets, the solution strategies we are executing against those opportunities, and how this focus is translating into tangible growth and momentum.
We've made meaningful progress against the key objectives of our strategic plan, strengthening our position in the evolving market for programmable networking solutions. Over the past year, we've delivered measurable improvements in our existing business while advancing the design wins and customer engagements that support our long-term growth ambitions. Execution has progressed according to plan. As we compare our position today to where we were a year ago, we are operating from a stronger foundation with improved visibility, clearer market focus, and better alignment between our product roadmap and customer demand. Importantly, this progress enhances our confidence in both the timing and the scale of our growth trajectory while continuing to reduce execution risk. With that context, let me walk through the current state of the business and our outlook.
Let me briefly highlight our 2025 financial performance and what it reflects about how we are running our business. Our results are increasingly the outcome of a structured operating model and disciplined execution across the company. In the fourth quarter, revenue grew to $7.9 million, up 48% year-over-year, as design wins and roadmap execution translated into expanding customer deployments. For the full year, revenue increased to $22.4 million, reflecting continued momentum across the business. At the same time, we maintained strong gross margins at 70% in Q4 and 69% for the full year, reinforcing the quality of our revenue mix and the growing contribution of higher value solutions and software. Importantly, we delivered this growth while actively managing costs.
Operating expenses declined year-over-year in line with our execution plan as we focused resources on the highest-impact programs and improved efficiency across the organization. This discipline is also evident in our cash performance. Net working capital improved from DKK 98.6 million to DKK 83.2 million. Operating cash flow turned positive in the fourth quarter. We exited the year with DKK 127 million in cash and equivalents. Taken together, these results demonstrate a business that is executing its roadmap, scaling with financial rigor, and strengthening its operating foundation. We continue to optimize planning, execution, and capital allocation processes to further improve predictability and efficiency as we grow. Overall, the financial trajectory reflects not just growth, but controlled, scalable growth, positioning us well for the next phase of expansion.
The final section of today's presentation will include a detailed financial summary for 2025 by Klaus. Beyond the numbers, we are making great strides with our strategic accounts, prospects, and ecosystem partnerships. Our activities with the previously announced Tier One OEM vendor and AI inferencing vendor, d-Matrix, continue to act as a cornerstone for our product and technology planning. This gives us high confidence and enables low-risk investments to meet their exact requirements for solutions, applications, and use cases that we aim to broadly replicate in the mass market. Throughout 2025, we extended our technology leadership with the launch of our new SmartNIC and DPU platforms, powered by Altera and Intel and aligned to emerging AI networking requirements, which notably were specified by our Tier One OEM.
We continue to work hand-in-hand with Altera on evolving solution, product, and technology roadmaps and on co-development, attacking the highest growth segments of AI, storage, networking, cybersecurity, fintech, and others. Many of you will have seen numerous industry and financial press and analyst reports about the launch of the d-Matrix JetStream AI networking interface card, aimed at new AI data centers in the cloud, edge, and enterprise. Throughout this period, our R&D teams consistently met committed milestones on time and to specification for demanding customer programs. We continue to build scalable growth across the business. We are pleased with the continued strength, increasing revenue, and opportunities from our core use cases, while new infrastructure applications are gaining traction. We see increasing validation that our software and hardware investments align with the highest growth use cases and are becoming more repeatable and scalable across customer segments.
Beyond our developments, we continue to build an ecosystem of partners to enhance both the breadth of our product offerings as well as the reach of our go-to-market motions across several use cases. We remain laser-focused on design wins. We continue to build pipeline opportunities of transformational scale, with new customers and use cases emerging regularly through both direct engagement and partner channels. In 2025, we secured 27 new design wins, providing a solid foundation for future production revenue. We continue to strengthen our ability to execute. We have strengthened the leadership team and operating discipline across the organization. We are embedding AI-enabled productivity tools across engineering, product, sales, and operations to accelerate time to solution, improve execution velocity, and drive greater efficiency at scale.
This operating model is increasingly aligned around market focus, solution prioritization, revenue alignment, and disciplined delivery, all designed to support consistent, scalable growth. As we look across the infrastructure landscape, we are seeing demand and requirements begin to clearly bifurcate. On one side, enterprise infrastructure applications such as security, storage, and performance networking continue to require deterministic performance, reliability, and efficiency. These environments value predictable behavior, long life cycles, and deeply embedded solutions, areas where Napatech has built long-standing leadership. At the same time, a new wave of deployments driven by production AI is scaling at an entirely different pace. These environments demand not only high performance, but rapid time to solution, efficient data movement, and architectures that can be deployed and optimized quickly at scale. While both demand profiles require advanced performance, the economics, deployment velocity, and solution expectations are increasingly diverging.
In response, we have deliberately reframed both our market focus and our solution strategy to align with these two distinct demand environments, ensuring we can scale efficiently in established applications while capturing the growth in emerging AI-driven deployments. On the next two slides, I'll walk through how we've structured these market segments and how our product portfolio is aligned to serve each opportunity effectively. Based on the demand bifurcation we are seeing, we've structured our market strategy around two distinct infrastructure segments. On the left is what we define as core infrastructure. This includes long-standing, performance-critical applications such as cybersecurity, network monitoring, capture and replay, deep packet inspection, telecom infrastructure, and financial services. These environments value deterministic performance, reliability, and efficiency, and this is where Napatech has built its foundation.
Today, our packet capture software and programmable NIC platforms are deeply embedded across the core infrastructure use cases, driving durable customer relationships and consistent recurring revenue. This segment represents meaningful expansion opportunity within our existing customer base as workloads scale and performance requirements increase. From a market perspective, core infrastructure represents an approximately $6.5 billion-$10 billion opportunity over the next five years, growing at a CAGR of roughly 12%. On the right is what we define as AI infrastructure, the fast-emerging layer of the data center architecture built around production AI workloads. This includes inference pipeline acceleration for applications such as RAG and Mixture of Experts, scale up and scale-out networking, checkpoint acceleration, storage acceleration, and multi-tenant security. These deployments are scaling rapidly and placing entirely new demands on performance, latency, and efficiency.
This segment represents a significantly higher growth opportunity, expanding from roughly $6 billion today to approximately $20 billion by 2030, growing at a CAGR of over 35%. Together, these two segments provide Napatech with a balanced growth profile: a durable core business today, and a high-growth infrastructure opportunity driven by the expansion of AI into production environments. In parallel with how we've structured our market focus, we've also deliberately structured our solutions portfolio to support both scale and strategic growth. On the left are what we call our turnkey solutions. These are full-stack, packaged offerings that integrate Napatech hardware, software, and industry frameworks into repeatable solutions that are easier for customers to deploy while providing Napatech a strong digital moat of differentiation.
This is where we drive scalability, strong margins, and long-term differentiation with solutions that can be broadly replicated across customers and use cases. On the right are Frontier Solutions. These are advanced, performance-driven deployments developed through technical collaboration with customers and partners to solve cutting-edge infrastructure challenges, particularly in emerging AI-driven environments. While these engagements typically require more customization and technical depth, they deliver higher absolute revenue and margins, and they position Napatech at the forefront of evolving infrastructure architectures and cutting-edge technologies. Importantly, both solution types serve customers across our core and AI infrastructure markets. Over time, our strategy is to deliberately transition successful Frontier innovations into standardized turnkey offerings, turning cutting-edge solutions into scalable platforms that can drive repeatable revenue growth. This approach allows us to continuously innovate at the frontier while building a growing base of scalable, high-value solutions.
Napatech's differentiation in programmable networking is reinforced by our deliberate ecosystem strategy. Our long-standing partnership with Altera provides access to advanced programmable silicon and a mature development ecosystem, enabling scalable, data center-class solutions that combine hardware acceleration with robust software. This relationship strengthens our ability to address AI inferencing architectures, where performance, flexibility, and efficiency are critical. The products and solutions are now becoming generally available. In this next section, I'd like to share two updates that exemplify the success and potential we see from these new solutions. This slide highlights how our technology is being adopted within AI inferencing environments by d-Matrix for Ethernet-based scale-out networking. In 2025, d-Matrix completed the public launch of their JetStream product, powered by Napatech.
We remain excited about the potential from this design. d-Matrix is the creator of Corsair, the world's most efficient artificial intelligence computing platform used for inferencing in data centers. Inferencing is where AI transitions from experimentation to real-world deployment, generating decisions and insights at scale. As these workloads expand, networking becomes a critical determinant of performance, latency, and overall system efficiency. Our programmable NIC solution is designed to support distributed multi-node AI inference across scale-out networks. Now we are closely working with the customer as they move toward broader deployment phases. We believe this positions us well as these platforms transition from initial integration into scaled production environments. Importantly, this solution is architected for replication.
The requirements we address here, low latency, deterministic performance, hardware-accelerated traffic steering, and CPU offload, are consistent with what we see across multiple AI infrastructure opportunities. This is how we invest at the leading edge, validate in demanding environments, and then extend those capabilities across additional customers pursuing AI deployments. On this slide, we highlight a second strategic proof point with a leading Tier One server platform provider, one many of you will recognize from our prior updates, and I'm pleased to share that the program continues to progress as planned. We have delivered the defined solution milestones on schedule and in line with the customer's technical requirements. As is typical with Tier One OEM product programs, this engagement is following a disciplined, multi-phase development and deployment process.
We are now in an active proof of concept phase, focused on two high-value acceleration vectors, supported closely by our silicon partner, Altera, throughout the program. The first is inline data reduction, including hardware-assisted compression and deduplication assist, aimed at improving storage efficiency and lowering overall infrastructure cost. The second is low-latency AI storage interconnect, leveraging RDMA-based data movement to enable deterministic, high-throughput access between compute and storage, a foundational requirement for scalable AI workloads. Most importantly, execution is tracking exactly along the structured path we established, from initial technology validation through proof of concept into early deployment stages, and ultimately towards scalable production. This disciplined progression continues to reduce technical and commercial risk while building momentum toward broader adoption. The last few slides of our update today provide more details on the financial results for our fourth quarter and full financial year of 2025.
I'd like to ask our CFO, Klaus, to provide this update.
Thank you, Kartik. Revenue in Q4 of $7.9 million was up 48% compared to Q4 2024, and up 26% compared to Q3 2025, both measured in US dollars. This brings the full year revenue to $22.4 million for 2025, an improvement of 33% compared to 2024. This was within our latest expectations, whereas the revenue in DKK for the full year 2025 ended at DKK 147 million, slightly below the latest guidance of DKK 150 million-DKK 190 million, mainly due to FX and a few orders being postponed into 2026. Our gross margin in Q4 of 69.9% was up more than 4 percentage points compared to Q4 2024.
Gross margin in 2025 of 69.5% was up 1.3 percentage points compared to 2024 and was within the guidance provided in November. Our staff costs and other external costs in Q4 amounted to DKK 40.4 million, down 10% compared to Q4 2024, mainly due to reduced costs of subcontractors and personnel during 2025 to balance costs to revenue. More details can be found on this slide. You can also see that our EBITDA in Q4 amounted to DKK -3.8 million, which was an improvement of DKK 13 million versus Q4 last year. For the full year of 2025, EBITDA was DKK -58.6 million. Net cash flow in Q4 was a positive DKK 10 million, and thereby our cash position expanded to DKK 127 million.
This development was driven by a tax payment received due to the tax credit scheme in Denmark and a positive impact from inventories. We continue to focus on reducing the inventory, while of course also ensuring we have sufficient stock to deliver on orders. Further, we experienced a positive development in payables and provisions, which was partly countered by a higher trade receivable balance at the end of the year, mainly due to the strong revenue at the end of 2025. Net working capital at the end of Q4 was DKK 83 million, a reduction of DKK 8 million compared to Q3 2025. Net cash flow for the full year 2025 was DKK 66 million, positively supported by the successful capital raise of NOK 200 million, or around DKK 130 million, in Q2 2025.
For 2026, we are guiding unit sales between 8,700 and 10,700. Units sold are expected to be split equally between core infrastructure and AI infrastructure. At the current exchange rate, this corresponds to revenue of DKK 200 million-DKK 240 million. Ending in the middle of the range would be equal to growth of approximately 50% compared to 2025. As the units for AI infrastructure are expected to come in at a lower gross margin, the gross margin is expected to land between 60% and 70%. We currently don't plan to expand our workforce significantly in 2026. Therefore, staff expenses and other external costs are planned at DKK 170 million-DKK 180 million, slightly up from 2025 due to salary increases, normalized bonuses, inflation, et cetera.
Staff costs transferred to capitalized development costs are expected to be DKK 5 million-DKK 8 million in 2026, in line with 2025. With these assumptions and a performance in the middle of the range, we're guiding for an EBITDA of DKK -25 million in 2026. Units sold would be around 9,700 in the middle of the range, which is slightly lower than earlier communicated, following a shift to high-value products coming with a higher average selling price. As we wrap up today's presentation, we would like to invite you to visit Napatech at one of these upcoming events. Our full year event plan is shown online at the link provided. If you happen to be in one of these great cities during the first half of 2026, we would love to meet you in person.
With that, we are now ready for the Q&A. Operator, we are now ready to take the first question.
Thank you. As a reminder, if you'd like to ask a question and have joined via the telephone lines, please press star followed by one on your telephone keypad. If you'd like to remove that question, you may press star followed by two. Our first question for today comes from Christoffer Bjørnsen of DNB Bank. Your line is now open. Please go ahead.
Yes, thank you. This is Christopher from DNB Carnegie. I just wanted to follow up on the opening up for a lower low end of the guidance range for volumes in 2026, and the even distribution of the total number between the existing business and the new business. Can you just help us understand if the lower end of the range reflects caution on the legacy business as well, or if it's primarily about the uncertainty around timing of the ramp with d-Matrix or other new big sockets like the Tier One server OEM?
Oh, thank you, Christopher, for that question. The overall state of the business still remains positive for us, and the reason we're providing the guidance the way we did on the units is there is a mix of products, especially when it comes to speed of connectivity.
Customers are migrating from the lower end of the speeds, 100 gig, to 400 gig, and that speed mix causes a unit volume shift, which is one of the components in play. Naturally, the number of units goes down, but it does not necessarily impact the revenue profile as much because the ASPs are correspondingly adjusted as they migrate to the higher speed. That's what we are providing as part of the guidance, because of the migration of the Ethernet speeds to the higher end of the portfolio.
Okay, thank you. I appreciate that. Then the final follow-up on the volumes. I think you've indicated on the previous earnings call, or on a podcast or something, that d-Matrix was likely to be in the 2,000-unit range or something like that for 2026. Does that imply that you are also expecting the Tier One server OEM to drive significant volumes towards the latter part of this year? Or is it primarily d-Matrix for the new business?
We do have d-Matrix baked in as part of the planning for 2026. For the Tier One server OEM, the timing still remains something that we're constantly watching. We haven't broken it down to a point where that level of volume is baked into the 2026 numbers here. Like I said in my prepared comments, it's going through that predictable, disciplined approach of POC into qualification and launch. We remain extremely excited about that prospect with that Tier One server OEM. For now, the 2026 number is biased towards what we do in our core infrastructure and what we're doing with d-Matrix. We definitely remain positive on the Tier One.
All right, thank you. Finally, just on the cost base. If I remember correctly from the Q2 presentation, where you gave this guidance for volumes and OpEx for 2026 and 2027, it seems you were guiding for significantly lower OpEx in DKK million for 2026 than you're currently guiding with the updated outlook. I'm just curious to hear if that's... You know, you're not adding a lot of people, you said, but is this reflecting you guys seeing a lot of opportunities and chasing those more aggressively? It just seems like you're now planning for a higher cost base than you initially did for 2026; you were planning to cut, and now you're not cutting.
Which is, I think is a positive sign, hopefully. Yeah, interested to hear your thoughts on that.
Yeah, we're definitely looking at new business and how we can go after that business in the most meaningful manner. The idea is to keep our costs balanced to a point where we're going after business that we have a high level of assurance on. That's how we're playing this for 2026, Christopher.
All right. Thanks. That's all.
Thank you. At this time, we currently have no further questions from the telephone lines, so I'll hand it back to the management team.
Thank you. We have received a couple of questions. Let me just get in here. The first one is coming from Louis Lamar, where he's asking: What is the impact of DDR4 pricing and availability on your unit and revenue forecast? Thanks.
Yeah, good question, Louis. The overall industry is definitely watching memory allocation and availability very closely. We are no exception. We do use DDR4 as part of our portfolio. The shortage isn't too much of a surprise for us. We put mitigations in place right at the start of the year. We have balanced our entire planning all the way through the end of the year. I'm not worried about the availability of DDR4 in fulfilling our 2026 orders, nor am I worried about the impact on pricing. We're good there, both on the pricing and the availability.
Thanks, Kartik. Then a connected question from Lars Christian Luel: How do the tight capacity constraints on memory affect your value chain and delivery by increasing production to stock?
Yeah, it's similar to Louis' question. The constrained availability of memory is something that was made visible to us a while ago, and we put mitigation plans in place with our contract manufacturer. We are not concerned about that through the end of the year, at least, and we'll continue monitoring it and planning for 2027 also.
Thank you. There's a question from Lars Eric Hansen: Could you please elaborate on the status and future of the AMD collaboration? Which products are developed, how many products/revenue for the products and the roadmap going forward?
Well, that's also a good question. As a company, Napatech is actually fortunate to have very good relationships with both AMD Xilinx and Altera. Our currently shipping core infrastructure business with our turnkey solutions has a high mix of Xilinx FPGAs, and now we're transitioning our AI infrastructure to a healthy mix of Altera. We have a good balance of both AMD products and Altera products in our mix, and we'll continue to do so because both of those relationships are important to us.
Thank you. There's a little bit longer question here from Olav Mellingstad: You are clearly positioning Napatech as the open Ethernet alternative, as highlighted by the collaboration with d-Matrix for AI networking interface cards. If I have understood it right, with its new Rubin platform and Spectrum-X Ethernet switches, NVIDIA has started integrating optics directly into the chips and is pushing to sell the entire network stack as a single bundle. How will Napatech avoid being marginalized now that NVIDIA is going all in on Ethernet and can aggressively price its SuperNICs as part of the complete package? What exactly is Napatech's technological moat when going head-to-head with NVIDIA's Spectrum-X in these data centers?
Good question, Olav. The way I see this, I actually do not consider Napatech to be an NVIDIA competitor. In fact, we are complementary solutions. If you look at NVIDIA's strategy in some of their engagements and announcements with the likes of Groq, you'll see that NVIDIA is saying that the inference play is quite a bit different than training, and it requires specialized hardware, which is along the lines of our partnership with d-Matrix. Right? If you look at the inference acceleration vendor landscape and the networking requirements that it places on products, that's where our digital moat is: providing a high-value, highly efficient inference accelerator scale-out network.
That's kind of where Napatech's moat is, and that's why you see companies like d-Matrix providing high levels of efficiency in a scale-out network with a Napatech solution.
There's another question here from Lars Eric Hansen: What's the assumed share of delivery of number of units for the largest customer?
Our current customer base and customer profile are evolving. Our turnkey solutions going to our core infrastructure customers are now slowly giving way to our frontier solutions going into the AI infrastructure. That is, the number of units going to our largest customer is beginning to go down, but that's all positive news for us because the overall mix of our business is changing. The share of the number one customer is down, but that's expected behavior from our side because frontier solutions are going into AI infrastructure. NETSCOUT had a very strong 2025 with us. NETSCOUT still remains a very important and critical customer of ours, but the mix, like I said, is changing.
Yeah. Then there's a last question here from Lars Christian Luel: Will you need more cash in 2026? Perhaps I can answer that one. We ended the year with cash of DKK 127 million, up DKK 10 million from Q3, so the expectation is that we will be able to live on that for a long time. If you also look at our guidance, if we end up in the middle of the range, we will have a negative EBITDA of DKK 25 million. We should have sufficient cash to not do any capital increases in the near future. I think that was the last question we had in the chat. Are there any questions on the call, operator?
Currently, no further questions.
Thank you. I think we can close the webcast for now. Thank you everyone for joining us today, and thank you for listening in. You can find the webcast later on our website.