Hello. Can people hear me now?
Yes. Sorry. I'll just turn that off.
Sorry. It appears we did have a glitch there with my audio, so I'll actually just go back a slide. Sorry, everyone. What I'll do here is just start again. My apologies. So the first thing I'll start with is our management objectives. We had three core management objectives: to maintain a strong balance sheet, invest in AI-focused R&D, and develop new market opportunities. Numbers two and three really run counter to number one. We knew that this year wasn't the year that you wanted to go out and raise capital. We wanted to have a lot of firepower to take advantage of the AI opportunities that are out there, and so we wanted to make sure that the balance sheet was strong.
We wanted to invest in our innovation, and we wanted to make sure that we were building new channels as AI started to change the way in which things work. Just on the numbers, again, they speak for themselves, and I was just talking to myself about them a couple of minutes ago, but I'm sure you can see the focus on cash, the focus on a strong balance sheet, but also the focus on investment and on technology to ensure that we have high gross margins. The team did a fantastic job here. And then also, just on the left-hand side here, we're talking about the shareholder register. I really put that up there so that people could see that, as founders and management, we are very invested in the company.
We have a lot of skin in the game, and we're very aligned with shareholders. We see the AI opportunity as one of the largest opportunities we've had since the business started in terms of building substantial shareholder value. We remain committed to that, and we will win, like all shareholders, as we execute our strategy. So, on to AI innovation. This is as much about evolution as it is about revolution. We've been able to evolve our core technology platform from some of our core base assets. If you look at some of the assets that we have, for example, for our Turing language model, we have 15 years' worth of data. We have a huge range of expertise in our team around machine learning from way before AI became as popular as it is now.
So we've been able to take that data and build a model that can outperform just about all of the large language models that we have tested it against. We've then got a huge focus on AI agents and on connectors, and on the ways we can streamline content and automate processes. We have Verify AI, which is our core engine that uses quality estimation and human in the loop to completely change the way in which translations are completed, offering significant cost savings to our customers. And Orchestrate and Collaborate give our customers a way to configure the platform to suit the way that they work. And then finally, you've got the user interface layer. We've talked about workplace apps for a while now, things like Slack and Teams.
And we absolutely see that the future, and the way in which our customers will work, is going to be based around the tools that they use every day. Complete as a stack, we're talking about the Straker Language Technology Platform. It is the future of the industry, and we're going to be at the heart of building one of the most innovative platforms, one that's going to have significant value for our customers. So, a little bit more about Turing. Turing came about because, a couple of years ago, we invested in an AI team. We put that team together at the start of the AI hype wave, I guess, and these are incredibly talented AI engineers.
What they have been doing is focusing very heavily on comparing our models to many other models, scoring and ranking those models, and helping us understand which underlying AI technology will add the most value to our business. We've tested it, and it outperforms general-purpose large language models quite considerably. It is a smaller language model. It's very specific, and it can be integrated into customer-specific, language-specific, or vertical industry-specific use cases. Being a much smaller language model, it's also far more cost-effective; the base cost of the compute is lower. Turing is revolutionary for us. We had a lot of the components, and now we have a very substantial offering which customers understand and we can deploy quickly.
If you look at Verify, Verify is about using quality estimation, based on the Turing language models, as a way to rank and score content and to determine whether it needs to be human translated. So what we're offering customers here is a way to use a lot more machine in the process, a far more effective way of getting content translated than the traditional post-edit process, where a human tidies up a Google Translate-style output. The business impact for us is higher margins, faster turnaround, and lower cost for the customers. It's a far more efficient way to translate. Now I'll move on to some of the vertical applications. As we've said, we've got Turing, we've got Verify, we've got Orchestrate.
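The quality-estimation routing just described can be sketched as follows. This is a hypothetical illustration only: the threshold, function name, and scores are invented, not Straker's actual API.

```python
# Hypothetical sketch of QE-based routing: a quality-estimation (QE)
# score decides whether a machine translation is good enough as-is
# or should go to a human in the loop. All names and values invented.

QE_THRESHOLD = 0.90  # assumed minimum acceptable QE score

def route_segment(qe_score: float, threshold: float = QE_THRESHOLD) -> str:
    """Return 'machine' if the segment clears the QE threshold,
    otherwise 'human' so a reviewer completes the translation."""
    return "machine" if qe_score >= threshold else "human"

# Example: two segments scored by a (hypothetical) QE model
scores = {"greeting": 0.97, "legal_clause": 0.72}
routes = {name: route_segment(s) for name, s in scores.items()}
# routes == {"greeting": "machine", "legal_clause": "human"}
```

The design point is that only low-confidence segments consume human effort, which is where the margin and turnaround gains come from.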
We've got all of the components that allow us to now start deploying very innovative solutions into verticals. One of the verticals that we have focused on is the Japanese financial markets. We've talked about the SwiftBridge AI project. Basically, the Tokyo Stock Exchange has made it mandatory for its listed companies to release market announcements in both Japanese and English. That's come into effect, and what we have been working on is a tool that allows these companies to do that easily. Again, it's about having a Turing language model that's trained on Japanese financial services data. We can further refine that for the specific industry or customer that we're talking to. It can very quickly produce a summary within a day, which is what is needed for releases to go out automatically.
It can understand the nuances of that customer or industry and what they need to say, and it's a secure and robust platform. We've signed up a major technology distributor in Japan; they have a 60-strong sales team, and this project is well underway. As I said before, the requirement became mandatory a month or so ago, and we're not actually in the reporting cycle yet for Japanese companies. As they get into their reporting cycle, and as we start to onboard some of these companies and get through that process, we'll report a little bit more information around this, possibly at the AGM or at the half year, as we start to get some data that we can share with shareholders. So this is very exciting, and again, it is based on the base platforms that we have in place.
And it does open up new opportunities as this starts to work in this market. Korea has a similar market, there are similar processes in the Indian markets, and, obviously, anywhere in the world where there is financial data, we will have a solution. So, a quick story about where we feel our technology can go as we look at AI innovation, because while we're building for the now, we're also building for the future. AI from 2011 to 2022 was pretty much post-edit translation automation. In the present, it's about AI verification of translation with human in the loop and AI agent automation. But the future is going to be about verifying any type of AI content, and we do see that we have a very strong platform with a lot of utility in that space.
What I've seen recently on some popular tech podcasts, and also in commentary from some of the AI leaders in the world, is that they can see a whole industry around AI verification with human in the loop, because of the number of models that are going to be built and the number of verifications that need to happen. Straker obviously already has that platform. We have the workbenches for the verification. We have the ability to test and validate models. We have our own models. We have 100,000 human experts across a whole range of industries and verticals. So we're already in a position where that's something we can do. But we do see it as something of considerable value that people possibly haven't realized we have.
And if you look across multiple industries: obviously, we're in the financial industries at the moment; we have markets like healthcare, where we see a lot of potential in our ability to build a Turing healthcare model; in legal, we're already experimenting in conjunction with IBM on some IP patent technology and models; and then e-commerce, media, education. So we see a lot of potential both on the language side of things and in the greater AI verification space. That's something I think we will talk about in the coming months. It's not a packaged solution right now, but it is something that we are getting interest in and are starting to explore. The next part, outside of innovation, is to grow our distribution base.
One of the things that we already have in place is very strong strategic reach. We have staff in 10 different countries, we've got channel partners, and we've got technology partnerships. So we're well placed already: we don't have to invest in growing out our channel base, we're already there, and now it's about enhancing that base. One of the ways that we have segmented how we see AI playing a part in customers and revenue as it grows is to think about the different tiers of customers that we might approach. So we have customers with an estimated spend of $100,000-plus, what we would call enterprise customers: longer sales cycle, more complex implementation, different stakeholders that you have to deal with. Everybody knows that enterprise takes longer but also has some significant rewards once you get there.
We reach these people through industry conferences and the enterprise sales teams that we have set up. You then get these vertical application type opportunities that are coming up; I just talked about the SwiftBridge AI one, and there are some in healthcare. This is where a customer might spend $10,000, $20,000, $30,000 a year on a very vertically focused solution. As for the potential: if you look at SwiftBridge, there are, say, 1,600 or 1,700 companies on the Tokyo Stock Exchange, and across the other exchanges in Japan maybe another 3,000 or 4,000. You can multiply that by that sort of $10,000 number, and you can see that there's a lot of potential inside those markets. The third tier that we look at is ecosystems.
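The back-of-the-envelope market arithmetic above works out as follows, using the lower-end figures quoted in the call (all approximate):

```python
# Rough market-size check using the lower-end figures quoted above.
tse_companies = 1_600        # companies on the Tokyo Stock Exchange (approx.)
other_jp_companies = 3_000   # other Japanese exchanges (approx.)
annual_spend = 10_000        # indicative per-customer annual spend ($)

potential = (tse_companies + other_jp_companies) * annual_spend
print(f"${potential:,}")  # → $46,000,000
```

Using the upper-end figures (1,700 + 4,000 companies) the same sum comes to $57 million, so the quoted numbers imply an addressable Japanese market in the tens of millions of dollars.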
How do we embed inside an ecosystem and then reach the end users of that ecosystem? We are, again, talking to people and exploring different opportunities around this. We're building connectors into these systems, and we've got a team set up for it. So you can see there is a range of tiers that we are focused on and exploring opportunities in, particularly as AI automation starts to change the game. That leads naturally into our ecosystem integration opportunity, which was really tier three, the bottom tier we were just talking about. Many companies are plugging large language model translation into their platforms, and that is a natural opportunity for our Verify product.
We are also seeing, as I said, AI automation tools like n8n, where we have the opportunity to sit at the heart of the content flows as people start to use AI agents to change the way they work. So we are building these integrations; that's how we're spending our R&D. And we have enterprise and ecosystem salespeople out there talking to these types of companies and working on partnerships, looking at what a new partnership model could look like around this. So again, a very exciting time, and this type of opportunity, being at the heart of an AI content workflow, didn't exist two years ago. So, transitioning to AI: this is really just one simple slide, but in our current state, we're cash flow positive, we're profitable, and we've got a good, solid customer base.
We're going to take those customers on a journey across to the Straker Language Technology Platform, and we want to transition them from a traditional per-word way of doing translations to a higher-margin SaaS and AI token consumption revenue model. As you can see, it's a simplistic way of looking at it, but as services and historical technology drop away, and we start to move into AI services and AI technology, you start to get higher margins, and you have seen some of those higher margins reflected in our financials this year. So it is something that we're really focused on right now, and as we engage with customers, it gives us the opportunity to get feedback, see where customers are at, and see how easy we can make it for them to transition. Now I will hand over to David.
Let me just bring this up.
Thank you, Grant. I'll now take you through our financial results for FY 2025. Despite a 10% drop in revenue, we've delivered record adjusted EBITDA. This performance reflects our focus on margin expansion, operational efficiency, and our evolving revenue model towards high-margin AI services. Let's dive into the key details. Looking at our P&L, revenue came in at NZD 44.9 million at the upper end of our guidance. As expected, revenue was affected by the sunset of legacy IDEST contracts and broader macroeconomic headwinds in Europe and North America. However, gross margin increased 310 basis points to a record 67%, driven by improved project mix, automation, and growth in high-margin services. Adjusted EBITDA rose to NZD 4.8 million, or 10.6% of revenue, our highest ever. This is a 6% increase on last year, despite the lower top line.
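As a quick sanity check on the figures just quoted (rounded NZD millions, so the computed ratio lands a whisker above the stated 10.6%):

```python
# Margin arithmetic from the rounded P&L figures above (NZD millions).
revenue = 44.9          # FY2025 revenue, upper end of guidance
adjusted_ebitda = 4.8   # record adjusted EBITDA

ebitda_margin = adjusted_ebitda / revenue  # ≈ 0.107, consistent with the ~10.6% quoted
gross_profit = revenue * 0.67              # implied by the 67% gross margin (≈ NZD 30.1m)
```

The small gap between 4.8/44.9 and 10.6% simply reflects rounding in the headline figures.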
Underlying operating expenses, which exclude non-cash impairment losses and depreciation and amortization, improved 9%, offsetting the decline in gross profit. We also recorded several non-cash charges, including NZD 6.8 million in impairment losses, primarily due to the full write-down of IDEST and North American goodwill, reflecting contract non-renewals in these segments. Finally, a revised software amortization policy, changing useful life from five to three years, led to a one-off NZD 2.9 million increase in amortization. These are accounting changes that don't impact our cash position, but they do affect net profit, which came in at a loss of NZD 10.2 million after tax. On to the next slide. To better understand the revenue movement, this slide breaks down the drivers. The exit of IDEST contracts impacted revenue by NZD 3.7 million.
Other language services declined by NZD 6.1 million, largely in EMEA and North America, reflecting the project-based nature of the work and customer budget cuts. On the upside, Verify added NZD 1.1 million, while managed services grew strongly, contributing NZD 4.7 million in its first full year. Overall, we delivered NZD 44.9 million, landing at the upper end of our NZD 43-45 million guidance. Now, on to the next slide. Despite the top-line pressures, we achieved solid improvement across key performance metrics. Gross margin saw a significant improvement thanks to a stronger revenue mix and operational efficiencies; the chart on the right visualizes an improvement of 12.7 percentage points, to 67%, over the last four years. Adjusted EBITDA margin rose from 9% to 10.6%, our first step into double-digit EBITDA. On the balance sheet, we closed FY 2025 in a robust position.
Our cash position increased to NZD 12.9 million, up NZD 750,000 year-on-year. Working capital also strengthened by 21%, reaching NZD 13.4 million. Importantly, we remain debt-free, which positions us well to continue investing in AI innovation and scale initiatives while maintaining strategic flexibility. Next slide. So turning to cash flow, our focus on cost discipline and operating efficiency allowed us to deliver NZD 3.4 million in operating cash flow and NZD 1.2 million in free cash flow. This was achieved despite softer receipts, which fell 13% year-on-year to NZD 45.6 million. Our CapEx also reduced by 17% to NZD 2.2 million, reflecting maturing product development cycles. And that concludes the financial section. Back over to you, Grant.
Thanks, David. Just a last couple of slides before we get into Q&A. And please, there is a Q&A option here, so if you've got any questions, put them in the Q&A.
But look, in terms of the investment highlights: we have proprietary technology. We've built these models, and they're based on assets that are unique to us. We're starting to commercialize these models. We've got growth infrastructure in place; we're not a startup that needs a lot of money to figure out how to go global, we're already global, and we already have these things in place. We've got AI revenue growth, and we're transitioning to a higher-margin AI-powered service with product-market fit. And, fair enough, on the other side of that ledger there is a legacy side of our business, which, as we've said, will decrease as this increases. So, like many companies, we are being brave enough to transition across to this AI world as quickly as we can. We have a healthy balance sheet.
So we have some firepower that enables us to take advantage of AI opportunities as they come up. And we're in a large market opportunity; several people give different numbers on the size of the industry, and I think it is morphing a bit as AI starts to play more of a part. But all five of these components leave us in a very strong position as we go forward. On outlook and management priorities: we're going to maintain this strong financial position, but we also want to deploy our balance sheet and our capital towards high-growth initiatives. So if we start to see SwiftBridge take off, and we know that we could be in three other countries, we're going to start to deploy our balance sheet to take advantage of those opportunities.
We can see, again, expansion into AI ecosystems; we really are pushing that. Lee has set some big targets for what he wants to do inside those ecosystems, and we're backing him with our investment in R&D, which comes down to targeted R&D investment. We are looking to make sure that this is the year we turn all this innovation into commercial success, and we can see some green shoots of that starting to happen. Margin optimization is always top of our thinking: we know very well that, as a rule, the higher the margins you make, the more drops to the bottom line, so we are very focused on margin optimization and operational efficiency. Like many others, we are looking to use AI agents in the process.
We are well advanced in this. We have the technology that we've been rolling out and testing, and so we're very comfortable with where we're going across all of these fronts. That concludes the main presentation, and now we can move on to some Q&A. I'll just get past all the messages where people couldn't hear me. First question: how is SwiftBridge AI priced, what is the structure of the relationship with Iguazu, and who else do you compete with there? We are pricing a base-level fee and then a consumption fee for customers. We have different tiers in that market; as a rule, it's around about that $10,000 price for customers, but obviously there is a consumption element depending on what they do. The relationship with Iguazu is that they are a distributor and reseller.
They work closely with IBM as well, and we've got a good relationship with IBM. I went to Japan and signed that relationship with Iguazu, so they are very motivated to sell the solution. When I was in Japan, one thing that really stood out to me was how much Japanese companies want to embrace AI and do more, but they're very conservative, so they haven't done the same level of innovation that perhaps some New Zealand or Australian companies might have done. So there is good synergy there with where they're at. Next question: what's the ballpark cost of training Cherry on a new vertical, e.g., healthcare? Really good question. From what we've found out about our models, we know that if you have 50,000 segments of text, you can train a model to outperform a large language model, so we need 50,000 segments.
And then, if you get another 10,000, you can train it very specifically in a vertical or in a niche part of what that model is trying to do. It takes about eight hours to train a model, so these smaller models are super energy efficient, but you get the best bang for buck on the output. It does not take a considerable amount to train one; in developer time, it's about seven days as a rule for two or three AI developers to focus in there. Next question: how much does charging per word versus per token fundamentally change pricing, and how correlated are they? Yeah, very good question again. The per-word pricing structure was really very convoluted for customers.
You would do a machine translation, and then a translator would come and post-edit it, tidying it up, and you would calculate how many words there were and discount based on how many words had already been translated. What happens with Cherry X and Cherry is that it's the amount of tokens it takes to do the first machine translation, so it's completely different to anything you've seen in the historical translation context. And then, as it does quality estimation on content, that's another token price, a different token price for that engine, because it's very specific to doing that.
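A hypothetical sketch of per-engine token pricing along the lines described. The rates below are invented purely for illustration; the call does not disclose actual token prices.

```python
# Invented per-1,000-token rates for illustration only; each engine
# pass (machine translation, then quality estimation) is billed
# separately, at that engine's own rate.
RATE_PER_1K = {
    "translate": 0.020,         # first machine-translation pass
    "quality_estimate": 0.005,  # QE engine has its own, different token price
}

def pass_cost(engine: str, tokens: int) -> float:
    """Cost of one engine pass at that engine's per-token rate."""
    return RATE_PER_1K[engine] * tokens / 1000

# A 12,000-token job: one translation pass plus one QE pass
job_cost = pass_cost("translate", 12_000) + pass_cost("quality_estimate", 12_000)
```

The contrast with per-word pricing is that cost scales with each engine pass actually run, rather than with a word count discounted for pre-translated material.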
What might happen is that you could ask it to do two or three rounds of quality estimation and what we call quality boost, to try and change that content and get it right using the machine, or you could send it to a human in the loop straight away. This is where we have Orchestrate, because it allows customers to configure their settings and decide how they want to pay. If they only want 90% quality output, then it's a lower price for them. They can set the document type and the use case for that content. This is why it's very unique and very new. That's all the questions I can see, and there's nothing in the chat. I would like to thank everybody for attending, and I apologize again for the sound issue at the start.
It's the first time, I think, we've had that sort of issue in one of these presentations, so my apologies. I thank all the shareholders for their support, and I'd really like to thank the Straker team for delivering a fantastic result in pretty dynamic macro business conditions. We look forward to giving everybody an update at the half year on our progress, if not sooner, as we start to have successes. Thank you very much.