Appen Limited (ASX:APX)

Earnings Call: H2 2024

Feb 26, 2025

Operator

I would now like to hand the conference over to Mr. Ryan Kolln, CEO and MD. Please go ahead.

Ryan Kolln
CEO, Appen

Thank you very much, operator, and good morning, everyone. Welcome to Appen's FY24 results presentation. Today I'm joined by our CFO, Justin Miles. There are four sections to the presentation, as per the agenda on page three. First, I'll share some highlights from our FY24 performance. Second, Justin will provide greater detail on the financial performance for the year. Third, I'll share an update on the market and our strategy. Finally, we'll provide a 2025 outlook statement. Moving to page five in the presentation, where I will share highlights from FY24. There are six of these. First is that group revenue grew 16% year on year if we exclude the impact of Google. Second is that China grew a very impressive 71% year on year. Third is that we continue to win work in large language model-related projects.

Fourth is that our AI data annotation platform is becoming increasingly important for our large technology customers, particularly for LLM-related projects. Fifth is that we were able to deliver revenue growth while reducing OPEX by 26% compared to FY23. Finally, we returned to profitability. We achieved $3.5 million in underlying EBITDA in FY24, up from a loss of $20.4 million in FY23. I'll now step through each of these in a little more detail. Turning to page six in the presentation, excluding the impact of Google, we experienced a return to revenue growth in FY24, largely due to the rise of generative AI-related projects. Excluding Google, revenue for Q4 FY24 grew an impressive 37% compared to Q4 FY23, reaching $66.7 million. Growth continues to be driven by large technology companies, both in the US and China.

Turning now to page seven, China had a breakout year in FY24, delivering 71% year-on-year growth. The growth is on the back of a very impressive set of customers, including major LLM model builders, along with leading technology and auto companies. It's worth noting that most projects in China utilize an in-facility workforce rather than a crowdsourced model. This results in a more predictable revenue profile, as commitments are typically longer in duration. In the LLM market, there is strong competition between the US and China. Appen has the unique position of working with both US and Chinese customers on their AI data needs. This enables us to participate in both sides of a very competitive market and also brings insight to our customers about the broader AI ecosystem. Over to page eight. As discussed, generative AI has been a major growth driver for Appen.

In H2 FY24, 28% of our revenue was from generative AI-related projects. This is up from 6% in H2 FY23. Looking at the chart on the right-hand side, you can see that our traditional non-LLM work has been very stable, growing around 2% half-on-half. The LLM growth is on top of a very stable core business. Turning now to page nine, our annotation platform, called ADAP, is proving to be a valuable asset for our large technology customers. Our global product segment represents services that are delivered using ADAP. This has the benefit of giving us more control over projects, including the ability to bring in automation and real-time quality controls. For our largest US technology customers, most of our work has traditionally been delivered on their internal platforms.

With the rise of LLM work, we are seeing strong use of our platform due to advanced features designed specifically to support complex LLM projects. Over to page ten. While delivering positive revenue growth, we managed our costs very tightly and reduced OPEX by 37% from H1 FY23 to H2 FY24. There were a few main drivers. We significantly reduced product and engineering spend by establishing a technology hub in Hyderabad, India. We consolidated business units and rationalized delivery resources, and we minimized corporate overhead. We remain highly focused on managing the cost base in line with the revenue opportunity. Now on page 11, the culmination of revenue growth, improved gross margin, and cost discipline resulted in a return to profitability for Appen. In Q4, we achieved an underlying EBITDA profit of $4.7 million, a major improvement on performance throughout FY23.

I'll now hand over to Justin, who will go into more detail on our financial performance for the year.

Justin Miles
CFO, Appen

Thank you, Ryan. Good morning, everyone. A reminder that we report in US dollars and that all comparisons are to the full year ended 31 December 2023, unless stated otherwise. Starting with the FY24 snapshot on slide 13, total revenue decreased 14% to $234.3 million, reflecting the termination of the Google contract. Pleasingly, when excluding the impact of Google, revenue grew by 16%. Within our operating segments, global services revenue decreased 38% to $118.1 million. This was impacted by the Google contract termination. New markets revenue grew by 43% to $116.2 million due to strong growth in China and global product. This growth is pleasing as it is driven by continued traction in generative AI projects. Our gross margin percentage, which is revenue less crowd expenses, increased 3 percentage points to 39.3%.

The increase was mainly due to a change in project and customer mix over the course of the year. Underlying EBITDA before the impact of FX improved $23.9 million to a $3.5 million profit. The significant improvement is due to a return to revenue growth following the loss of Google and the cost-out programs executed during FY23 and H1 2024. I won't talk to slide 14, as we cover revenue in further detail on later slides. Over to underlying EBITDA on slide 15. As I just mentioned, group underlying EBITDA before FX improved $23.9 million to a profit of $3.5 million. The significant improvement is due to the cost-out programs executed, with our operating expenses decreasing 26% compared to FY23. The global services division reported EBITDA of $14.7 million, down 16% on the prior corresponding period. The decrease reflects lower revenue and gross margin, partially offset by the benefit of the cost-out.
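As a rough illustration of the gross margin definition Justin uses (revenue less crowd expenses), here is a minimal arithmetic sketch; the crowd expenses figure of roughly $142.2 million is an assumption backed out from the stated 39.3% margin, not a disclosed number:

```python
# Sketch of the gross margin arithmetic described above.
# revenue is the reported FY24 figure; crowd_expenses is an
# assumed value implied by the stated 39.3% margin.
revenue = 234.3          # USD millions, reported FY24 total revenue
crowd_expenses = 142.2   # USD millions, assumed (not disclosed here)

gross_margin_pct = (revenue - crowd_expenses) / revenue * 100
print(f"{gross_margin_pct:.1f}%")  # → 39.3%
```

The same calculation against FY23's roughly 36% margin would show how much of the EBITDA improvement came from mix rather than volume.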

New markets EBITDA improved by $24.6 million to a loss of $8.1 million. The improvement was driven by growth in revenue and gross margin for global product and China. Looking at H2 compared to H1 for FY24, H2 improved by $7.7 million to a small loss of $200,000 compared to a $7.9 million loss in H1. Slide 16 shows quarterly revenue, underlying EBITDA, and underlying cash EBITDA, both before FX. As you can see, EBITDA improved quarter on quarter during the year, driven by significant traction in generative AI projects, as well as the cost-out program executed during H1. Turning to slide 17. This slide shows quarterly global revenue, with Google excluded. The reduction in spend from a large customer experienced during FY23 stabilized in H2 2023, with growth returning in Q2 2024. Global product growth is driven by multiple generative AI projects.

It is important to call out that, given the LLM market is evolving rapidly and there is significant experimentation, volumes for these projects can be inconsistent, with large volumes over a short period of time. Global services growth is driven by an increase in projects and volumes across multiple customers. Over to slide 18. Ryan has already talked about China's impressive 71% revenue growth compared to FY2023. As Ryan mentioned, China has a more predictable revenue profile. However, it is important to highlight that gross margins for China are generally lower than other divisions. Slide 19 has revenue for the balance of the new markets segment, being enterprise and government. The decrease in revenue was driven by lower volumes within some existing large enterprise projects, including some projects coming to an end. Despite the disappointing result, we have conviction in the revenue opportunity; however, timing is unclear.

Uncertainty continues around how enterprises will proceed with generative AI investment. There is a healthy government pipeline; however, awards continue to be infrequent and linked to government budget cycles. It is important to note that our investment is being carefully managed to ensure it is proportionate to existing volumes and the near-term opportunity. Turning to slide 20 for a summary of the profit and loss. We've already covered most line items; however, there is additional data on this slide worth noting. Employee expenses are down 29%, and all other expenses are down 20% compared to FY23. This is due to the cost-out programs that were executed in FY23 and H1 FY24. EBIT has improved by $98.1 million due to the cost-out, lower restructuring costs compared to the prior period, and a reduction in depreciation and amortization. Also, FY23 included a non-cash impairment charge of $69.2 million.

To the balance sheet on slide 21. The cash balance at 31 December 2024 was $58.4 million, up $22.7 million from December 2023. The reported balance was impacted by a $10 million payment from a major customer that was received in the first week of January 2025 rather than December 2024 as scheduled. This did not have any impact operationally. Receivables and contract assets combined increased $0.9 million, despite lower revenue in Q4 FY24 compared to Q4 2023, primarily due to the $10 million customer receipt just mentioned. Non-current assets include intangible assets of $30.1 million. The decrease in non-current assets of $7.1 million was mostly due to amortization of the platform at a higher rate than new investment in product development.

Total liabilities decreased $6.1 million due to the $3.8 million settlement of the Quadrant earn-out liability by the issue of ordinary shares, as well as the decrease in non-current lease liabilities. The increase in net assets to $114.3 million reflects the equity raised in Q4 2024, offset by trading during the period. Turning to the cash flow summary on slide 22. As just mentioned, the cash balance at the end of the period was $54.8 million. Cash flow used in operations improved by $22.4 million to $1 million. The cash balance and cash flow from operations were impacted by the timing of the $10 million customer receipt in the first week of January 2025, already noted. Cash flow used in investing activities was $7.8 million lower compared to FY2023 due to the lower investment in product development.

Cash flow from financing includes $42.1 million net proceeds from the equity raised in Q4 2024. Cash was used to fund operations, some CapEx, and one-off costs associated with the cost reduction program executed during H1. That concludes the financial performance slides. I'll now hand back to Ryan.

Ryan Kolln
CEO, Appen

Thanks, Justin. I'll now provide an overview of our strategy and share a 2025 outlook statement. I'll start on page 24 with a high-level view of Appen's role in the AI ecosystem. It's well known that high-quality AI requires high-quality data. With better data comes better model performance. Appen specializes in the creation of high-quality data that brings human expertise into AI model development. The work we do is highly customized to the needs of our customers, and there are three main categories of work. The first is data sourcing, where we are creating unique data sets for our customers. An example of this is where our crowd records their voice, and the data is used to build speech recognition models. The second is data annotation, where we enrich existing data sets.

An example here is where we are provided prompts for a large language model, and our workforce is tasked with creating a response. This data is then used in generative AI model development in a process called supervised fine-tuning. The third is model evaluation, where we provide feedback on the performance of models. You can think of this as model QA. An example here is where our contributors are provided multiple responses for a specific search term. The task is to provide feedback on which response they prefer, including the reason for their preference. These are simple examples; however, the work we perform is highly customized for our clients and increasingly complex. Tasks are often multi-step, and in some instances, it can take more than two hours to complete a task. I'll go into a bit more detail on this on page 25.
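The two LLM-related examples Ryan describes, human-written responses for supervised fine-tuning and preference feedback for model evaluation, are often represented as simple records along these lines. This is a sketch only; the field names and values are illustrative, not Appen's actual schema:

```python
# Illustrative shapes of the two data types described above.
# All field names and contents are hypothetical examples.

# Supervised fine-tuning: a human contributor writes a response
# to a prompt supplied by the customer.
sft_example = {
    "prompt": "Explain photosynthesis to a ten-year-old.",
    "response": "Plants use sunlight to turn water and air into food...",
}

# Model evaluation: a contributor ranks multiple model responses
# to the same query and explains the preference.
eval_example = {
    "query": "best hiking trails near Sydney",
    "responses": ["Response A ...", "Response B ..."],
    "preferred": 0,  # index of the preferred response
    "reason": "More specific and up to date.",
}

print(sft_example["prompt"])
print(eval_example["preferred"])
```

Real projects layer multi-step workflows and quality controls on top of records like these, which is part of why tasks can take hours to complete.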

The first two rows of this chart outline some of the most common AI use cases that we support. These are some of the many different AI solutions that our customers are building, and they come to us for data to support the model development. We support a wide variety of AI solutions covering recommendation systems, search engines, computer vision, speech recognition, and generative AI. As discussed on the prior slide, there are three main services we provide: data sourcing, data annotation, and model evaluation. The services we provide are underpinned by our AI data platforms. We have a dedicated annotation platform in China called Matrix Go that is highly customized to the needs of the China market. ADAP is utilized across remaining customers that do not have their own data annotation platform. Finally, the work is underpinned by the breadth of our workforce, covering many languages and domains.

On slide 26, I'll go into more detail on our workforce. One of the differentiators for Appen is the breadth and specificity of our workforce. Our customers often have very specific requirements around the demographics and capabilities of the contributors that they need for their projects. We have access to over a million people in our crowd who speak over 500 languages and dialects, with a broad array of domain specialization. We offer our customers both a crowd and in-facility model with many sites around the world. I'll now turn to page 27, where I'll talk about the role we play, more specifically, in the generative AI ecosystem. At a broad level, there are three stages to building a generative AI model: pre-training, post-training, and evaluation. Pre-training is the initial phase of model development, where the models learn general knowledge.

The main data source for this phase is publicly available text, images, and code. Most of this data is scraped from the internet, and the process is highly automated and operates at very large scale. There is little human involvement in the data preparation phase for this step. Post-training is the next phase, where models are adapted to specific styles, tasks, context, and languages. If pre-training is where the knowledge is obtained, post-training is where the models learn how to communicate that knowledge in the most effective way. Humans play a critical role in this step. For models to communicate effectively, the best teachers are humans, particularly those who are experts in their field. It's worth noting there's a common belief that pre-training has exhausted all of the usable data available. Therefore, the major focus for model development going forward is in the post-training phase.

Finally, evaluation is an important step to ensure that models are accurate and safe. You can think of this step as QA for generative AI. Humans play a critical role here, especially for evaluations that require subjectivity. In many instances, this work is very similar to the search and ad relevance projects we have been excelling at for a long time. The takeaway here is that human data is critical for two of the three major steps in generative AI model development. Moving on to slide 28, where we provide some case studies about recent generative AI projects covering both post-training and evaluation. I will not dive into all of the details of these case studies, but I do want to highlight a few strengths that set Appen apart. First, multilingual data is a core strength for us. We have recently supported large-scale projects to improve multilingual capabilities in LLM models.

In one case, we supported over 70 languages and dialects at the same time. As LLMs expand in non-English languages, there's strong growth potential from multilingual projects. Second, large-scale evaluations are very important for LLM model development and something that we do very well. This work builds on our long history with search and ad relevance projects and often requires a very large-scale workforce in short time frames. Third, we're seeing more domain-specific projects. These need deep expertise from our workforce, like math, physics, coding, and other hard sciences. Fourth and last, our annotation platforms power a lot of these efforts. They enable us to support the highly complex and iterative workflows that are often required for LLM projects. Moving on to slide 29, where I'll share some perspectives on the market outlook for generative AI.

The generative AI market's evolving very quickly, with new approaches to model development driving a lot of that change. We see a strong outlook ahead, and it's tied to three trends that we're observing. First is that it's getting less expensive to build large language models. For example, the innovations that came out of the DeepSeek lab show how research can make the process for model development much more efficient. The lower cost means that we're likely to see a large number of models being developed in the future. Second, the cost to run models is coming down. As they get more efficient, the price per unit drops. Until recently, running LLMs at very large scale was prohibitive for most enterprises. Now that's changing, and we expect to see much broader adoption. Third, investments in infrastructure continue to grow.

Companies are putting serious capital into development and inference setups, which points to more model innovation ahead. The combination of more models, increased usage, and faster innovation will continue to drive rapid growth and unlock huge potential for the market. Now moving to slide 30, where I'll talk about our 2025 focus areas. In 2024, we made a significant amount of structural change to the business. In 2025, our focus is all about the fundamentals of quality and speed. There are six elements to our 2025 focus. First is about our market. We have high conviction in the growth of our core market, in particular the work to support LLMs. We're gearing our sales and marketing efforts towards a more technical audience and are focused more on large technology companies who are investing heavily in generative AI. Second is operational efficiency and speed.

We continue to evolve our operations, including the incorporation of LLMs into our internal processes. A recent example is utilizing generative AI to respond to questions from our contributors. Third is to grow our people. There is tremendous expertise in our team, and we are committed to supporting the growth and development of our people. Fourth is accelerating our technology innovation. In 2024, we completed a large-scale replatform of our crowd management software. This replatform has enabled us to accelerate development and bring new features to market to better serve our crowd and customers. As I mentioned earlier, our ADAP platform is critical for many generative AI projects, and we continue to build new capabilities and features into ADAP specifically for generative AI. Fifth is a focus on the evolution of our crowd workforce. The requirements of our workforce are changing rapidly due to the needs of LLM projects.

We're seeing greater demand for domain specialization and high cognitive load projects. Finally, there's our ongoing focus on prudent cost management. We continue to look for opportunities to optimize our cost base even as we pursue market growth. That concludes the strategy section. I'll now provide a 2025 outlook statement. As shared throughout the presentation, we continue to see positive signals on LLM-related growth, including from our global and China customers. The LLM market is evolving rapidly, and there's significant experimentation. Therefore, we expect to see month-on-month revenue variability. Year-to-date, LLM projects are tracking lower than Q4 2024, largely due to annual planning by some major customers. However, we remain very confident in the potential for growth in 2025. Tight cost controls remain in place in keeping with the company's focus on managing costs in line with the revenue opportunity, and we remain highly focused on ongoing cash EBITDA positivity.

Thank you. That concludes the presentation today. I'll now hand back to the moderator for questions.

Operator

Thank you. If you wish to ask a question, please press Star 1 on your telephone and wait for your name to be announced. If you wish to cancel your request, please press Star 2. If you are on a speakerphone, please pick up the handset to ask your question. The first question today comes from Josh Kannourakis with Barrenjoey. Please go ahead.

Josh Kannourakis
Analyst, Barrenjoey

Hi, Ryan and Justin. Can you hear me okay?

Ryan Kolln
CEO, Appen

Hey, Josh. How's it going?

Josh Kannourakis
Analyst, Barrenjoey

Yes, good. Thank you. Guys, I just want to clarify just within the outlook statement. I think, obviously, you've noted that the volumes are tracking lower than Q4. Q4 historically had been a slightly stronger seasonal period, and obviously, customers have their budgets, so usually the start of the year is softer. I'm just trying to work out whether there's a particular reason as to why you're sort of saying that and whether or not you're still confident in LLM volume growth for the entirety of the year rather than sitting there just talking about the first quarter.

Ryan Kolln
CEO, Appen

Yep, sure. Josh, look, we remain really confident in the LLM growth outlook for 2025. We're getting positive feedback from our customers that the growth is there and the work is there. We just wanted to be transparent that we are seeing lower volumes compared to Q4. As you called out, that's pretty normal in the business. There are two drivers of that traditionally. One has been the seasonality in our core work. The other is these replanning cycles. We just wanted to be transparent around kind of year-to-date performance, but nothing out of the norm, I would say.

Josh Kannourakis
Analyst, Barrenjoey

Got it. In the context of how investors should be looking at it, broadly, you still think that on a month-to-month basis it'll jump around, but in terms of the feedback and the conversations you're having with your big customers, you're still confident in growth when we look at the entirety of the year?

Ryan Kolln
CEO, Appen

That's correct.

Josh Kannourakis
Analyst, Barrenjoey

Got it. Okay. All right. I think that's worth clarifying. Secondly, just in terms of the non-LLM work, what sort of visibility or context do you have around that? Obviously, that seems to have stabilized. Are there any other sort of moving parts or trends that you guys are looking for in terms of what could sort of give us some indications or a bit more color around the 2025 outlook on the non-LLM side?

Ryan Kolln
CEO, Appen

Yeah. Look, all signals kind of point towards stability, and everything there looks positive. There's no real indicators that we will see any major change. As things move quickly in the market, we're not providing guidance there, but there's no reason to think there should be any difference.

Josh Kannourakis
Analyst, Barrenjoey

Got it. Final one for me, you gave some good context just around, obviously, the different parts of the value chain that you guys work within. When we were sort of talking before around the, obviously, post-training, the post-training and evaluation, can you give us a bit of a feel for at the moment, if we sort of think about what the LLM revenue is, just how much is across those things? How much is the evaluation versus post-training, just even if it's just broad splits?

Ryan Kolln
CEO, Appen

The projects, they change a little bit based on the focus of the customers, clearly. Look, it's probably a good split between those two. There's not one that's highly dominant. Sometimes the projects we do, there'll be a mixture between doing some supervised fine-tuning, as an example, which would be in the post-training and evaluation at the same time. We don't categorize it internally too much, but at the net, it's a good mix between those two.

Josh Kannourakis
Analyst, Barrenjoey

Got it. Just final one for me, just one quick extra one. Just in terms of cost base, as we look into 2025, how should we be thinking about the cost base? Therefore, if you're having growth, how the sort of operating leverage should flow through?

Ryan Kolln
CEO, Appen

Yeah. Look, I think we're thinking about the cost base as being fairly consistent. I think we can absorb some growth with the cost base that we've got other than paying the crowd, of course. Not looking to add anything significant to the cost base for the year.

Josh Kannourakis
Analyst, Barrenjoey

All right. Thanks, guys. I'll give someone else the turn. Cheers.

Ryan Kolln
CEO, Appen

Thanks, mate.

Operator

The next question comes from Wei Sim with Jefferies. Please go ahead.

Wei Sim
Analyst, Jefferies

Hi, guys. Just a bit more of a, I guess, follow-up from Josh's question on that commentary on Q4. I'm wondering if you might be able to give us a sense as to, on a PCP ex-Google basis, how we're tracking.

Ryan Kolln
CEO, Appen

Yeah. So we're not providing the numbers kind of for year-to-date. Is that what you mean, Wei Sim?

Wei Sim
Analyst, Jefferies

Yeah. I mean, just directionally, how we'd be tracking on a, I guess, comparable basis. It's not whether it's up or down, not looking for numbers, but just directionally. To Josh's point, I think the seasonality, people would have expected year-to-date to kind of be down versus Q4.

Ryan Kolln
CEO, Appen

Yeah. Look, I mean, I think the commentary's there. It's down on Q4. We're not providing, yeah, much more visibility other than that at this stage.

Wei Sim
Analyst, Jefferies

Okay. Right. My other question is just regarding you kind of called out DeepSeek and cost of models going down and whatnot. Have you seen any pickup within the China market since DeepSeek? Or I do not know if there is any kind of color or commentary that you would be able to provide around that.

Ryan Kolln
CEO, Appen

The China market's moving very rapidly, and there's a tremendous amount of innovation there. We're working with many of the LLM model builders. We're super optimistic on the outlook for China this year, and we think there's good growth prospects. I think the thing that's unique about Appen is that you get the benefit of both the China market growth and the US market growth. We're super excited about supporting both sets of customers.

Wei Sim
Analyst, Jefferies

Okay. My, I guess, recollection of the China market is we did have quite a bit of that growth previously coming through from the auto manufacturers, maybe doing some of that driving annotation and self-driving and stuff like that. Has that mix changed in any way? Are you able to kind of give us a bit of color as to where you are seeing the growth coming through from China?

Ryan Kolln
CEO, Appen

Yeah. We are seeing the mix change, and the growth is coming from the LLM model builders, for sure. We're super excited about that. We think there's big, big upside there.

Wei Sim
Analyst, Jefferies

Okay. Just in terms of, I guess, our project length right now, obviously, there's some where you're having talks with companies at the start of the year to plan out for the rest of the year. On a kind of project-by-project basis or on an average project basis, are you able to give any senses to typically what length a project is, as in how long it typically lasts for?

Ryan Kolln
CEO, Appen

Yeah. There is pretty good variability. The way I would think about it is for our traditional non-LLM work, the projects are typically much longer in duration, largely in the search and ad relevance space where we have been doing it a long time. You can see that coming through in that chart at the beginning of the presentation, which splits out the LLM and non-LLM work. In the large language model work, because it is really fast-moving and highly experimental, the projects are typically shorter in duration, but they can be very intense: high magnitude over a short duration. That is what explains some of that month-to-month variability that we call out on the LLM work. Large projects in the core are more consistent and long-lasting. The LLM work comes in short sprints but can be very high volume.

Wei Sim
Analyst, Jefferies

Okay. I probably haven't had a chance to look entirely through it, but just in terms of that LLM work, when you're saying that it's shorter, are we talking about a couple of weeks, or are we talking about one or two months? What's the general range?

Ryan Kolln
CEO, Appen

Look, it does vary. Some of the projects are days in duration. Some are months in duration. It really does vary widely.

Wei Sim
Analyst, Jefferies

Okay. No problem at all. That's great. Yeah. I mean, I've got no sense whatsoever, so it's good just to get a bit of color. Okay. That's it from me. Thank you so much.

Ryan Kolln
CEO, Appen

Okay. Thank you.

Operator

Once again, if you wish to ask a question, please press Star 1 on your telephone and wait for your name to be announced. The next question comes from Avi Uger with Pride Investor. Please go ahead.

Avi Uger
Analyst, Pride Investor

Hi, guys. My question, I guess, is more related to the Google contract; could you give some detail around the ending of that contract? Was it mutually beneficial, or was it considered a negative? Is that something you could revisit in the future? Are you looking at other big tech collaborations? Thanks.

Ryan Kolln
CEO, Appen

Yeah. Thank you. Look, Google ended the contract with us, and they did not provide a good rationale for that. We continue to look to grow our customer base across all of the large technology companies. It is a really big focus of ours. That is where a lot of spend in the market is, and that is where our focus is, clearly. There is a lot of value that we bring to the large technology customers. We have been working with many of them for a very long time. We have got strong capabilities and expertise to give them the high-quality data that they need for their models. Absolutely aligned with you. Focusing on the big technology customers is a really important strategy and part of our business.

Avi Uger
Analyst, Pride Investor

Okay. Is there any kind of detail on any, I guess, initiatives currently reaching out to those kinds of names? I do not want to drop any names, but is there anything in the pipeline that we can discuss?

Ryan Kolln
CEO, Appen

Yeah. We're having lots of conversations with these customers. We do not provide a huge amount of detail on the pipeline at this stage. The market is moving very quickly. What is really important for us at the moment is the deep partnership with these customers, bringing our perspectives on what is happening in the market more globally, and really, when the projects arrive, partnering with them to deliver the highest quality data as quickly as we can.

Avi Uger
Analyst, Pride Investor

Yeah. Okay. Thanks.

Operator

The next question comes from Conor O'Prey with Canaccord Genuity. Please go ahead.

Conor O'Prey
Senior Analyst, Canaccord Genuity

Yeah. Morning, gentlemen. I'm just thinking back to an earnings call last week. One of the other operators in the sector guided to 40% revenue growth this year. If we assume that that is in the LLM space, I guess there are probably a couple of questions. Number one, would you see that as the sort of market growth rate for this year, let's say? And then maybe a bit more difficult question, given you haven't put numbers on it, but what would be the puts and takes around Appen matching that sort of growth rate in that part of your business?

Ryan Kolln
CEO, Appen

The market growth rate's pretty challenging to predict at the moment. There's uncertainty in the large language model builders around the visibility and the line of sight they have over an extended period. It's probably not unreasonable. Maybe it's a little bit south of that. Maybe it's a little bit north. There's not a good market indicator, I would say, that exists at the moment. In terms of us reaching that level of growth and the puts and takes there, clearly, expansion with our existing customers and continuing to deliver high-quality work for them at high speed, that's going to be an important factor. Also growing into new customers. It's, yeah, a mix of we need to continue to deliver high-quality work. That will lead to expansion with the customers. That's going to be a very large driver of growth for us.

We are also highly focused on getting into new customers and new areas within existing customers also. These companies are very large, and we've got the opportunity to work with many different divisions.

Conor O'Prey
Senior Analyst, Canaccord Genuity

Thanks.

Operator

There are no further questions at this time, which concludes our Q&A session.
