Good day, ladies and gentlemen. Thank you for standing by. Welcome to the MiniMax 2025 Full Year Financial Results Conference Call. Please note that English simultaneous interpretation will be provided for management's prepared remarks and the Chinese Q&A session. This English line will be in listen-only mode. I will now turn the call over to [Ms. Meredith Yu], Director of IR at MiniMax.
Thank you, operator. Good evening and good morning to everyone. Welcome to MiniMax 2025 full year financial results conference call. Before we start, please note that today's discussion may contain forward-looking statements which involve a number of risks and uncertainties. Actual results and outcomes may differ from those discussed. The company does not undertake any obligation to update any forward-looking information except as required by law.
For important information about this call, including forward-looking statements, please refer to the company's public filings and the 2025 full year results announcement for the year ended December 31st, 2025, issued earlier today. During today's call, management will also discuss certain non-IFRS financial measures. These are provided for additional information and should not replace IFRS-based financial results. For definitions of non-IFRS financial measures, reconciliations of IFRS to non-IFRS financial results, and related risk factors, please refer to our 2025 full year results announcement. For today's call, management will use Chinese as the main language. A third-party interpreter will provide simultaneous English interpretation in both the prepared remarks session and the Q&A session. Please note that the English interpretation is for convenience purposes only. In the case of any discrepancy, management's statements in the original language will prevail. Lastly, unless otherwise stated, all currency units mentioned are in USD.
I will now hand the call over to our founder and CEO of MiniMax, Dr. Yan Junjie.
Dear investors and analysts, good evening. This is Yan Junjie. Thank you all for attending the first earnings call following our IPO. I would like to take the opportunity of today's earnings call to share our progress over the past year and our strategic priorities for the next phase of growth. Let me start with a look back at 2025. For MiniMax, the key theme of this year was solidifying our foundation. In 2025, we built full-modality R&D capabilities, with globally competitive models now in place across key modalities, including language, video, speech, and music. Meanwhile, we continued to upgrade our products through ongoing technological innovation. This includes our enterprise and developer-facing open platform, as well as consumer products such as MiniMax Agent, Hailuo AI, Talkie, and Xingye.
We also made further progress in deepening our global footprint. In large language models, during the fourth quarter of last year we launched three updated models: M2, M2.1, and M2-her. M2 redefined the balance among performance, cost, and speed, and incorporated three key capabilities: coding, tool use, and deep search. Its performance approached the leading global standards. Following its release, M2 saw rapid adoption within the global developer community, becoming the first Chinese model on OpenRouter to exceed 50 billion tokens in daily consumption, while ranking first on the Hugging Face global trending leaderboard for the week. Building on M2, we quickly launched M2.1 with a focus on improving performance on complex real-world tasks, particularly in coding and workplace scenarios, where it demonstrated stronger capabilities in understanding and executing multi-step instructions.
Additionally, M2-her serves as the foundation model supporting our AI interactive products, namely Xingye and Talkie. It is designed to deliver more natural and personalized conversational experiences and ranked first globally in overall performance in 100-turn long-context dialogue testing. In February, we released M2.5, which achieved globally leading performance across key productivity scenarios, including coding, tool use, and workplace applications. In coding, M2.5 set a new industry record on the SWE-bench Verified benchmark while delivering a 37% efficiency improvement compared with the previous generation, M2.1. More importantly, M2.5 makes the operation of complex agents economically viable. Running continuously for one hour at an output speed of 100 tokens per second costs only $1. This means that with a budget of $10,000, an agent can operate continuously for an entire year.
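As an aside for readers checking the math: the cost claim above is simple arithmetic. The short sketch below works through it using only the figures quoted in the remarks (100 output tokens per second, $1 per hour of continuous running, a $10,000 budget); the implied per-million-token price is our own derivation, not a figure quoted by management.

```python
# Agent-cost arithmetic, assuming the figures stated for M2.5:
# 100 output tokens/second, $1 per hour of continuous running.
TOKENS_PER_SECOND = 100
COST_PER_HOUR_USD = 1.0

tokens_per_hour = TOKENS_PER_SECOND * 3600            # 360,000 tokens per hour
cost_per_million_tokens = COST_PER_HOUR_USD / tokens_per_hour * 1_000_000

budget_usd = 10_000
hours_of_operation = budget_usd / COST_PER_HOUR_USD   # 10,000 hours on the budget
years_of_operation = hours_of_operation / (24 * 365)  # ~1.14 years, i.e. a full year

print(f"~${cost_per_million_tokens:.2f} per million output tokens")
print(f"{years_of_operation:.2f} years of continuous operation on ${budget_usd:,}")
```

At these stated rates, $10,000 buys 10,000 hours of continuous agent runtime, slightly more than one calendar year (8,760 hours), which is consistent with the "entire year" claim.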
Breakthroughs in model capability have also driven rapid growth in usage, with M2.5 quickly topping the OpenRouter rankings following its release. From M2 to M2.1 and now M2.5, each generation has delivered significant improvements in both capability and adoption. In February 2026, average daily token consumption across the M2 text model series was more than 6x the level recorded in December 2025, with token consumption from coding plans growing by more than tenfold. On the multimodal front, we have now established model coverage across video, speech, and music. In October last year, we released our video model, Hailuo 2.3, which delivers significant improvements in character motion, visual quality, and stylistic expression. We also introduced a faster variant, the Fast model, which can reduce batch content creation costs by up to 50%.
We further upgraded the Media Agent within Hailuo AI, which supports full-modality content creation to generate the final output in one click. As of the end of 2025, our video models had helped creators worldwide generate more than 600 million videos in total. In October last year, we released our speech model, Speech 2.6, which was optimized for voice agent scenarios and significantly enhanced voice interaction performance. It achieved globally leading ultra-low latency and supports more than 40 languages. As of the end of last year, our speech model had helped users worldwide generate over 200 million hours of speech in total, making it one of the core infrastructure platforms in the voice intelligence ecosystem. Our newly released music models, Music 2.0 and 2.5, also achieved significant advancements.
They can reliably handle a wide range of vocal styles and emotional expressions. In the process of developing these models and products, we have also continuously advanced our AI-native organizational evolution. Internally, our agent interns now support nearly 90% of employees, with use cases spanning software development, data analysis, operations management, talent recruitment, and sales and marketing. We view ourselves as a testing ground for the evolution of AI-native organizational capabilities, one that will steadily improve our R&D efficiency. In January this year, we productized the capabilities we had accumulated and released MiniMax Agent 2.0, enabling agents to directly access users' local workspaces. At the same time, we launched the Expert Agents feature, allowing users to create domain-specific agents tailored to professional use cases.
As of the end of February, professional users had cumulatively created over 50,000 Expert Agents, addressing specialized challenges through deep knowledge and capability integration. Speaking of OpenClaw, which is very popular: even before the OpenClaw project gained broad attention, its founder, Peter, had already spoken highly of MiniMax models, describing M2.1 as his preferred and the best open-source model. Following OpenClaw's official launch, the combined performance and cost advantages of the M2 series enabled more developers to adopt the models at significantly lower cost. Our agent products also proactively supported OpenClaw by launching Max Claw, which further lowered the barrier to entry for users. Next, I would like to talk about our progress on monetization.
For the full year, we generated $79 million in revenue, representing 159% YoY growth. Of this, revenue from AI-native products reached $53 million, up 143% YoY, while revenue from our open platform was around $26 million, up 198% YoY. We are seeing revenue growth continue to accelerate. For example, on our open platform, which serves enterprise customers and individual developers, new user registrations in February 2026 were more than 4x the level recorded in December 2025.
As of December 31st, 2025, we had cumulatively served more than 236 million users across over 200 countries and regions, as well as 214,000 enterprise customers and developers from more than 100 countries and regions. Revenue from international markets accounted for more than 70% of our total revenue in 2025, and international revenue represented over 50% of total revenue for our open platform. Since the release of M2.5, we have seen strong traction in international markets, attracting significant inbound interest from new global customers with positive word-of-mouth continuing to build momentum. Leading global cloud providers and AI native cloud platforms, including Google Vertex AI, Microsoft Azure AI Foundry, Fireworks AI, and Nebula AI have deployed MiniMax models.
We have also become the default model on leading coding platforms such as OpenCode and KiloCode. Early this morning, Notion launched M2.5 as its first and only open-source model option. In addition, while offering the services above, we further enhanced compute efficiency by driving engineering optimizations and delivering meaningful gains, benefiting from iterative improvements in algorithm optimization, operator implementation, and encoding and decoding engineering. As of February 2026, inference compute cost per million tokens for the M2 text model series had declined by over 50% compared with December 2025 levels. Over the same period, inference latency for the Hailuo video generation model decreased by more than 30%. As our model capabilities continue to iterate and improve, the benefits of scale have emerged.
For the full year 2025, gross profit reached $20 million, up 437% YoY, with gross margin improving to 25.4%, up 13 percentage points from 12.2% in 2024. On the expenses side, sales and marketing expenses decreased by 40% YoY, while R&D expenses increased by 33.8% YoY, though significantly below our revenue growth rate. For the full year 2025, adjusted net loss was $250 million. As commercialization continued to advance and model optimization drove cost efficiencies, our adjusted net loss margin narrowed significantly. In the first 2 months of 2026, we have already seen strong growth momentum. As of February 2026, our ARR has exceeded $150 million. Next up, I would like to share our outlook for the future.
We believe that in 2026, intelligence levels will advance significantly. Our own efforts will focus on the following three aspects. First, in software development, we expect to see the emergence of L4 to L5 levels of intelligence, marking a shift from AI as a tool to AI as a colleague-level collaborator. Second, across professional workplaces, we expect a pace of progress similar to what we saw in coding last year. In particular, the deliverable capabilities and penetration of AI agents in workplace scenarios will improve meaningfully. Third, multimodal creation this year will move toward the direct generation of production-ready mid- to long-form content, with formats emerging that are increasingly close to streaming and real-time output. Taken together, these three developments signal new technical challenges, a significant expansion in the supply of intelligence at scale, and a substantial window of innovation at the application layer.
These trends also imply a meaningful increase in the demand placed on our platform, with token volume likely to grow by one to two orders of magnitude. Our next-generation M3 and Hailuo 3 model series are designed with these needs in mind. In parallel, we are rapidly strengthening our infrastructure and continuing to attract top talent, shifting our focus from optimizing training efficiency alone to driving higher R&D and iteration efficiency. At the strategic level, we are evolving from a large model company into a platform company for the AI era. In the internet era, platform companies primarily served as gateways for traffic. In the AI era, however, platform companies are those that define and advance new intelligence paradigms and are able to capture the product and commercial value created by those paradigm shifts.
This requires the ability to shape emerging intelligence frameworks, to sustain innovation in both technology and products, and to provide scalable infrastructure and highly efficient token throughput capability. We believe that we are one of the few companies that have established and continue to strengthen these capabilities. The value of an AI-era platform company can be simply framed as the density of intelligence provided, multiplied by token throughput. When both dimensions are sufficiently strong, platform value naturally emerges. At this historical inflection point for the industry, our confidence is grounded in two factors. First, the acceleration of the AI industry is increasingly evident: breakthroughs in model capability, the deployment of agent applications, and the maturation of monetization models continue to raise the industry ceiling. Second, we are already seeing strong growth momentum ourselves.
We're confident in becoming a core builder of the AI platform ecosystem. That concludes our prepared remarks. We're now ready to take your questions.
We now open the call for questions. Please dial in to the Chinese line if you would like to ask a question. Your first question comes from [Gary Yi] of Morgan Stanley.
Thank you, management. Thank you for your sharing. You aim to become an AI platform company, but so do other AI companies, including OpenAI. How do you define an AI-era platform company? Why do you think a startup like MiniMax can become one? Thank you.
Thank you for your question. This is something we have been discussing and thinking about internally for a long time.
As we mentioned earlier, when the boundary of intelligence is pushed forward, it creates many new scenarios, new customers, and new users, forming a new ecosystem and generating new commercialization dividends. For instance, in coding and in visual or image generation, such companies have already emerged. Why does MiniMax have the opportunity to become a platform company for the AI era? I think there are several reasons. For one, the AI market is not a zero-sum market: the incremental market each year is larger than the existing stock. It is also not a winner-takes-all market. As long as you have unique, differentiated innovation, you have your market fit. We believe that over the next two to three years, our model R&D capabilities and infrastructure capabilities are highly likely to create new scenarios.
There is tremendous innovation market space in areas such as coding, office productivity, and interactive entertainment. In such a fast-growing market, we believe the opportunity lies in three layers. The first is the model layer. The critical element here is our long-term accumulation of model capability and faster iteration. For instance, over the past 180 days, we successively released M2.1 and M2.5, each bringing rapid growth in user numbers and API calls. Since day one, we have been accumulating capabilities across modalities. We are the only company that has adopted this strategy, which positions us advantageously in the inevitable trend of multimodal fusion. The second is the product layer. MiniMax is the first domestic company to focus on both products and models.
The model plus the product forms a stronger barrier to entry; the model as a product is hard for peers to replicate. The third layer is the ecosystem. We have leveraged our differentiated capabilities to create an open system, as OpenClaw illustrates: OpenClaw was developed using many of our models, which are well suited to high-throughput products, and we have integrated further with it. This further reduces the barrier to entry for our users, which is why we are seeing substantial code contributions. We have the ability to help the ecosystem grow at a rapid pace. Looking ahead, this is just the beginning of the ecosystem we are building.
Going forward, we will focus on building our next-generation M3 series of full-modality models, establishing clear model differentiation. On the other hand, we hope to build distinctive products and an ecosystem around the intelligence that we offer. We believe that aside from the major tech incumbents, we are the only company capable of executing on both products and models at the same time, even in Asia. Thank you.
Next question, please.
Next question comes from Alex Yao of J.P. Morgan. Please go ahead.
Thank you, management, for taking the time. Congratulations on the strong results. I want to ask about multimodality, which is something you emphasize as the end game.
If competitors focus on perfecting a single modality first and then switch to cross-modality, might they move faster than you? Could your approach of focusing on cross-modality from the start become a burden for you?
Thank you for the question. This is something we have been asked since the first day the company was founded, so let me take this opportunity to explain why we focus on cross-modality. We believe that the integration of modalities is the fundamental prerequisite for continuously improving intelligence. Over the past six months, several models have validated this trend by achieving breakthroughs through multimodal integration. For example, models like Nano and Nano Pro integrate visual understanding and generation, further expanding the boundaries of intelligence. For us, it is a two-stage approach, and we are now in the second stage.
Over the past four years, which was stage one for us, we steadily built industry-leading models in each modality, creating strong positive influence and market recognition. We offer many models across modalities and have made quite significant achievements in each respective field. Next, the critical thing is to integrate and fuse them to make greater breakthroughs. The M3 model series, coming in the second half of this year, is designed to achieve that goal. Along the way, we want to emphasize that accumulation in each modality is a long-term process. It takes time to go from data to a single modality and then to multimodal integration; the entire chain requires significant time. This is the foundation of our long-term capabilities and what sets us apart.
We are one of only three companies in China that have achieved leadership across every modality. The second point I want to share is that video generation, other than coding and agentic tasks, is the largest market. We believe we will see mid- to long-form content in near real-time formats, and we believe we can achieve that capability; that is a significant opportunity for us. As for whether, as you mentioned, our strategic approach might hinder our R&D development: there are challenges, but they are inevitable. Since our founding, we have held that AGI is inherently about multimodal input and output, and we have built an organizational structure that enables reusing foundational capabilities across modalities.
As you can see, under this AI native organizational architecture, our cost of building full modality is not higher than that of other startups, and is far lower than the investment of large tech companies. Moreover, each individual modality has achieved a competitive model, in some cases, outperforming companies focused solely on a single modality. Our technical judgment and forward-looking positioning have been continuously validated over the past few years and will only become clearer going forward. Thank you.
Next question, please.
Your next question comes from Kenneth Fong of UBS.
Congratulations on the strong results following your IPO. You mentioned that L4 to L5 levels of programming intelligence are approaching, and there are claims that many software companies may be replaced by agents. How should we view this transformation, and what is your position within it?
This is a very important question. Let me first explain what L4 to L5 levels of intelligence mean, the future direction of programming intelligence, and where we sit within the transformation. L3 corresponds to the agents we are using today, while L4 and L5 represent colleague-level and organization-level intelligence, respectively. To give you an example: building world-leading models takes the collaboration of many people, spanning algorithm innovation and experimentation, program optimization, data processing, and technical operations. It is a lot of work. We believe L4 will be able to handle many innovative tasks, for example, conducting experiments based on a research paper and coming up with efficient solutions to many of the challenges raised in it.
L5-level intelligence goes further, replicating not just one person's work but the collaboration of many employees. Coding is only one part of the agent story; it was simply the earliest productivity capability to be validated. Beyond that, we believe office productivity will replicate last year's rapid progress in coding over the next year, and we believe that market is even bigger than coding. With that said, how do we see and position ourselves? We have a huge market in front of us. Coding models allow more people to write code, and to write better code. Still, I would emphasize that coders remain a small portion of the labor market; a much larger portion of workplace work is done in software but involves no code.
Use cases such as data analysis, financial modeling, or the presentation slides used to support a financial results conference like this one represent a far larger market than coding. We have already achieved early progress in coding and agents, securing a unique market position with minimal resources, and penetration into that bigger market is just beginning. For us, we move fast. As I said, the evolution from M2 all the way to M2.5 took only about 100 days. We maintain the fastest iteration speed in the industry, with each generation achieving significant improvements in both capability and usage. That underscores our R&D capability and our ability to handle scale.
We built M2 with limited resources, but our resources are scaling up. I believe that with this, model improvement will accelerate, and better models will raise the ceiling further. Our historical performance was built on the M2 series; we expect the M3 series to unlock even greater potential, creating a positive flywheel effect. Beyond speed, we are able to create differentiated models, which has been repeatedly validated over the past few months. The market is huge and technical paths will diverge, so the question is whether we have the ability to define technical roadmaps. We do not aim to win across every dimension. Instead, we focus on defining model capabilities that showcase our distinct strengths.
The M2, Hailuo 2, and Speech 2 series models each established clear differentiation and gained rapid market traction, characterized by low latency and high cost efficiency. These characteristics set us apart, helping us gain greater market share. As our organization and resources continue to scale, our deep understanding of model evolution and technical roadmaps will further strengthen this differentiation and its value. In summary, we are confident in further increasing our share and achieving more breakthroughs in coding, agents, and the broader productivity market. We hope to make greater breakthroughs and gain greater market share through faster iteration and a stronger differentiated position. Thank you.
Your next question comes from Goldman Sachs.
Thank you for your sharing. We know that in this industry there are tech giants, startups, and open-source models.
Where do you compete, and what are your priorities?
As mentioned earlier, we are building, and hoping to become, an AI-era platform company, driven by the continuous increase in intelligence density combined with scalable commercial growth. Compared with other AI companies, we differentiate in several ways. First, our strategic positioning. From day one, we have focused on full-modality models to increase intelligence density and expand boundaries, creating differentiated value. At the same time, we build scalable products and businesses around model intelligence density, concentrating our resources on areas where we create differentiated value. For example, in 2023 we decided not to build a general mobile assistant like Doubao or ChatGPT, because we did not believe we would create distinctive value in that space.
Instead, we focus on differentiated model R&D and product innovation rather than burning cash. Take our Hailuo and MiniMax Agent products: these are our focuses. This strategic decision reinforces our differentiation and increases our win rate. Another example is our commitment from day one to developing foundation models across modalities. As mentioned earlier, accumulation in each modality is crucial, and we have now reached the key stage of cross-modality integration, which positions us advantageously in the inevitable trend toward full-modality fusion. Second, I want to talk about our R&D efficiency. In the AI era, success is not ultimately determined by how much money or how many resources you have burned, but by the speed at which intelligence improves. That speed comes from R&D efficiency, which in turn translates into greater market share and higher efficiency.
We have been emphasizing this and executing on it. We apply it across every stage of R&D, including algorithm optimization, experiment design, and iteration cycles. We fully leverage our agile organizational structure, combining top-down and bottom-up approaches while reusing experience and infrastructure across modalities. That ensures we maintain our leadership. In the long run, we believe that globally, only a small number of AI platform companies will ultimately lead the industry, and we are one of the few independent companies with both meaningful advantages and clear differentiation to win.
Your next question comes from [Jialong Shi] of CICC.
Hi, management. Congratulations on the strong results. You mentioned that in the first two months of 2026, token consumption for the M2 series was already 6x that of December last year.
Is this explosive growth a one-time dividend or the beginning of a sustainable long-term trend?
We have noticed the explosion in token consumption on OpenClaw, which is why I am asking this question. Do you think this is a one-time phenomenon or the beginning of a long-term trend?
Thank you for your question. We see this as the beginning of a long-term trend rather than a one-time dividend. Of course, growth in this industry tends to follow a step-function pattern rather than a linear one. Our ability to constantly launch new models enables us to capture industry opportunities. A core part of this is our R&D strategy: preparing resources and capabilities in advance, and defining every generation of models based on our understanding of how intelligence evolves.
Beyond the M2 models, the next wave of growth is underpinned by several factors. Starting from the second half of 2025, we have been proactively preparing capabilities to capture the multiple high-impact productivity market opportunities emerging in 2026. We believe the growth will be increasingly diversified. Coding still has significant headroom: today it is already a decent assistant-level tool, and we believe it will continue to improve, evolving from an assistant-level tool toward a colleague-level collaborator and even toward higher-order intelligence as an intelligent operator. Based on our technical reserves, R&D progress, and judgment, we believe this will likely happen this year.
The second point is workplace scenarios, which represent a far larger and broader market than coding. The problems there are more complicated: they involve many professions and a wide array of tools, and many of the tasks performed in these professions cannot be easily verified, which creates challenges. We have been proactively preparing for them. We expect workplace scenarios to see the same rapid pace of progress we have seen in coding. Moving on to the multimodal domain, we believe we will significantly lower the barrier to adoption by producing better models that generate production-ready, longer videos. Model competition involves wins and losses, and every company faces this reality.
No company can guarantee permanent state-of-the-art leadership. However, we are confident in our ability to continue winning in the areas that matter most. A key strategy for us is to push technical boundaries and leverage those breakthroughs to create a bigger ecosystem underpinned by our products and models, and ultimately to capture the resulting dividends. We are confident in growing along with this industry, scaling our model differentiation, R&D efficiency, product innovation capabilities, and global monetization capabilities into enduring organizational competitive advantages.
Your next question comes from Thomas Chong of Jefferies.
Good evening. Thank you, management, for taking my question. You mentioned that internal agent interns now cover nearly 90% of employees.
What insights has this change brought you, and how does it feed back into your product and technology development?
Thank you for your question. We are not only an AI company; we aim to build a truly AI-native platform company. While researching AI models, we want to turn ourselves into an AI-native company. That is a key goal for us as an organization, and there are two things we focus on. Number one is speed, the speed of progress. The fundamental reason we want to be AI-native is that we have limited resources as a startup, so we need to maximize our efficiency in order to survive and succeed. We have been leveraging our AI agent interns, and many of our employees use them in their day-to-day work. We have observed a clear trend.
In many cases, the dynamic is shifting from people teaching agents how to work to people observing how agents work. At times, agents even surprise us. This has not only shortened our organizational workflows but also allowed every stage to benefit from improvements in intelligence. From model iteration and product innovation to customer service, our feedback and iteration loops are accelerating. At the same time, our employees can focus more on higher-value work, further accelerating how we think and innovate as an organization. This also feeds back into our model R&D, because it allows us to define model intelligence objectives. For example, when agents are deployed within the company, we can clearly observe that even the best models today still get things wrong or fail to get things done properly.
These gaps show exactly where the highest economic value lies, and they inform the R&D priorities for the next generation of models and agents. This enables us to define our objectives more clearly: the more we deploy these agents, the clearer the direction we have for model iteration. Over the past few months, our model iteration speed, revenue growth, customer service capability, and token throughput have all improved, allowing us to define new model objectives faster. We are maximizing the value of AI internally within the company. As we set out to build an AI-native company, we are already seeing a positive flywheel inside the organization, and I believe this will become one of our key competitive advantages.
Thank you once again for joining us today.
If you have any further questions, please contact our IR team at any time. Thank you.