SenseTime Group Inc. (HKG:0020)

Earnings Call: H2 2023

Mar 26, 2024

Jessie Lin
Joint Company Secretary, SenseTime Group

Good evening, everyone, and welcome to SenseTime Group's 2023 annual results presentation. I'm Jessie Lin, the Joint Company Secretary and today's emcee. Let me introduce the management representatives joining us today: Dr. Xu Li, Chairman and CEO of SenseTime Group; Mr. Xu Bing, Co-founder and Executive Director; and Mr. Wang Zheng, CFO. First of all, let me read the disclaimer. Today's discussions may contain forward-looking statements. Forward-looking statements involve inherent risks and uncertainties that may cause actual results to differ materially from current expectations.

For a detailed discussion of such risks and uncertainties, please refer to the latest filings of SenseTime Group to the Hong Kong Stock Exchange. All forward-looking statements provided during this conference call are based on assumptions that we believe to be reasonable as of today. SenseTime Group undertakes no obligation to update such statements unless required by applicable law.

This discussion also contains certain non-IFRS financial measures, which are provided for comparison purposes only. In addition, during today's conference, management will deliver their prepared speeches in Chinese. An interpreter will provide simultaneous English interpretation through the English conference line. Please refer to our results announcement for details. In case of any discrepancies between the original speech and the interpretation, management's statements in their original language shall prevail.

Next, I will invite Dr. Xu Li to first give everyone an overview of SenseTime's latest developments. Then, Mr. Xu Bing will discuss our business review in detail. Lastly, Mr. Wang Zheng will review the company's financial performance. After the management's presentation, we will have a brief Q&A session. Now, please welcome Dr. Xu Li.

Dr. Xu Li
Chairman and CEO, SenseTime Group

Thank you, Jessie. Thank you all for attending our full-year results briefing.

This is the first time we are communicating with the market since the passing of our beloved founder, Professor Tang Xiao'ou, last year. Professor Tang was not only our founder but also our mentor and friend. For over 20 years, he dedicated himself to promoting original Chinese innovation and nurturing outstanding talents in AI. He established platforms such as the Multimedia Lab at The Chinese University of Hong Kong, SenseTime, and the Shanghai AI Lab, and trained thousands of distinguished AI scholars. Professor Tang had a keen insight into the overall development of AI. He led one of China's earliest teams to fully invest in deep learning. When the management team proposed investing billions of RMB to build our AI data center in 2019, Professor Tang provided his unwavering support for this long-term investment. Subsequent developments have proven the foresight of that decision.

Here, I wish to express gratitude for the heartfelt condolences and support following Professor Tang's passing, and to his family for their steadfast trust in the management. Our management team has promised to work tirelessly to continue Professor Tang's legacy, driving innovation and the industrial transformation of AI in China. Currently, we are witnessing the most profound technological breakthroughs in the field of AI. With rapid developments in large models represented by ChatGPT and the emergence of Sora this year, the scaling law has been reaffirmed: by expanding model size, data volume, and training compute, we can achieve unprecedented AI performance.

We've also seen massive investments in AI computing power by global tech giants. In 2023, SenseTime underwent a business restructuring to focus on the Generative AI business, which entails model training, fine-tuning, and inference services for generative AI. This segment experienced significant revenue growth, achieving revenue of RMB 1.18 billion for the year.

This is the fastest-growing new business segment in the company's 10-year history. Our two main development focuses were AI infrastructure and large models. Our SenseCore AI infrastructure achieved a remarkable breakthrough, reaching a computing milestone of 12,000 petaflops with 45,000 GPUs in operation. Our SenseNova large model improved rapidly, achieving leading training capabilities and reaching the national forefront in foundation models, multimodality, coding, function calling, million-token lossless context, and small models for end devices. Our new Model-as-a-Service business model also resulted in numerous new customer collaborations. We are fortunate to be among the first eight large-model providers approved in China to serve a broad range of public and enterprise customers.

The total annual revenue for the group in 2023 was RMB 3.4 billion, a decrease of 11%, mainly due to our strategic reduction in reliance on the Smart City business, which dropped from about 30% of revenue in 2022 to less than 10% in 2023. Our efforts to improve operational efficiency paid off. We collected RMB 3.8 billion of trade receivables in 2023, a year-on-year increase of 48.5%. We also streamlined our workforce, reducing employee numbers by 11% compared to 2022. Operating expenses in R&D, G&A, and sales decreased by 10.6% year-on-year, with SBC included. EBITDA losses narrowed to RMB 5.5 billion. We will continue optimizing operational efficiency, employing strategies including incubating and spinning off businesses with strong industry attributes.

We will continue to focus our resources on the rapidly growing Generative AI business to optimize cash flow and reduce losses. Our decade-long experience has provided us with four key competitive advantages in the AI 2.0 era. Firstly, our experience in perception and decision intelligence, coupled with a substantial repository of multimodal data, has bolstered our foundation models' understanding of the physical world and their ability to handle multiple modalities. Secondly, supported by tens of thousands of GPUs and guided by the scaling law, our large models continue to improve in performance.

Additionally, the profound understanding gained from large model research and development empowers the company to design AI infrastructure with a forward-looking approach, resulting in industry-leading computational efficiency and scalability. Thirdly, we have extensive experience in smart device deployment.

Since 2015, we have been collaborating with global Android smartphone manufacturers and automobile companies, empowering over 2 billion smartphones and millions of cars while optimizing on-device inference. The company has successfully launched edge-side small models that lead the industry in performance and inference speed, unlocking new application scenarios. Last but not least is our proven ability in AI deployment. SenseTime has a robust system and the necessary capabilities to serve enterprise customers. This empowers us to promptly address customer needs and to provide highly cost-effective generative AI solutions.

Following the business restructuring, SenseTime has reidentified our three core business segments: Generative AI, Traditional AI, and Smart Auto. The Generative AI business encompasses model training, fine-tuning, and inference services. The non-generative AI portions of Smart City, Smart Business, and Smart Life fall under the Traditional AI business, while the Smart Auto business remains unchanged.

Under this new structure, the group has placed a strong focus on developing the Generative AI business while also ensuring that the Traditional AI and Smart Auto business segments remain competitive. In terms of revenue, the Generative AI business achieved impressive growth last year, reaching RMB 1.18 billion, a 200% year-over-year increase, and accounting for 35% of the group's total revenue. The Traditional AI business generated revenue of RMB 1.84 billion, a decrease of 41%, with the revenue share of the Smart City segment declining significantly, reflecting a drop in the group's reliance on the contribution from this business segment.

Meanwhile, our Smart Auto business continued to exhibit steady growth, generating RMB 384 million in revenue last year, a 31% increase, and accounting for 11% of total revenue. The Generative AI business has become more than just a revolutionary technological innovation.

It has replaced Smart City and emerged as our core business. This business is expected to continue experiencing rapid growth in 2024. The growth of SenseTime's Generative AI business is attributed to the widespread demand for large model training and inference across various industries. This signifies the beginning of a new cycle of investment in China's deep technology sector. By deeply integrating generative AI capabilities across its various business sectors, SenseTime is attracting new customers and driving comprehensive improvements in efficiency and productivity. Next, we will provide a detailed overview of the three business segments.

First and foremost, our Generative AI business has been the fastest-growing new business segment for SenseTime in our 10-year history, surpassing revenue of over RMB 1 billion. We offer industry-leading AI infrastructure and model services to meet the rising demand for model training, fine-tuning, and inference.

In the past 12 months, over 70% of our customers in this business segment were new to SenseTime. Existing customers also saw a substantial increase of about 50% in average revenue per user. We have received numerous orders exceeding RMB 10 million, and model usage grew approximately 120-fold. Services include public cloud and private deployment.

Our client base represents diverse sectors, including telecom operators, banks, and security firms, leading internet companies such as JD.com, Xiaomi, and China Literature, AI startups such as HiDream.ai and Langboat Technology, as well as academic institutions such as Tsinghua University, Shanghai Jiao Tong University, and Nanyang Technological University in Singapore. In the 2023 China AI Development Platform Market Report released by Frost & Sullivan, SenseTime ranked first in the country in the comprehensive score of Growth Index and Innovation Index.

The results demonstrate recognition of our innovation capabilities and market expansion speed in generative AI. Our SenseCore AI infrastructure is at the forefront of large model training in China. SenseTime's AIDC, located in Shanghai Lingang, has served as a benchmark for the construction of intelligent computing centers in China. In 2023, SenseCore enabled the production of trillion-parameter large models. We have achieved unified scheduling of computing power, managing nationwide resources and expanding our computing nodes in cities such as Shanghai, Shenzhen, Guangzhou, Fuzhou, Jinan, and Chongqing. This has resulted in a total operational computing power of 12,000 petaflops with 45,000 GPUs in operation.

Looking ahead to 2024, we anticipate further expansion of our national computing nodes, accelerating the country's new productive forces. For large model training services, the scalability of GPU interconnection, acceleration efficiency, and infrastructure stability are the three most critical metrics in the industry.

We have achieved 90% acceleration efficiency on clusters with 10,000 GPUs, leading the industry in this aspect. In terms of training stability, we have achieved uninterrupted training for over 30 days with diagnostic recovery time optimized to half an hour. In the realm of large model inference services, we have achieved industry-leading optimization for ultimate inference performance. Within a year, we have tripled the cost-effectiveness of our inference services, enabling us to provide customers with the most cost-effective and scalable elastic inference services.
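The 90% figure above is a linear-scaling efficiency metric: achieved cluster throughput divided by the throughput that ideal linear scaling from a single GPU would predict. A minimal sketch, using purely illustrative numbers (per-GPU throughput is not disclosed in this presentation):

```python
def acceleration_efficiency(throughput_n_gpus: float,
                            throughput_1_gpu: float,
                            n_gpus: int) -> float:
    """Ratio of achieved speedup to ideal linear speedup.

    A value of 0.90 on a 10,000-GPU cluster means the cluster delivers
    90% of the throughput that perfect linear scaling would predict.
    """
    ideal = throughput_1_gpu * n_gpus
    return throughput_n_gpus / ideal

# Hypothetical numbers: one GPU trains 1,000 tokens/s, so ideal
# throughput on 10,000 GPUs would be 10,000,000 tokens/s.
eff = acceleration_efficiency(9_000_000, 1_000, 10_000)
print(f"{eff:.0%}")  # 90%
```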

Over the past year, there has been a more diverse selection of training and inference chips used by customers on the SenseCore AI infrastructure. This year, we added support for domestic chips such as Huawei Ascend and Cambricon, enabling us to utilize a fully domestic technology stack to support training, fine-tuning, and inference of large models.

The key to achieving these milestones is our jointly developed DeepLink platform, a leading domestic open computing platform in China. It serves as a bridge between domestic hardware and mainstream deep learning algorithm frameworks. DeepLink provides over 300 standardized operator interfaces, effectively covering 99.5% of AI computational needs supported by CUDA. With DeepLink, domestic chips can seamlessly integrate with mainstream large model training frameworks and algorithm libraries, including PyTorch, DeepSpeed, and other commonly used open-source training frameworks. It extends compatibility to our OpenMMLab, OpenDILab, and OpenGVLab, maximizing the performance of domestic chips.
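Conceptually, a platform like DeepLink decouples the framework-facing operator interface from vendor-specific kernels: frameworks call a standardized op name, and each chip backend registers its own implementation behind it. The sketch below is a hypothetical illustration of that registration-and-dispatch pattern; the names and API are invented for illustration and are not DeepLink's actual interface.

```python
# Hypothetical registry mapping (op name, backend) -> kernel implementation.
OP_REGISTRY: dict = {}

def register_op(op_name: str, backend: str):
    """Register a backend-specific kernel under a standardized op name."""
    def decorator(fn):
        OP_REGISTRY[(op_name, backend)] = fn
        return fn
    return decorator

def dispatch(op_name: str, backend: str, *args):
    """Route a framework-level op call to the chosen hardware backend."""
    try:
        kernel = OP_REGISTRY[(op_name, backend)]
    except KeyError:
        raise NotImplementedError(f"{op_name} not available on {backend}")
    return kernel(*args)

@register_op("matmul", "ascend")
def matmul_ascend(a, b):
    # A real backend would call the vendor's compute library;
    # this is a pure-Python stand-in.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

print(dispatch("matmul", "ascend", [[1, 2]], [[3], [4]]))  # [[11]]
```

The point of the indirection is that a training framework only ever sees the standardized interface, so swapping the underlying chip only requires registering a new backend, not changing model code.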

Now, let's explore the rapid iteration of SenseTime's large models. Guided by the scaling law, large models continue to improve in performance. Since its release in 2023, our SenseNova large model has undergone major upgrades every three months, establishing a leading position in the industry with each iteration and showcasing our profound technical expertise.

In April 2023, we launched SenseNova 1.0, which marked SenseTime's first release of a large language model. It encompassed a comprehensive suite of generative AI models, including text-to-image and 3D content generation. In July and August, we rapidly iterated with SenseNova 2.0 and 3.0, significantly improving natural language processing capabilities by enhancing the quality of the training data. SenseNova became the first foundation model in China to surpass the performance of GPT-3.5 Turbo. SenseNova was also among the first batch of approved large models in China.

Earlier this year, we released SenseNova 4.0, which achieved performance parity with GPT-4 in various scenarios such as code writing, data analysis, and medical Q&A. We also open-sourced 7B and 20B parameter models. I'm delighted to take this opportunity to announce that our upcoming SenseNova 5.0 is scheduled to be launched in April 2024.

This version will fully meet the standard of GPT-4 Turbo, support lossless context of millions of tokens, and offer multimodal capabilities on par with GPT-4V. We are eagerly looking forward to achieving this milestone and will ensure that our clients can benefit from its capabilities as soon as possible. The following are demonstrations of the capabilities of the SenseNova foundation model. In the current SenseNova 4.0 commercial version, the coding, reasoning, and language capabilities have all reached industry-leading levels. We have also made significant efforts to enhance function calling capabilities. We have been actively building an open-source community and nurturing a developer ecosystem.

Our open-source platforms such as OpenMMLab have received over 100,000 stars on GitHub. We have open-sourced 7B and 20B base models, which have since been established as leading open-source models, outperforming Meta's Llama 2 and Google's Gemma.

This is a recent leaderboard on Hugging Face showing our number one position among same-size 7B models domestically and internationally. We've also built industry-leading on-device models, including language models and text-to-image models, offering performance comparable to larger models while being small enough to run offline on laptops or mobile devices. Currently, we provide 7B and 1.8B on-device models with industry-leading inference speed and performance. Last year, Qualcomm and MediaTek showcased our on-device generative AI capabilities at their new chip launches. Our 7B model achieved an industry-leading inference speed of 16 tokens per second on Qualcomm's latest chips.

We believe 2024 will be a breakout year for on-device large models. The interconnection between on-device models and cloud-based models can significantly reduce the cost of cloud inference and enhance user experience.

Large models can also invoke various applications on smartphones to complete complex tasks, realizing the true potential of an intelligent assistant. SenseTime has collaborated with brands like Xiaomi and Honor to jointly develop these innovative features. The demo on the right, co-developed with our clients, demonstrates the precise and smooth operation of on-device models on smartphones. The user can simply use voice commands to carry out tasks without manual intervention, such as putting together a dining-out schedule, checking for weather updates, trip planning based on traffic conditions, and making restaurant reservations.

Our large models have demonstrated exceptional proficiency in processing complicated spreadsheets with our inference capabilities and have been widely adopted by leading clients such as Kingsoft Office. In the SuperCLUE Code Evaluation, our models topped the leaderboard, outperforming GPT-4 in accuracy across thousands of data analysis test questions.

Our AI programming assistant, Coding Copilot, and AI data analysis tool, Office Copilot, are available for free trials for 2C customers. They significantly enhance programming efficiency and enable users to input various documents for data analysis and visual representation, thereby greatly boosting Office productivity. Based on our vision perception capability, our multimodal model can process complicated tasks. Here are some examples of the multimodal capabilities. For example, when uploading this picture of a watch, our model can correctly identify the time displayed and recognize that it is a Rolex watch. Next, here is a multiple-choice question on optics. Our model can accurately identify the English question and provide the correct answer in Chinese.

Even with internet memes, it can describe the details in the image and interpret its meaning, which in this case is a humorous portrayal of a scene where the parents repeatedly call their children for dinner.

Alongside rapid iterations of large language models, we've developed the best text-to-image model in China, SenseMirage. Since its initial launch in January 2023, we have updated it four times, elevating it to a model with 100 billion parameters. Furthermore, we have incorporated tenfold inference acceleration optimizations, making SenseMirage 4.0 the most user-friendly domestic text-to-image product. SenseMirage 4.0 excels in swiftly generating accurate and eye-catching image outputs. Through two evaluation cases, we can gain a comprehensive understanding of SenseMirage's capabilities compared to those of leading manufacturers in the text-to-image model market.

In the elephant image example above, a competitor's model mistakenly generated an elephant with five legs, a noticeable error. In contrast, SenseMirage's rendering showcases more natural lighting and finer skin texture details, resulting in an image that is closer to realistic photography.

In the architectural image shown below, SenseMirage not only reproduced the precise details of the building but also seamlessly blended it with the surrounding natural landscape, creating a vibrant and realistic scene that closely aligns with reality. The outputs from competitors one and two failed to fully match the textual description, particularly in the accurate representation of the building's façade. These examples demonstrate SenseMirage's superiority in understanding users' needs, mastering stylistic forms, and depicting details, significantly outperforming its competitors. Building upon the strengths of our language models and text-to-image capabilities, we're developing a text-to-video large model.

This demo showcases the current capabilities of our text-to-video large model, which can generate detailed narrative-driven videos with multi-shot and 4K quality. We plan to launch a text-to-video large model product in 2024, aiming to establish it as the leading text-to-video product in China.

Next, I'll hand over to Xu Bing, who will continue with an overview of our business development.

Xu Bing
Co-founder and Executive Director, SenseTime Group

Thank you, Xu Li. Let me share more on our Generative AI business practice. Our SenseNova large model has proven its value and established a leading position in various industries, with a strong emphasis on five key areas: finance, healthcare, enterprise Copilot, character interaction, and mobile devices. In the financial industry, we have effectively addressed the challenge of hallucination, which is commonly associated with large models. By leveraging RAG technology, we map financial data across large databases to enrich the model's financial knowledge base. This enables accurate responses to professional financial queries. Our clients in this sector include banks such as Bank of China, China Merchants Bank, and Bank of Shanghai. In the healthcare field, we have harnessed our extensive medical knowledge and imaging data to develop the multimodal SenseChat-DaYi model.
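The retrieval-augmented generation (RAG) approach mentioned for financial Q&A can be sketched as a retrieve-then-prompt pipeline: relevant passages are pulled from a knowledge base and prepended to the prompt so the model answers from grounded context rather than hallucinating. This is a minimal illustration with toy keyword scoring and invented data; production systems use embedding-based retrieval over a real financial knowledge base.

```python
def retrieve(query: str, knowledge_base: list, k: int = 2) -> list:
    """Rank passages by naive keyword overlap; real systems use embeddings."""
    q_terms = set(query.lower().split())
    scored = sorted(knowledge_base,
                    key=lambda p: len(q_terms & set(p.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query: str, knowledge_base: list) -> str:
    """Prepend retrieved passages so the model answers from grounded context."""
    context = "\n".join(retrieve(query, knowledge_base))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Toy knowledge base; entries are invented for illustration.
kb = [
    "The 2023 net interest margin was 1.8 percent.",
    "Tier 1 capital ratio stood at 13.2 percent at year end.",
    "The bank opened 40 new branches in 2023.",
]
print(build_prompt("What was the net interest margin in 2023?", kb))
```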

The model excels in professional medical Q&A and exhibits proficiency in recognizing various medical images, including CT, MRI, and pathology samples. Notably, SenseChat-DaYi ranked second in the MedBench Pharmacist exam, second only to ChatGPT. For instance, when a user uploads a photo of a medicine box, the model can identify the drug's name and provide guidance on its usage, possible side effects, and necessary precautions. It has already been implemented across our hospital clients. Another application of large models is character interaction, or role-playing, where a user can have conversations with virtual characters like Elon Musk or Doraemon.

In this field, our models have achieved a next-day user retention rate of over 60% in character AI applications. Our character chat application, powered by large models, has emerged as a market leader in China, showcasing high levels of user engagement.

The application interface shown here is the AI ChatBuddy feature, which we developed in collaboration with Mobile QQ. Moving on to our Traditional AI business, we underwent a business reorganization in 2023, consolidating the non-generative AI portions of Smart City, Smart Business, and Smart Life into the Traditional AI segment. As a result of the remarkable growth of our Generative AI business and our strategic move to scale back Smart City operations, the Traditional AI segment's contribution to our total revenue decreased from 82% in 2022 to 54% in 2023. This decrease was primarily driven by a significant reduction in our reliance on the Smart City business, whose revenue contribution fell from 29% to less than 10%. Throughout the past year, our primary focus within the Traditional AI business was to improve the quality of cash flow.

We have proactively leveraged our expertise in various industry domains and customer resources to drive the expansion of our Generative AI business. Now, let's take a look at our SenseAuto business. In 2023, SenseAuto achieved a total revenue of RMB 380 million, marking a year-on-year growth of 31%. Both SenseAuto Pilot and SenseAuto Cabin contributed to this positive performance. Throughout the year, SenseAuto Solutions were successfully implemented in 1.29 million vehicles, reflecting a significant year-on-year increase of 163%. Furthermore, the gross profit per vehicle for SenseAuto Cabin grew 30%.

To date, SenseAuto Solutions have been deployed in a total of 1.95 million smart vehicles with more than 90 car models. Additionally, we received confirmation letters as a designated supplier for over 16 million additional vehicles and expanded to 41 new car models. In 2023, SenseAuto fully leveraged the capability of our AI infrastructure and large models.

Our infrastructure, model, and application layers were specifically tailored to the needs of car makers, realizing mass production. We maintained close collaboration with automakers to develop AGI capabilities on cars. For autonomous driving solutions in 2023, we achieved a significant milestone by successfully delivering high-speed Navigate on Autopilot (NOA) functions to flagship models of GAC AION and Hozon New Energy, receiving widespread acclaim from both customers and industry experts. Building upon this success, we plan to mass-produce urban NOA functions in 2024. Through our technology path of combining large computing power with large models, our autonomous driving R&D has yielded impressive results.

Notably, our UniAD algorithm was awarded the 2023 CVPR Best Paper Award. This algorithm has demonstrated superior performance compared to traditional autonomous driving algorithms.

In the second half of the year, we introduced DriveMLM, surpassing other industry algorithms in autonomous driving planning and control. This breakthrough has led to a 25% reduction in manual interventions per mile. This significant algorithmic advancement is now being integrated into the mass production pipeline. For SenseAuto Cabin, we have also made remarkable progress over the past year. Not only have we continued to mass-produce existing SenseAuto Cabin features, but we have also made a significant step forward in integrating generative AI functions into mass production. Based on our large language models and text-to-image models, we have introduced 11 groundbreaking SenseAuto Cabin features that combine utility and entertainment.

These include health diagnosis, travel planning, and role-playing. Currently, we are collaborating with several leading domestic and international car makers to test and integrate our new features into their car models.

We anticipate that large language models will be applied to more mass-produced car models at a faster pace than previously expected. This accelerated integration of technology will provide users with an unparalleled SenseAuto Cabin experience. Looking ahead to 2024, we have set three strategic goals. Firstly, maintaining leading-edge technology. We will continue to capitalize on the strong synergy between AI infrastructure and large models, expanding the scale of our computing power and strengthening the integrated service capability of AI infrastructure and large models. We will also continue to iterate on the capabilities of the SenseNova large models to maintain their leading position in the industry.

Additionally, with innovation strategies like cloud-device collaboration, we can substantially reduce inference costs, broadening the scope of future applications. Secondly, driving business growth. We will accelerate commercialization and market penetration of generative AI and provide customers with generative AI solutions of the best value.

At the same time, we will maintain the steady growth of Traditional AI businesses and improve revenue quality. Lastly, promoting profitability of the core business. We will continue to optimize overall operational efficiency by incubating and spinning off non-core businesses, among other strategies, and by focusing our resources on Generative AI to improve cash flow and reduce losses. With the strong synergy between AI infrastructure and large models, our cloud-device collaboration capability, and rich commercialization experience across scenarios, we are confident about the future. Although we face many challenges during this transformation period, we are determined to adopt a steady and forward-looking strategy to ensure that we can secure a leading position amid the competition in the AI 2.0 era and achieve long-term leapfrog development.

Finally, we warmly invite everyone to attend our upcoming Technology Day in April, where you can experience the rapidly evolving SenseNova large model version 5.0, along with several new generative AI products that will be launched alongside it. Next, I will hand over to our CFO, Wang Zheng, who will provide a detailed overview of our financials.

Wang Zheng
CFO, SenseTime Group

Hello, everyone. I'm pleased to share our financial performance for 2023. SenseTime experienced a revenue decline of 10.6% in 2023, a challenging year of transformation that also presented great potential opportunities.

Years of large model technology development and advanced AI computing infrastructure took us from generative AI R&D to commercialization in 2023. Generative AI revenue grew to close to RMB 1.2 billion, tripling from 2022 and accounting for 35% of total revenue. We will further allocate our resources to this new core business, which led us to reclassify our revenue stream breakdown.

The only revenue stream that remained unchanged is Smart Auto, which achieved 31% annual growth. We strategically transformed our Traditional AI business, proactively reducing our Smart City-related business and focusing on optimizing cash flow and revenue quality. The decline of Traditional AI, especially the Smart City-related businesses, which represented a large percentage of total revenue in the past, was the main driver of the group's overall negative revenue growth for the year. SenseTime's gross margin was 44.1% in 2023, slightly down from 45.3% in the first half of the year and meaningfully lower than 2022's gross margin of 66%. This was primarily due to higher hardware and AIDC-related costs as a percentage of revenue in 2023. As mentioned before, this trend is usually driven by changing customer demand, which can vary from period to period and be difficult to predict.

In 2023, we have significantly reduced various controllable operating costs, allowing us to narrow EBITDA loss levels even with a decrease in gross margin. As we expand self-built computing capacity, the absolute value and proportion of depreciation in our cost structure are also gradually rising. This partially explains why our gross loss is still increasing by 6.6% compared to 2022. As mentioned, we significantly enhanced cost control in operating expenses. We have seen an overall reduction of 10.6% from 2022. For the first time, all three operating line items declined year-over-year. R&D expenses, a large part of operating expenses, dropped by 13.7%.

As AI is inherently a cost-saving tool, we see significant potential improvement in our own R&D and commercialization efficiency in the era of generative AI. There were impressive achievements in working capital efficiency in 2023, though we still face challenges.

The left side of this chart shows the key turnover days, calculated using the period-end method. The cash conversion cycle decreased significantly by year-end 2023 after consecutive increases in previous years, mainly driven by a substantial reduction of about 100 days in trade receivables turnover days over the past year. We continue to focus on receivable collection, and our revenue quality is improving with more focus on Generative AI, reflected in a record high RMB 3.9 billion of total collected trade receivables in 2023, surpassing our total revenue, a 48.5% increase from the previous year. The biggest challenge in receivable collection lies in receivables related to Smart City, where collection was lower than expected, leading to a notable increase in the impairment provision ratio on the balance sheet and a slight rise in net financial asset impairment losses in the income statement.
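For reference, the turnover-day and cash-conversion-cycle figures discussed here follow the standard working-capital formulas. The sketch below uses purely illustrative balances, not SenseTime's actual figures.

```python
def turnover_days(balance: float, annual_flow: float, days: int = 365) -> float:
    """Period-end method: e.g. receivable days = receivables / revenue * 365."""
    return balance / annual_flow * days

def cash_conversion_cycle(dso: float, dio: float, dpo: float) -> float:
    """CCC = DSO + DIO - DPO: days from paying suppliers to collecting cash."""
    return dso + dio - dpo

# Illustrative balances (RMB millions); flows are annual revenue / COGS.
dso = turnover_days(balance=1_700, annual_flow=3_400)  # receivables vs revenue
dio = turnover_days(balance=190, annual_flow=1_900)    # inventory vs COGS
dpo = turnover_days(balance=380, annual_flow=1_900)    # payables vs COGS
print(cash_conversion_cycle(dso, dio, dpo))  # 146.0 (days)
```

A large drop in receivable turnover days, as reported above, flows directly into a shorter cash conversion cycle even when the inventory and payable legs are unchanged.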

Our CapEx in 2023 remained focused on large-scale computing power expansion. Thanks to our previous investment in computing power and collaboration with strategic partners, we could access even more computing power with limited CapEx from us. Importantly, technology accumulation in AIDC and experience in operating have continuously enhanced the scale and efficiency of our GPU clusters, maintaining high CapEx efficiency even as we capture explosive growth opportunities from large models. Our total CapEx in 2023 decreased by 71% from 2022. Even excluding the one-off purchase of office buildings, there was a 20% decrease year-over-year.

A good indicator of a company's overall cash strength is its total cash reserve plus the balance of bond investments. Our bond investments are low-risk, managed by third-party professional institutions, and can be quickly converted into cash.

Under this definition, our cash-equivalent assets exceeded RMB 13 billion by the end of 2023. In addition, our total bank credit lines were also close to RMB 13 billion, with about RMB 3.8 billion remaining unused. With continued control over operating and investment cash flows, we have ample resources to seize the exciting long-term opportunities in generative AI. I will conclude the financial section here and pass it back to our moderator. Thank you.

Operator

Thank you, management, for the presentation. I believe by now you have a better understanding of SenseTime's latest developments. I will now open the floor for Q&A. We will first invite our dial-in participants to ask a few questions, followed by our online webcast participants if time allows. Please type your questions into the Q&A window on your device. The first question comes from Brenda Zhao of CICC.

Good day to the management. I'm Brenda from CICC. Thank you for taking my questions. I'm particularly interested in SenseTime's exploration of the multimodal domain, and I have three questions. First, I would like to ask about the direction of SenseTime's current academic exploration at the forefront of multimodal research. Second, I'm curious about the overall pace of development of large multimodal models, as well as the current state of engineering capabilities.

Third, how does the company address the scarcity of data in the multimodal domain, and what is its perspective on the future adoption of synthetic data in multimodal technology? Thank you.

Speaker 5

Thank you for your questions. These questions are more academic and future-oriented. Multimodal research must build on historical technology accumulation, and we have invested substantial resources in perception and decision-making, which can serve as core tools for large-scale data labeling and data generation. By utilizing a composite architecture, we retain detailed effects without increasing the additional resources required. Regarding synthetic data, our past accumulation in the 3D field, whether in neural rendering or 3D Gaussians, can contribute significantly to the generation of synthetic data during this wave of generative advancements, in addition to other rendering techniques. Third, we predict that synthetic data will be an important data solution.

It's not feasible for industry data collection to cover low-frequency scenarios; this type of data will have to come from synthetic sources. Furthermore, our past capabilities in understanding will develop synergy with our generation capabilities. Regarding the pace of development, I see it as more a question of resource allocation. The emergence of Sora signals that a vast amount of computing power will be devoted to industry applications. SenseTime has long experience in the video and image domains. How to achieve the same or even better results with fewer resources is also a direction we have been exploring. By leveraging our engineering and algorithmic advantages to compensate for the gap in computing power, we have already achieved quite significant results.

Operator

The second question comes from Yong Lin of Haitong Securities.

Yong Lin
Deputy General Manager, Haitong Securities

Thank you, management. I would like to ask about the computing power, which has always been your strength.

What are the expansion plans for computing power for 2024 or beyond? Thank you.

Speaker 5

Computing power is an extremely scarce resource. Last year, when we served a large number of domestic customers, almost all of them came to us every quarter to rent more computing power and subscribe to services for model training and inference. This is the original driving force behind the rapid growth of our generative AI business. In 2024, we will continue to capitalize on our ability to expand computing power. Our overall scale of computing power is expected to double, roughly the same growth we saw from the beginning to the end of last year. However, that growth was actually less than we had planned: last year, whether in China or the U.S., computing power was extremely scarce.

By the end of the year, the parameter sizes of models had grown larger, and their multimodal capabilities had become stronger. During the Chinese New Year, new models like Sora emerged. Both training and inference are seeing exponential increases in computing power demand. The industry estimates that demand for computing power will expand by an order of magnitude every year. We predict that for the next one to two years, global demand for computing power will outstrip supply. The challenge in China will be greater, but this will also provide fertile ground for the development of domestic chips. Domestic chips are becoming increasingly efficient, not just in terms of chip production, but more importantly in how the chips are operated, how the software on them is configured, and how high efficiency in training and inference is achieved.

These are all major trends that industry participants are working on together, and they present a very good opportunity for domestic chips. The use of chips will also diversify. Last year, most of the demand came from training. Overall, in terms of large models, China is about one to two years behind the U.S. At this point, demand for training is indeed stronger in China, while the U.S. has already entered an inference phase. NVIDIA also mentioned that about 40% of the GPUs they shipped are now being used for inference. We predict that this year in China, the growth rate of inference will exceed that of training. By the end of the year, inference is expected to account for 20%-30% of total computing power consumption.

This differentiation between inference and training also provides development opportunities for diversified chip designs. We know that the cost of inference for multimodal models is several times higher than that of traditional language models, which will also offer many opportunities for optimized chip design. Ultimately, chips form part of a nationwide network, and the shortage of chip supply has created opportunities in the chip scheduling market. Last year, chip hardware costs were relatively high and prices kept rising. This year, how to schedule chips will be very important, placing a premium on operational management capabilities, on-chip software capabilities, and inference acceleration capabilities. These skills will become more important, and customers' willingness to pay for them will also increase.

Our SenseCore capabilities in both hardware and software will be better leveraged in the market to meet the growing customer demands for training and inference.

Operator

The third question comes from Helen Fang of HSBC.

Thank you, management. I'm Helen from HSBC. I would like to ask about the generative AI business. First, what are the current business models for generative AI? In the medium to long term, what is your growth expectation, and what proportion of the company's total revenue will this business contribute? Additionally, will generative AI increase our gross margin, or might it drag on it?

Speaker 5

Thank you for your questions, Ms. Fang. Let's start with the business models. Our approach to generative AI is currently based on the synergy between SenseCore and large models. There are mainly three business models: public cloud, private cloud, and model customization services. Public cloud services are well understood.

Models are hosted on the public cloud and accessed via standard APIs. These can be foundation models or vertical models tailored to different industries or products. Clients can be charged based on traffic, for example by the number of input and output tokens, or by the amount of computing power resources reserved for them. Private cloud deployment is primarily for clients with stringent data security requirements, such as banks, securities houses, insurance companies, and hospitals. The specific services vary according to customers' needs, and we usually provide them with dedicated models. We charge model licensing fees based on the number of active user accounts, with both perpetual and fixed-term licenses available. For perpetual licensing, we often include a clause requiring additional fees for upgrades and maintenance.

Lastly, customized model services typically take the form of a service fee, mainly for customized model training, fine-tuning, or development based on the client's needs. As for financial forecasts for generative AI, our revenue from these models tripled from 2022 to 2023. From 2023 to 2024, we anticipate the business volume could double again, with generative AI possibly accounting for about half of the company's total revenue in 2024. It is challenging to predict long-term growth precisely, but it should be very high; we expect a CAGR of around 50% for this segment from 2023 to 2028. While generative AI is expected to contribute 50% of our revenue this year, by 2027 or 2028 it could account for nearly 75% of our total revenue. Regarding the impact on gross margin, it is hard to judge with precision.

As I mentioned earlier, gross margin could fluctuate as various generative AI business models develop. We expect the long-term gross margin impact to be neutral for the company, not straying far from the 40%-ish level. Due to time constraints, we will answer the last question, from the web portal: after the unfortunate passing of Professor Tang, who inherited his shares? What are the major impacts on SenseTime's shareholding structure and the company's strategy, and how will the company reasonably reward investors in the future, especially given its low stock price? These questions reflect the deep concern of many investors. Firstly, I would like to thank our investors for their interest in SenseTime. Following the passing of Professor Tang, his equity was inherited by his family.

In fact, his family has always expressed unwavering support for the company's management and continues to hold the shares through Amind, with a commitment to maintain their lockup status. Our management team remains stable and will continue to drive change in the AI industry, particularly in the transformation to AI 2.0. From the group's perspective, the stability and consistency of governance are guaranteed. Strategically speaking, the strategy set by Professor Tang was to generate good returns for the industry through original technology. In AI 1.0, we entered a cost-benefit period, meaning we were able to generate stable returns and positive cash flow through cost reduction.

This capability will go through the same process in AI 2.0. First, the industry needs to address the "can or cannot" question, breaking through the red lines of industrial usage. Second, building on that, it needs to resolve the cost question.

Third, at the product end, we need to add services, eventually forming a closed loop of industry value. SenseTime has gone through this whole cycle and is capable of leveraging our past experience to promote industrial transformation in generative AI. Regarding our stock price, I would first like to apologize to our shareholders. Our current stock price is around 20% of the IPO price, for several reasons. First is the overall macro environment. Second, there have been sales by pre-IPO shareholders: last year, our second-largest external shareholder exited their holdings, and this year, our largest external shareholder has been selling, which was reflected in the stock price. Of course, the selling had systematic reasons of its own. From management's perspective, the stock price is severely undervalued, as it is already below our net asset value.

SenseTime's assets, apart from the cash portion introduced by our CFO, Wang Zheng, also include a large amount of computing power and infrastructure. On our books, this computing power is carried as assets net of depreciation, so its actual value should be higher. From an assets perspective, SenseTime's stock price is currently very low. We hope that through the transformation of last year and the first half of this year, our business will be in very good shape to meet future challenges and extend our commercial capabilities into AI 2.0. At the same time, we aim to maintain healthy and stable cash flow from the remaining AI 1.0 business. Finally, management will make our best efforts, including maintaining the lockup of our shares, to recover our stock price and reward our shareholders and employees.

Operator

Thank you, Dr. Xu Li.

It is now 7:00 P.M. Thank you again, Dr. Xu Li, Mr. Xu Bing, and Mr. Wang Zheng, and all of today's participants. Thank you for your attention and attendance. That concludes our results briefing. Thank you.
