SenseTime Group Inc. (HKG:0020)

Earnings Call: H2 2022

Mar 28, 2023

Michael Chan
Independent Non-Executive Director, SenseTime

Hello, good afternoon, everyone. Welcome to SenseTime's 2022 annual results announcement. I am today's host, Michael Chan. With us today are Dr. Xu Li, our Chairman and CEO; Mr. Xu Bing, Co-founder and Board Secretary; and Mr. Wang Zheng, our CFO. Before we begin, we would like to remind you that today's discussion may contain forward-looking statements which involve a number of risks and uncertainties. Actual results and outcomes may differ materially from those mentioned in today's announcement and this discussion. For a detailed discussion of these risks and uncertainties, please refer to the latest announcement submitted by SenseTime to the Hong Kong Stock Exchange. The company does not undertake any obligation to update any forward-looking statement except as required by law. During today's session, management will also discuss certain non-IFRS financial measures, which are for comparison purposes only. For today's announcement, management will use Chinese as the main language.

Interpreters will provide simultaneous English interpretation during our prepared remarks session. Kindly note that the English interpretation is for convenience purposes only. In case of any discrepancy, management's statements in their original language will prevail. On today's call, our CEO Xu Li will first provide an overview of our technological developments and results, followed by Mr. Xu Bing, who will go over our business developments, and lastly Mr. Wang Zheng, who will go over our 2022 financials. To end, CEO Xu Li will wrap up with our strategic outlook. After that, we will go straight into our Q&A session. I will now pass the floor over to Dr. Xu Li.

Xu Li
Chairman and CEO, SenseTime

Hello, everyone. Welcome to our 2022 annual results presentation.

Although the macro environment was rife with challenges in 2022, it was a year of remarkable progress for the development of AI, which gained a lot of attention in the market. Allow us to share with you some of the progress we have made this year. In the past year, the company expanded into new fields of AI applications and diversified our business scope. We are pursuing a strategic realignment of our business, as mentioned previously. Our Smart Life and Smart Auto business segments have flourished and delivered significant revenue growth, raising their revenue contribution. On the other hand, the contributions of our Smart Business and Smart City segments have decreased. In 2022, total revenue of SenseTime Group was RMB 3.8 billion, a 19% decrease compared to last year.

Gross profit was RMB 2.5 billion, with a gross profit margin of 66.8%. Excluding share-based compensation (SBC) expenses, R&D investment amounted to RMB 3.8 billion, representing a year-on-year increase of 24%. Our net loss also significantly narrowed, down 64% to RMB 6.1 billion, and our adjusted net loss was RMB 4.7 billion. This year, significant progress was made in the field of artificial general intelligence (AGI), driven by large-scale computing. SenseTime has always been committed to building a "large models plus SenseCore" technological system, and we are glad that our efforts are bearing fruit. Allow me to share with you some of the important technological breakthroughs we achieved in 2022. First off would be our SenseCore AI infrastructure.

As most of you know, complex and massive AI models require plenty of GPUs for training. Since our inception, we have invested more than RMB 10 billion into our infrastructure, and the results are paying off and being validated by the industry now. We have established one of the largest AI supercomputing infrastructures in Asia, which is now officially opening up its capabilities to external customers. With the support of SenseCore, more than 10 large model training projects have been kicked off, covering user-defined large model development across visual-based, language-based, and multi-modal areas. SenseCore also reaches an impressive computing output of 5.0 exaFLOPS based on the parallel computing capabilities of more than 27,000 GPUs, which can effectively support simultaneous training of 20 large foundation models while utilizing thousands of GPUs in parallel.

SenseCore allows our customers to efficiently train large AI models by providing high-performance computing resources, a rich library of pre-trained models, easy-to-use development tools, and professional technical support. With such powerful resources on hand, we are the infrastructure service of choice in the era of AGI, empowering our customers to achieve their business goals and technological breakthroughs. In the last decade, demand for supercomputing power has grown exponentially. With the breakthroughs we have seen this year in large AI models, the demand for high-quality, all-rounded supercomputing services is insatiable.

In the past two to three years, SenseTime has extended past our original CV-focused frontier and expanded into other realms of AI, covering areas such as NLP, speech synthesis and generation, AI-generated content, decision intelligence, and AI chips. Lately, we have successfully developed a pre-trained large language model with over 100 billion parameters. This model and its capabilities will be available on the market by mid-2023, which we are very excited about. Moving on, SenseTime's roadmap in large models can be traced back to 2019, when we released the first computer vision-based large model with parameters in the billions. To give you a better idea, the supercomputing power needed for a billion-parameter visual model is equivalent to that of a language-based model with tens of billions of parameters.

From 2021 to 2022, we have been training vision-based large models with parameters in the scope of tens of billions, the requirements of which are equal to those of a language model at the parameter scale of hundreds of billions. Since 2021, we have also gone into the area of NLP, developing natural language models, decision intelligence models, and multimodal models. We open-sourced OpenDILab, a decision intelligence platform, in 2021, and it has received over 10,000 stars on GitHub, which is a significant achievement. We have also successfully developed the world's largest CV foundation model with 32 billion parameters, realizing higher-performance object detection, image segmentation, and multi-object recognition capabilities. It has been widely applied in areas such as autonomous driving, industrial quality inspection, and medical imaging.

For example, our BEV (Bird's Eye View) perception algorithm won the 2022 Waymo Challenge Championship in the main track. In terms of AIGC, our models support the creation of 6K ultra-high-definition images and can even comprehend traditional Chinese culture, such as generating corresponding images using Chinese poetry as prompts. When it comes to speech recognition and synthesis, our models can generate speech with different tones and timbres according to users' needs. This year, we launched and open-sourced our multimodal, multitask universal large model, also known as Intern 2.5, which is one of the most powerful integrated open source models available on the market. All of our breakthroughs allow us to make further inroads into the power stack of AGI. [crosstalk]

This is something that we are all very excited and happy about, and we believe that it will create significant inroads into the development of Artificial Intelligence. Training large models cannot be done without proper infrastructure, and SenseTime has been investing in this area for years. Not only does our SenseCore infrastructure serve our own researchers, as mentioned just now, its services are also now open to external clients. Having more users from different areas brings along significant advantages. Firstly, we are able to receive more feedback and subsequently improve our services through a positive feedback loop. Secondly, opening up our AIDC allows for better economies of scale and hence lower costs, which will lower the cost of AI adoption for our customers.

Thirdly, with more users, we are able to expand the AI ecosystem, which can further enhance the network effect and contribute towards the general trend of AGI. As of March 31, 2023, SenseCore is capable of stable parallel training on up to 4,000 GPUs and can support more than seven days of uninterrupted training, all of which are widely considered impressive achievements for supercomputing facilities. Currently, SenseCore has supported more than 10 large model training projects covering user-defined large model development across visual-based, language-based, and multimodal areas. We have served eight leading large-scale customers, providing them with more than 7,000 dedicated GPUs for training their very own large models. These customers cover key industries across the spectrum, including internet companies, leading startups, gaming companies, commercial banks, and even research institutions.

Since 2018, our accumulated R&D investment has reached RMB 12.8 billion. As of December 31, 2022, our R&D team comprised 3,466 employees, accounting for around 68% of our total headcount. Our per capita R&D efficiency also continues to improve. On average, the number of models produced by each of our R&D personnel per year has increased by 90% from 2021, and the cumulative number of commercialized AI models has increased by 93%, reaching 67,000 in total.

In terms of the capabilities of our various AI models, allow me to provide you with a few specific examples. Our large models span natural language processing (NLP), AIGC, and decision intelligence. Since 2021, SenseTime has been developing its own NLP model, which has now reached 180 billion parameters. It performs well in interactive Q&A, comprehension and generation of new knowledge, and adaptability to different knowledge fields. For example, last year when we first launched our digital avatar product, we sought the advice of our NLP model to come up with a product name. I informed our model that this digital avatar product was a cloud-based service. Our model then suggested the name RuYing Digital Avatar.

I asked the model how the name was derived, to which it explained that the name was excerpted from the traditional Chinese phrase meaning "like a shadow," essentially implying that the digital avatar is a copy of the original user. I went on to request the model to provide a general description of the product, as well as a commercial jingle for our product launch. Both requests were completed in the correct context, and it even managed to provide a rhyming jingle for our RuYing product, all of which are impressive accomplishments within the NLP landscape. In another example, here you will see SenseTime's latest results announcement, which was published earlier this afternoon. In the interest of time, perhaps some of you may not have had time to read it.

Here, our product SenseChat was able to summarize it down to 100 words, and it did so quite concisely. Moving on, let's look at an example of AIGC, where AI is able to generate dynamic pictures from mere words. As mentioned just now, our model can come up with an accurate painting just by interpreting text from abstract traditional Chinese poems. In this case, you will see that we have provided it with just a few simple phrases and excerpts from traditional poems, which are known for being difficult to interpret due to their abstractness. Even so, our model was able to come up with dynamic and detail-oriented images that match the meaning of the poem itself. Not only can we recreate traditional-style images, we can also generate realistic, even photography-like images from words.

Here, images are created using cues related to Hong Kong, since all three of us speaking to you today are streaming live from our Hong Kong office. In this case, even though these were just generated by our AIGC software, we can see that the light and shadows in each image look incredibly authentic, and there is a significant aperture effect. Generating an aperture effect itself is not difficult, but knowing what to focus on and what to blur involves interpreting the essence of a photo. Let's look at a few more application scenarios. This case demonstrates the virtual try-on effects of items such as clothing and apparel on various models. You can now even specify the exact pose of the model and generate a resulting avatar wearing that specific piece of clothing or accessory.

As a result, customers can get a general sense of how the clothing will look and feel on themselves. As mentioned just now, we launched our digital avatar platform product, RuYing, last year. Linking this platform with our NLP large model, we can generate a fully performable digital avatar with just the press of a button. In this case, you'll see a digital avatar generated right here, which is a very popular example right now. This 2D digital avatar is based on an actual human being, and all her features are captured from her real-life persona. She is also able to converse with the audience as if she were a real human being. She is currently introducing herself to you all.

Next up, we will show you how we can apply a cartoon effect to our human-like avatar. We were able to use different effects, in this case applying a Pixar-like style to our avatar, and we can change the styles of our different avatars as well. Last year, during the launch of this product, I introduced our platform using a digital cartoon avatar throughout the entire process, which was definitely a point of amusement for our audience. Now, for China to develop its own large models, it needs localized computing power, which is quickly catching up to the quality of that from overseas. SenseTime has always been a first mover in this area. We started software adaptation for domestic chips several years ago, and some of them have already been optimized for use within our SenseCore infrastructure.

Apart from hardware, software is also key. We have our own deep learning platform, SenseParrots, as well as all-rounded open source platforms such as OpenMMLab, OpenDILab, OpenXRLab, our OpenGVLab universal vision open source platform, and lastly, OpenPPL. We aim to provide diversified algorithm and model support to enhance the community as a whole. With that, I will now pass the floor to our Co-founder and Board Secretary, Xu Bing, who will provide an overview of our business developments.

Xu Bing
Co-founder and Board Secretary, SenseTime

Thank you, Xu Li. I will now briefly go over our four business segments. Here's a snapshot of our segments. Last year, our focus was on large model development. With the support of SenseCore, we've undergone a comprehensive upgrade of our product matrix across all four business lines, allowing us to achieve a strategic transformation and realignment of our business structure. We have diverted attention away from a small-model base and one-off customized offerings, and instead focus more on standardized offerings based on large models. You can see Smart Life saw a year-on-year revenue increase of 130%, reaching a new high of RMB 960 million. Our Smart Auto business is also up 59% year-on-year, reaching RMB 290 million.

Smart Business was impacted by general macro conditions. We actively refocused our resources on AI-as-a-service related revenues, which account for 20% within this segment alone. Smart Business saw a year-on-year decline of 25%, reaching RMB 1.5 billion last year. Smart City also experienced a strategic realignment this year, with its revenue down 49% to RMB 1.1 billion. Our focus for this sector remains on revenue quality optimization. Now let's take a deep dive into each segment. Smart Life had the fastest revenue growth, actually a record high for us: RMB 955 million, a 130% year-on-year increase, as mentioned.

Its revenue contribution also increased from 9% to 25% of the overall pie, while ARPU and total customer numbers increased by 86% and 23% respectively. Thanks to our strong capabilities in large models, we achieved high-speed growth in multiple business lines, including sensors and AI ISP chips for internet mobile devices, AIGC and multi-modal empowerment for the metaverse, SenseCare and AI education, and our first consumer-grade AI robot for households. Due to time constraints today, we'll specifically focus on the AIGC and mobile device sub-segments. Deploying large-scale pre-trained generative models, SenseMARS's AIGC capabilities have empowered over 200 apps, including Xiaohongshu, Weibo, Bilibili, Line, and Kakao, for instance. SenseMARS's application to the offline metaverse has also expanded to 15 million square meters, encompassing top-tier venues like large amusement parks, shopping malls, museums, and banks.

Our digital human generation platform achieved an industry breakthrough. Thanks to our self-developed NLP interaction capabilities, our digital human product scored full marks in a customer POC. As a result, one of the major banks in China has now adopted our product. According to a report released by Frost & Sullivan, SenseTime's digital human product has entered the maturity stage and become a market leader, achieving the highest score in six out of 10 evaluation indicators and ranking first in overall competitiveness. Our AIGC segment is a core driver for our Smart Life business, and we have more to come this year. Now, let's take a look at mobile devices. We offer a product combination of AI SDKs, sensors, and ISP chips, empowering clients with powerful capabilities.

In 2022, production of new smartphones equipped with our SDK reached 450 million units, with our super-resolution and portrait series functions capturing the highest market share. Our video highlight moments and intelligent album features successfully captured clients such as OPPO, Vivo, and Honor. On the AI sensor side, we delivered six high-performance AI sensors. In particular, our RGBW AI sensor has been mass-produced and launched in Vivo's flagship phones. Our first AI ISP chip has been successfully activated, achieving a 50% reduction in power consumption compared to similar products while processing 4K ultra-high-definition video with AI. The chip has also enhanced video quality and resolution, and we continue to produce new generations of ISP chips.

Let's take a look at our Smart Auto business. Our Smart Auto segment achieved revenue of RMB 293 million in 2022, a year-on-year increase of 58.9%. Its revenue contribution increased from 4% in 2021 to 8% in 2022. Last year, our SenseAuto Cabin and SenseAuto Pilot products were adapted and delivered to 27 vehicle models, with over 500,000 mass-production vehicles equipped with SenseAuto's technology. Mass production revenue increased significantly, with over 500,000 vehicles delivered in 2022 and more than eight million new vehicle orders. The main customers for SenseAuto Cabin include NIO, GAC, BYD, and Changan.

SenseAuto L2+ and L2++ advanced driver assistance system (ADAS) products were mainly delivered to the flagship models of GAC and Audi. In 2022, we added more than eight million new vehicle orders. Our customers include GAC, Nissan, BYD, Hozon, NIO, Zeekr, Changan, and Chery, for instance, covering more than 80 car models from more than 30 auto companies, and we continue to lead the industry. Thanks to the accumulation of technology and products across multiple business lines, we have completed the industry's first launch of over 10 intelligent cabin products, including multi-modal interaction, sensory mode, emergency alert, air touch, AR karaoke, intelligent screensaver, and much more. Our market-leading SenseAuto Cabin solutions are the result of a continuous effort to innovate and improve our product offerings.

This has led to increased value per vehicle, comprehensive coverage of cabin functionality for automotive companies, and a virtuous cycle among our SenseAuto platform, automotive companies, and consumers. Last year, our business milestone was the mass production of our L2+ and L2++ ADAS solutions. Key functions such as high-speed piloting and city piloting were well received. We're the only Smart Auto Tier One software provider to mass-produce L2++ functions, and we're able to cater to GAC's and Hozon's flagship models. These offerings have allowed us to increase our pricing flexibility and gain greater bargaining power with vehicle manufacturers.

In December of last year, the flagship model of GAC Aion, equipped with our SenseAuto Pilot solutions, was launched and achieved intercity highway autonomous driving with zero takeovers throughout the 100-kilometer highway test from Guangzhou to Shenzhen, excelling in lane changing, intelligent recognition and avoidance of vehicles, overtaking, intelligent recognition of optimal lanes, and automatic acceleration and deceleration in road sections with speed limits. We believe that in 10 years' time, the market size for these products will reach up to 100 million cars, and we intend to capture a significant share of that market. In 2022, our Smart Business division underwent strategic realignment and experienced a 25% year-on-year decrease in top line, reaching RMB 1.5 billion. We served 717 clients, which was a 22% decline year-on-year.

We also saw a 3.8% year-on-year decline in ARPU. Last year, we opened our computing capacity to industry customers, empowering their AI model development, and have started to generate revenue from external services, which comprised 20% of the Smart Business segment's overall revenue. The revenue generated by SenseCore is expected to increase significantly this year. Despite short-term challenges posed by the pandemic and the macroeconomic environment, we observed that central state-owned enterprises (SOEs) and leading companies in various industries have increased their investment in digital transformation and intelligent upgrades in response to the 14th Five-Year Plan. We believe our large models will help lower the cost of AI adoption, hence furthering our business. Last but not least is our Smart City business.

We took the initiative to strategically reposition our Smart City business to reduce the Group's reliance on it. Last year, its revenue was RMB 1.1 billion, accounting for 29% of the Group's total revenue, a decline from 46% in 2021. Moving forward, we aim to optimize our revenue mix by generating more high-quality revenue and serving top customers, so as to foster solid growth in our Smart City business. Despite our repositioning, the Smart City business has started to recover in the first quarter of 2023. With this, I'll pass the floor to our CFO, Wang Zheng.

Wang Zheng
CFO, SenseTime

It's my pleasure to share with you SenseTime's full year performance. SenseTime had negative revenue growth of 19% last year, mainly due to negative external impacts on our historically dominant Smart City and Smart Business segments.

That said, we anticipate these segments will recover as macro conditions stabilize. It's worth noting that our long-term investments in Smart Life and Smart Auto have paid off, with revenue growth rates of 130% and 59% respectively, a historical high for these two segments in terms of publicly disclosed revenue information. Our comprehensive capabilities in SenseCore and large models were integrated and presented at the Shanghai Lingang AIDC last year, serving both internal R&D needs and external commercialization needs. Our overseas business achieved revenue growth of 16%, increasing its share of total revenue from 12% to 17% last year. This reflects our diversification across businesses and geographies, as well as our continuous improvement in resilience as a leading enterprise in the industry.

Our full year gross margin was 66.8%, slightly higher than the 66% in the first half of 2022, but lower than the 69.7% for full year 2021. The main reason for the decline was a slight increase in the proportion of hardware products delivered to customers last year. The proportion of hardware costs increased to 29.6% from 29.4% the year before, and it often fluctuates based on changes in customer demand. In addition, as we started to generate revenue from the AIDC, depreciation appeared for the first time in cost of goods sold, accounting for 1% of revenue. Despite the temporary decline in revenue and gross profit in 2022, we are optimistic about the AI industry and SenseTime's leadership position in it.

We're pleased to see that our strategic segments such as Smart Life and Smart Auto are bearing fruit and growing healthily at an increasing rate. Our long-term investments in SenseCore and large model capabilities will also be of great use. In this context, we continue to focus on long-term strategic investment, especially in R&D. These combined factors, coupled with an increase in the provision for impairment on trade receivables and FX-related losses, led to the adjusted net loss for 2022 as shown on the right. Let me introduce our revenue breakdown. While our revenue mix was previously dominated by Smart Business and Smart City, the two emerging verticals, Smart Life and Smart Auto, accounted for one-third of total revenue last year.

We expect the long-term growth rate of these two segments to continue to outpace the company's overall growth rate, and we anticipate them playing a significant role in driving the company's future success. Furthermore, our AIDC platform achieved a commercial breakthrough in 2022, generating high-quality cloud revenue, and is gradually becoming the next engine of revenue growth. As we have previously provided detailed introductions to each of our segments, we will not repeat them here. On the operating expenditure side, we continue to invest in R&D areas with long-term strategic value and expand in selected markets, based on our long-term optimistic outlook on AI. At the same time, we further tightened cost control, especially over G&A expenses, in the face of economic uncertainty last year.

As shown on the right side of the slides, after excluding SBC, our total operating expenses increased by only 20% last year, the slowest growth rate in our track record period. Some of the efficiency improvement measures taken in 2022 may not be immediately reflected in the operating cost figures, but they will lay a solid foundation for a sustained slowing of the growth rate of these expenses in the future. We face some challenges in our working capital. On the left, you can see our various turnover days based on the period-end method. On this basis, our cash conversion cycle was relatively stable from 2020 to 2021, but deteriorated by about 8% in 2022.

The key factor is still the increase in trade receivable days as cash collection became more challenging, and the increase in the provision for impairment reflects this. Fortunately, we have doubled our trade payable days through better management of payables, which partially offsets the increase in trade receivable turnover days. As a result, the total cash conversion cycle is similar to that of 2021. The significant increase in the net loss from impairment of financial assets in the lower right corner of the income statement is a natural result of the increase in the provision for impairment on the upper right, and its increase as a proportion of revenue is directly related to the decline in revenue. Last year was a significant year for SenseTime's CapEx.

The largest CapEx item, of about RMB 3.3 billion, around 63% of total CapEx, was the one-off purchase of a new office in Shanghai West Bund. The other part related to the construction of computing power at the Shanghai Lingang AIDC, a considerable part of which has already begun to be commercialized externally. The company's net cash reached RMB 11.9 billion at the end of 2022, which includes structural deposits but excludes the equity and bond investment balances listed on the right side, which amounted to RMB 6.7 billion at the end of 2022. The chart on the right shows the year-end fair value balance, and the bond investments are all investment-grade US dollar bonds managed by third-party professional distributors.

In addition, the total unused credit line granted by banks reached RMB 9.9 billion at the end of 2022, an increase from RMB 8.1 billion at the end of the year before. Our overall financial strength still gives us a considerable advantage compared to most competitors. That concludes the financial update. I will now pass the floor back to Xu Li. Thank you so much.

Xu Li
Chairman and CEO, SenseTime

Thank you, Wang Zheng. Lastly, I'm delighted to close with our AGI strategy. A new stage of human evolution has arrived, and AGI will open up endless possibilities, which is incredibly exciting. Looking forward, SenseTime will pursue breakthroughs in AGI as its core strategy. To accomplish this goal, we will delve into and explore several key areas. To start with, we consider high computing power, supercomputing power, to be a core driving force behind achieving AGI. SenseTime will use SenseCore as its core platform and continue expanding its capabilities by investing in high-performance computing infrastructure, conducting in-depth research on parallel computing and distributed systems, and accelerating the training and iteration of more powerful and complex AI models. Secondly, large multi-modal models.

SenseTime's strategic investment is focused on the R&D of large multi-modal models that can better understand and generate multiple data types, spanning text, images, audio, and video, while also having multitask generalization abilities. Thirdly, hardware/chip collaboration. SenseTime will collaborate with leading global chip manufacturers to promote the development of AI hardware and chips, improve AI computing power and energy efficiency, and reduce latency. The last one would be AI for all. SenseTime is dedicated to promoting AI technology in a way that benefits all of humanity. To that end, we have established and are implementing the "AI for All" development goals. We strive to make AI more accessible and affordable by lowering the cost and threshold of AI technology, while promoting open source and collaboration to explore the diversity and inclusiveness of AI.

We place a strong emphasis on ethics and fairness issues and will work with different parties to develop policies and regulatory frameworks for AI technology. Lastly, business empowerment. We will continue to empower our different business segments through SenseCore and our large models and enhance our ecosystem. Finally, we would like to let our investors and shareholders know that we will hold a product day in early April, where more large model products will be released. We look forward to this product launch. Do stay tuned. Thank you very much for your support.

Operator

With this, we will now open the floor for Q&A. We will first invite our in-person participants to ask a few questions. For our webcast participants, please type your questions into the Q&A window on your device. For our in-person attendees, kindly raise your hand, and our staff will provide you with a microphone. Do state your name and company before asking your question. I will now open the floor for questions. Thank you.

This is Chen Kai from CICC TMT Research. ChatGPT has been a big hit since its release, and SenseTime is definitely a farsighted company that invested in large models at a very early stage. We have been in discussion with SenseTime since its early days, and we know your development history. What is your view on this industry trend now that ChatGPT is generating such heat?

What kind of changes do you think multimodal large models will bring to the industry? Secondly, how do you think the Model-as-a-Service business model will be implemented in China? I would also like to ask about the CV frontier. SenseTime, as an early starter in the CV field, has accumulated sufficient data in CV, and the key to further ecosystem development is obtaining more data sets and more clients. How will you overcome the lack of textual data when you develop cross-modal large models? And which vertically focused partners would you consider cooperating with?

Thank you very much, Chen Kai. We believe that cross-modal large models are the obvious trend in the future development of Artificial Intelligence.

Currently, various industries are gradually applying Artificial Intelligence technologies, but many application scenarios require processing cross-modal data such as text, images, and speech, and a single-modal model is no longer sufficient to meet that demand. Therefore, the emergence of cross-modal large models is of great significance, which is why we at SenseTime started researching computer vision-based large models as early as 2019 and expanded into multimodal domains approximately two years ago. With NLP-based big models now being widely received on the market, this can be considered a major paradigm shift. Since vision-based information is much richer and more diverse than natural language, the former has not been as fully explored as the latter.

Language-based big models can serve as a bridge to a large number of previously unsolved problems and can even open up a huge number of previously unheard-of possibilities. We are confident that this paradigm shift is bound to bring about huge industry changes. Why do we emphasize multimodality for the future, you may ask? Essentially, since our multimodal large model is capable of interpreting visual information, it can be connected to a decision-making model, which further steps up its capability. We believe these inflection points will bring revolutionary changes to the industry. In the future, with the increase of data, improvement of computing power, and advancement of technology, the application of cross-modal large models will become more and more widespread, bringing significant changes to many fields.

In terms of business models, Model-as-a-Service can deploy pre-trained AI models to the cloud, edge, and end devices with a single click, thus helping enterprises reduce the threshold and cost of using Artificial Intelligence while improving model performance and service levels. Today, as Artificial Intelligence is being widely applied across industries, we hold an optimistic view of its future adoption in China. Model-as-a-Service is also an important part of our SenseCore cloud-native services. We not only provide a massive model library, but also tools related to model inference and deployment, making the deployment and commercialization of algorithms more convenient and accessible to everyone. In terms of natural language data, it is not uncommon to assemble tens of terabytes of data, i.e., on the order of tens of trillions of tokens.

However, once we involve multimodal information, the amount of data becomes even more massive. As a rule of thumb, the number of network parameters multiplied by the number of tokens, or the amount of data processed, determines the required GPU time. It is conceivable that as parameters become larger and multimodal data continues to increase exponentially, so will the demand for supercomputing infrastructure. In this context, we hold a couple of advantages. Firstly, we have the ability and experience to train ultra-large-scale visual models with 32 billion parameters, which is still the largest in the world. Secondly, our previous accumulation of multi-industry visual data provides a strong advantage on the data side. The combination of these two factors, along with our computing power and GPU accumulation, leads us to believe that SenseTime is in a very strong position in the development of multimodal models.
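The "parameters times tokens" rule of thumb mentioned above can be made concrete with a small back-of-envelope calculation. The sketch below uses the common industry heuristic of roughly 6 FLOPs per parameter per token for training (forward plus backward pass); the specific model size, token count, per-GPU throughput, and utilization figures are illustrative assumptions, not figures from this call.

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    """Rule-of-thumb training compute: ~6 FLOPs per parameter per token
    (forward + backward pass), so total compute scales as params x tokens."""
    return 6.0 * n_params * n_tokens

def gpu_days(total_flops: float, gpu_flops_per_sec: float,
             utilization: float = 0.4) -> float:
    """Convert a total FLOP budget into GPU-days at a sustained utilization."""
    seconds = total_flops / (gpu_flops_per_sec * utilization)
    return seconds / 86_400  # seconds per day

# Hypothetical example: a 32-billion-parameter model trained on
# 1 trillion tokens, on GPUs with ~312 TFLOP/s peak at 40% utilization.
total = training_flops(32e9, 1e12)
days = gpu_days(total, 312e12)
```

Doubling either the parameter count or the token count doubles the compute budget, which is why multimodal data growth translates directly into demand for supercomputing infrastructure.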

We are grateful for the support from the industry and for our previous years of accumulation, which have allowed us to make such upfront CapEx investments. The impact on future industries, I believe, will be far-reaching. For example, finance, healthcare, education, and some niche segments within Smart Life are being refreshed. We are now connected with clients in these industries and are discovering that the changes and transformations for them are quite significant. As a result, we will see some large-scale transformations in domestic industries beyond the open internet, which will then gradually radiate to more and more people. I think this shift starts with changes in some large B2B industries, followed by the gradual development of more new products and applications on the consumer end.

I believe this will be a gradual process. Thank you, management.

This is Marley from CMBI. I have some questions related to computing power. With the U.S. imposing restrictions on the export of high-end GPUs, do you have enough GPUs in reserve? What is the progress of domestic GPU adaptation? In addition, what are your plans for AIDC computing power scale this year, and what does the demand side look like? In the medium and long term, will the proportion of your AI-as-a-Service revenue further increase? Those are my questions. Thank you.

Thank you, Marley. Allow me to briefly respond. First of all, it is true that the U.S. embargo on high-end chips has a great impact on China's development of big models. This matter also reflects why we should be forward-looking in our strategic planning.

While the restrictions imposed by the U.S. will certainly affect the supply of computing power to Chinese companies, SenseTime proactively procured GPUs and adapted to domestically produced chips early on. All of this has helped mitigate the negative impact of the chip restrictions. Currently, the company has 5,000 petaflops of computing power and nearly 27,000 GPUs, the latter purchased relatively early on. We are able to train up to 20 large models with 100 billion parameters simultaneously. We also have the bandwidth to provide services to both internal and external parties, including large model training for eight clients and other AI-as-a-Service customers, supporting others' development of their own large models. We observe that demand for AI computing infrastructure continues to be strong.

SenseTime is renowned in the industry for our GPU parallel computing capabilities and AIDC services, which is why many stellar internet companies and outstanding startups are coming to us for collaboration in this area. As Shirley mentioned earlier, expanding our customer base via our infrastructure services further helps us iterate and enhance its capabilities and strengthen its economies of scale, essentially lowering our per-user cost. Hence, we are expanding the GPU computing power of our infrastructure in an orderly fashion, and we are also importing the world's best GPU chips, such as the A800 series, in compliance with export control regulations. In terms of domestically produced GPUs, as we introduced, SenseTime started the adaptation of domestic chips a few years ago.

Computing power used for large model training cannot currently be replaced by domestically produced chips, as domestic chips are currently far better suited to non-large-scale training and to lowering inference cost in very specific scenarios. However, we do anticipate more progress in the domestic replacement of some fundamental capabilities within a year. The current trend of large models has led to a surge in demand from our clients. Our AI cloud service has a series of architectures specially designed for AI-specific scenarios, providing resources and expert support, reducing training barriers, and improving efficiency to help clients use AI resources effectively to complete their tasks. We have always been optimistic about our AI-as-a-Service business, and we expect its proportion to continue to grow.

In general, we believe that not only will the world's leading GPU vendors continue to invest resources in this area, but more and more domestic GPU vendors will actively participate in enriching this business model. We believe SenseTime is the link that enriches this entire ecosystem and value chain.

Shirley here. I would like to add that in the future, whether it's the 800 series or domestically produced chips, from the perspective of large model training, the issue is not availability, but rather efficiency and cost. Domestic chips will become more and more compatible, but they could have lower efficiency or even higher cost. Therefore, the main problem we need to solve here is compatibility. Also, the question of availability can actually be resolved within the domestic ecosystem itself, and even within the broader Chinese ecosystem. Thank you.

Hello, management. This is Ryan from DBS. Due to a series of factors such as the pandemic in 2022, your Smart City and Smart Business segments faced pressure in terms of payment collection and revenue recognition. How do you expect the situation to improve in 2023? We have also observed rapid growth in the Smart Life and Smart Auto segments, which indicates a significant structural change in the revenue contributions of your four segments last year. How does the company anticipate the future revenue proportions of these four businesses will change, and what is their importance in generating revenue for your company? Thank you.

Thank you very much, Ryan. This is Wang Zheng here. The overall macroeconomic situation seems to be stabilizing now.

After the easing of the pandemic, we have seen the efficiency of the government and enterprises largely return to previous levels. In the long run, we believe that urban and enterprise digital upgrades are still in high demand. Therefore, we think that after the twists and turns of 2022, these two business areas should improve in the long run. Looking at 2023, we believe there are opportunities for these two sectors to resume growth. Let me elaborate a bit more on the Smart City sector. The government itself is still actively supporting regional digital upgrades through various forms, such as project-specific bonds. We are strategically focusing on higher quality opportunities with faster payment cycles for smart city digital upgrades to promote the healthy and stable development of this business. As we mentioned, enterprises have a very high demand for the capabilities of SenseCore's AI infrastructure.

In 2022, the revenue generated by SenseCore for external clients accounted for roughly 20% of the Smart Business segment. We expect that in 2023, due to the release of this huge demand, the contribution of this segment will increase significantly. That is, the external revenue of SenseCore should experience substantial growth, possibly even doubling or more. This will drive our overall Smart Business segment to resume growth in 2023. Your question mainly concerns 2023 and beyond. There is indeed a rather difficult-to-quantify factor, which is our generative AI, or AIGC for short, which we mentioned just now. In the long run, I believe it will have a profound impact on the use of AI in both urban and commercial settings. However, it is indeed challenging to quantify this impact clearly at the moment.

Nevertheless, the adoption of AI should be a positive process overall. It may not seem very evident in 2023, but we are confident that in the long run, this will experience high-speed, long-term growth. Regarding your second question about structural changes among our segments, we have already experienced rapid structural changes from 2021 to 2022. The two fast-growing businesses, Smart Life and Smart Auto, increased their share of revenue from 13% to around 33%. I remember six months ago, we said we hoped that by 2025 these two businesses would account for 40% to 50% of revenue. It seems we need to adjust that proportion further upwards. In other words, the fast-growing segments will make up more than half, while the other two segments will also grow, but their proportion will decrease slightly.

I want to emphasize that our AIGC segment is a variable here. Also, regarding the SenseCore-related revenue that Wang Zheng just mentioned, most of you understand the demand and usage to be for training. However, keep in mind that inferencing also requires large-scale infrastructure, as inferencing with large models is a computationally intensive task. When we consider inferencing businesses, we also find that the demand for GPUs is very high. Thank you, everyone. Due to limited time, we will now end the Q&A session. Thank you, everyone, including our management team, for your participation. This concludes our conference today. If you have further questions, please feel free to contact our IR team. Thank you.
