Good afternoon, everyone. Welcome to Meitu's 2025 Investor Day. I'm your host today, Coco. Today's event is divided into two parts. First, we will demonstrate several of our products. Second, we have invited Mr. Wu Xinhong, Chairman and Group CEO, Mr. Yan Jinliang, CFO, and Mr. Chen Jieyi, CPO, to engage in a discussion with everyone. Without further ado, let's begin the product demonstration. As you know, we categorize our products into leisure products and productivity tools based on users' needs. Today, under leisure products, we'll showcase the Meitu app for photo editing and Wink for video editing. Under productivity tools, we'll present XDesign for e-commerce material design and KaiPai for talking-video creation. Finally, we'll demonstrate our AI agent, RoboNeo.
First, let me demonstrate the Meitu app. Your impression of it may focus on portrait beautification, but it also delivers excellent results in general image-editing scenarios. Let's look at a basic example. We just passed our National Day holiday, and you all felt the crowds. Suppose we took this photo at a tourist attraction: there are many distracting elements, so the photo cannot focus on the main subject. Now we want to remove those elements in the background. We click the button in the middle, the removal brush. With the traditional function, we can use classical removal to eliminate some background elements, for example, the people. You will find that larger objects may not look clean after removal: because it works by stretching pixels, you will see signs of blurring. Now let's look at the right part of the picture, where we use the GenAI-powered removal function to remove the elements we don't want.
This technology first recognizes the objects we want to remove and then, based on its understanding of the background, generates a fill similar to the original picture. The left function is free; the right one is paid. As this example shows, users choose the paid option over the free one because the effects are better. This is also one of the reasons we rank in the top five of many app stores. Next, picture editing. Here we can see inadequate lighting: the face and body of the model are dark. We can improve the lighting effects. First, backlight correction. One of the main issues is that the overall brightness is low. Traditionally, we would just raise the overall brightness, like this.
You will find that after raising the overall brightness, the photo lacks the quality and atmosphere of a magazine shot, because shadows that should stay dark are also brightened. The result is not very satisfactory. With GenAI, the AI backlight function simulates natural lighting after generation: areas that should be brighter are brightened, while areas that should be dark stay dark. You can see the final result. After comparison, you can easily see that the GenAI-powered function produces better quality. Beyond what we mentioned above, GenAI can even create a group photo with your younger self, with a friend far away, or combining your childhood photo with your parents' childhood photos. Recently, we launched a new function called AI Group Photo, which combines two subjects very naturally. In this case, I select these two models.
Due to the time limit, we pre-generated many samples for your reference, like this one. This creative function powered by AI is quickly driving the globalization of [audio distortion]. Recently it has performed very well, ranking top in the app stores of 14 European countries and in many other regions. We will continue to propose more creative functions to attract more users from different countries. That covers photo editing in the Meitu app. Now let's turn to portrait beautification. When we shared our globalization path earlier, we said we wanted to find the common pursuits of global consumers. Let's look at this picture. Globally, our users want to do something related to anti-aging; we believe this is a common pursuit for global users regardless of region. You can see very clear wrinkles. Our editing goal is to improve the signs of aging on this model.
We could also use several free functions over many steps to do so. Today, we try something very simple. With one click, Facial Plump adjusts the lighting on the face to lighten the wrinkles. We also use the Even and Lift function for paid subscribers, which adjusts the muscle contours on the face and around the mouth and lips. Comparing with the original picture in high definition, across multiple dimensions, we used just three steps to achieve a natural effect. Next, let's look at a slimming example. This picture doesn't have many flaws, but we want better details around the model's waist and hips, so we make some minor adjustments. The background details in this picture are abundant, and if you want to adjust, GenAI lets us achieve background protection.
Before we adjust the waist, we protect the background first. You will find that users from Europe and America tend to care more about fitness, so we can create clear ab lines, which fits their aesthetics. Now let's turn to some niche functions: once you have this kind of demand, you will pay for it. This girl with braces wants to smile brightly. I also wore braces when I was young, and I refused to smile widely. Powered by GenAI, we have a teeth-adjustment function: we can remove the braces with just one click, and we can also generate aligned teeth. That completes this demonstration.
Coming up, we will focus on video editing and our Wink product, which follows a differentiated marketing strategy of its own. It has been a fast-growing product with two core functions: one is quality restoration, and the other is video portrait beautification. Today, we'll demonstrate both. First, we open quality restoration and restore a photo. Suppose we choose this photo here; on the left is the "before." You can see that the entire photo is quite blurry and low-resolution, and the water coming out of the holes is unclear as well. This is the quality-restored photo: almost every detail is now clear, and the lighting and details around the model's face have become clearer as well.
Let's use another example of a child. I believe many investors here, when your photos were taken as a baby or young child, faced the limitation of [audio distortion] — many of the old images we have can be made clearer by Wink. The AI bases its restoration on an analysis of the photos. Wink is mainly focused on video editing, but because we have video portrait beautification, photo restoration has also become a big hit function alongside videos. That's why we develop both. That was quality restoration for photos; now let's see video. This is a cartoon from 1961, and this is the original video. Restoring five minutes requires 54 credits in Wink; longer videos need more credits. Now I select this video. Because of the time limit, we prepared a sample of the restored video.
You can see the before, and now the after. Next, restoring a video of a live-streaming blogger. This was the before; now we move on to the after. The GenAI-powered restoration technology is based on the movements across the overall video. Now let's look at an example of GenAI-powered video restoration in a dimly lit environment. The studio is situated [audio distortion]. We see the before; now we see the AI-powered result. The difference is quite stark. The other core function of Wink is video portrait beautification. Here the blogger has no makeup on. Now we begin. First, I would like to color [audio distortion]. Suppose we choose headphones [audio distortion] on the blogger's face with some hollowing; we can plump the face by clicking the plumping function. We also add some lighting and whitening to the blogger's facial details and adjust the parameters. Now we dewrinkle.
We reduce the wrinkles, then add more detail to the blogger's facial features. Because the video is quite short, we can also add makeup. If you remember the effect without makeup, now you see the effect with makeup. Those were the core demonstrations of Wink and the Meitu app. Now I'll demonstrate the newly rolled out AI-powered Whirlwind. Before demonstrating, I would like us to think about our past experiences of shopping online. Did you encounter any unpleasant situations? For instance, have you ever received clothes purchased online and been dissatisfied? Why? Could it be because your physique didn't exactly fit the design of the clothes? Or the textures or materials did not match the product images? Or was it because you lacked extensive fashion-styling knowledge, so you simply couldn't replicate those stylish outfit combinations?
Perhaps you have a special or important occasion and don't know how to choose your outfit. Or sometimes your wardrobe is simply too full, and you don't know where to begin. Based on these pain points, we built a solution for online shopping. Suppose we open the app and log in. First, the app asks us to put in some basic information: your gender, your height, your weight, and your size — maybe you choose a regular size. For body shape, we provide some common physiques, such as the hourglass shape, the apple shape, and so on. Now suppose we choose sporty travel and vacation styles. For the region, perhaps we choose K-fashion. For the artistic style, we choose a minimalist dressing style. Here we see the AI-powered recommendations.
These are what the AI recommends we purchase. If we select the first set, this is a cute yellow outfit. We can click and see the specific details of each piece. Suppose we click on the yellow vest; here we see all the details of the vest — for instance, the sleeves, the shape of the collar, the patterns, the color, the seasons, and other fundamental details. In the recommendation, there are Taobao links; you can click on them and buy in no time. Now we return to the product itself. Aside from recommending based on consumer preferences, on the homepage there is [audio distortion]. Suppose the user chooses a recommended style; they can save it in the outfit shopping list, for example.
For instance, we can save it for tomorrow, and the system will remind you what tomorrow's outfit will be. Aside from recommendations, the user can also upload photos of their existing wardrobe. For instance, now we upload the photos, and the app scans and distinguishes the specific garments we have. It takes some time. We can see that the generated result is an exact replica of the clothes in reality: the length of the sleeve, the texture, the color, and so on are all reproduced exactly. We can click Add to Closet below, and the system will distinguish and sort the items for you — for instance, the clothes we just identified belong to topwear. This makes it easier for users to sort their clothes later. The system will also generate descriptions based on the uploaded photographs.
The AI system can also categorize clothes by their condition: wearable, should be disposed of, should be put on sale, and so on. Now, at the bottom center, there is a function called Wow AI; there is also a shortcut to it on the homepage. It allows users to communicate directly with the AI agent to do virtual try-ons, which is what we will demonstrate now. Suppose the user has a significant job interview and needs to buy an outfit, and types in, "What clothes should I wear for a job interview?" The AI assistant reasons about the user's scenario and generates outfits and combinations based on the user's preferences. Now we see the recommended combinations. If a user on the [audio distortion].
Now the user can upload a full-body or half-body portrait. When the results come back and the user is satisfied, he or she can click the bottom-right link to see more details of specific outfits. The links also lead to Taobao for direct purchase — for instance, these are the shoes that were just recommended. Another scenario: going into winter, users need some beautiful yet warm clothes. Now we type in, "What should I wear when the weather becomes cold?" The AI again generates recommendations based on the user's preferences and the weather conditions the user provided. Now we have AI [audio distortion]. Based on your uploaded portrait, it will generate the try-on effect directly; you don't need to regenerate.
If the user is satisfied with it, they can just click on the picture to go to the linked Taobao item. That was the introduction of Whirlwind. In the future, we will improve the whole product, the try-on effects, and the user experience. We want to achieve more stable generation and faster speed, and support more clothing types: we just demonstrated clothes, and we also want to do jewelry, hats, scarves, and other accessories. We want higher consistency, replicating the true cut of the clothes and fitting them onto different body shapes for more detailed and realistic representations, as well as matching clothes together. Next, let's welcome Erin to demonstrate our productivity tools: XDesign, DesignKit, and KaiPai.
Thanks, Coco. Let me demonstrate. XDesign is for e-commerce merchants to do SKU design, used on the web or PC. These are the common scenarios, and I believe you already have a basic understanding of XDesign, so in our limited time today I want to demonstrate some new functions. You'll also see that, beside the traditional homepage, we have an AI agent entry. We'll integrate this function into [audio distortion], and I'll show how the agent can empower e-commerce practitioners. First, let's look at the traditional core functions of XDesign. They are based on [audio distortion]: users just upload assets, and the tool didn't engage much in the design process. Now, with many new use cases, the agent can help e-commerce practitioners design. A merchant can now be his own designer, without paying for outsourced designers. Let me show the relevant functions — for example, refining an SKU: how many pictures do I need for different platforms like Amazon and Tmall? What angles, what shots?
Now, with the help of AI, and based on our investigation of users' needs in the major use cases, the agent can generate a series of relevant photos from the user's product. Let's demonstrate. Suppose today we sell women's wear and we have a very basic SKU. We ask the AI [audio distortion] about it. For example, I want SKU variants in three colors: light pink, dark green, and leather black. After the conversation, RoboNeo starts to work. We have a complete conversation sample. First, RoboNeo analyzed the [audio distortion]: it identifies a denim jacket and also the details, such as whether the background is transparent or complex. Based on our needs, it generates the three colors we want: the light pink; the dark green, which is my favorite, and I can show you the details; and third, the leather black.
RoboNeo replicates the texture of leather. After that, suppose I am just an entry-level e-commerce practitioner and I want a series of pictures — how many pictures do I need? I just say, "a series of photos for Amazon." First, it does a further, deeper analysis: what scenarios does this picture fit? RoboNeo considers it suited to street-style photos and leather outfits, with a very unique design on the shoulder, a zippered pocket, and some metal hardware — RoboNeo grasps all of this information. RoboNeo also points out potential improvements. For example, we have a pure white background, so it will automatically help us change the background or show try-on effects with models. First, it gives a refined version with the white background and four detailed demonstration pictures, including names for the different details.
Also, because it's generated by AI, you can re-edit the text, which saves a lot of time for e-commerce merchants. It gives you a demonstration picture without a model, with light and shadows. The next picture is a try-on with a model and interaction: the model wears the jacket naturally, the details are preserved, and so are the paintings in the background and the top. As for the length and width of the garment, it generates a size picture with two more details. This completes a picture set for Amazon, and it didn't ask for further prompts — very friendly for e-commerce practitioners. If you want more try-on pictures, say, "I want a try-on picture with an American model." It will do targeted analysis — the photo style, the background characteristics, and the features of the model — and then give us suggestions.
For example, it gives two different suggestions: the same model from the front and from the left side. I want even more — I want a video. Based on this picture, I want it to come alive and move. This is RoboNeo's processing: it finds that the picture can be divided into two planes and gives us details for each. You can see the two parts don't interfere with each other, and the model moves naturally. This time I want an Asian model. The same thinking process applies: first it analyzes the characteristics of this garment to ensure its consistency. Now we have a try-on picture with an Asian model; the effect is really natural and meets our requirements. Sometimes, RoboNeo is not sure of your demands.
It will ask your permission to proceed; this is also how RoboNeo increases its efficiency. We think it's good, so we proceed and ask for scenarios and backgrounds. First, a street scene like Fifth Avenue — the proportions are very proper. Second, an interior, at home with a brighter background. Third, leaning against a cliff in nature — very cool. You can imagine that if you really invited a model to shoot a picture like this, the price would be very high. Fourth, on a runway, a classic presentation. It preserves the integrity of the model and the details of the audience, including their hands and feet. Without further prompts, it generated these demonstration pictures, then asked whether to expand further based on one example — RoboNeo voluntarily asked to proceed. Now I want more poses for the runway picture. First, the front.
Second, from the side, with hands in the pockets — you can see the bulge in the pocket, really vivid and realistic. Next, should we generate a video based on the picture? Why not — I need it. It can create a five-second video from each picture, really smooth and natural, basically imitating a runway walk with natural habits: touching the hair, glancing in different directions. It mixes them into a 15-second clip. Based on that, the user can do some simple editing and post it on whatever social platforms they like. Next, do we need detail pictures? It gives six different details; we can see this style is popular on many e-commerce platforms. It knew it's a picture of clothing, so it also generates a size table, which you can edit yourself later.
As e-commerce merchants, sometimes we need renditions of children's wear. Finding child models costs dearly, and scheduling can be inflexible because children may be uncomfortable on set. Now, with RoboNeo, we can fix this within seconds. We ask the AI: please apply our clothes onto a child. After analysis, RoboNeo asks us what kind of background we need. Suppose we want a street shot. Again, generation takes time, so here are the results: a cute child posing for a street-shot photo wearing our clothes. Even details like the child's bag make it more realistic. The AI follows up, asking if we need another scenario, at home or near the beach. We choose at home.
We now have a slightly older child model wearing our clothes, with a handbag, at home. We can see that RoboNeo relies on in-house or third-party image generators for this. Now we have a product photo of an essence; we will generate a model holding the essence in her hands. RoboNeo asks us what kind of hands we need. Because my demands are quite simple, I give a simple follow-up instruction. These are the images RoboNeo generated: whether it is the gesture of the hand or the background, they are all natural and smooth. Perhaps I think the background is too boring, and I would like it to be a lavender farm. Now you see the result. The whole process took only a couple of seconds; traditionally, it could take a designer hours just to make this image. Maybe I would like even more.
I would like the image to become a video. This is within RoboNeo's capabilities. We can see the wind blowing over the lavender farm, and the hands slightly shaking while holding the essence bottle, all of which make the video more realistic and vivid. Traditionally, this would take hours and cost dearly; on our platform, RoboNeo can do it within seconds with a few clicks. Those are the functions of the AI agent. We also have AI generation based on pets, accessories, and so on. For instance, if we are a pet-accessories producer or merchant, RoboNeo would still contribute greatly, because if we used real pets for the shoot, it would [audio distortion]. On a real e-commerce platform, after designing these sets of images, we still need the eventual poster. Of course, we could make the posters one by one, but that would be time-consuming.
We have a function that helps e-commerce merchants produce many posters simultaneously. We open e-commerce materials. Suppose we need a three-by-four e-commerce poster. On the left, there are templates; different products have different templates. The clothes we've just demonstrated are an autumn style. Based on the template, we can use our own products and add them all at once. We can add some basic information and descriptions — for instance, the prices and the styles: the classic denim, the black leather, the cute pink, and lastly, the cool-girl dark green. Suppose we sell it at a low price: HKD 299 for all colors. In the following step, we see the AI agent panel on the left.
Suppose we would like the prices shown here, so we match the labels, and we add a description here of what the clothes are. Now the AI arranges the layouts and all the text. Once we click, we get four pages. It takes some time, but it is still much shorter than the conventional process. These are the four generated images, and we can still make adjustments to the results. Looking at the eventual results: based on AI generation, the e-commerce merchants only need to make minimal changes to satisfy their demands. The overall process is not [audio distortion]. Aside from this function, there is another I would like to demonstrate: the AI live-streaming marketing video. We can find it on the homepage.
The bright side is that we only need to upload three videos. These are the videos I pre-generated with the AI agent earlier today. The AI agent can distinguish what elements you have in these respective videos and make recommendations — for instance, once it sees the denim jacket, it can recommend styles to customers, both in Chinese and in English. Because generation takes time, we have some pre-generated live-streaming e-commerce videos. We have not edited any of them; these are raw videos, and e-commerce merchants can make further adjustments to them — the subtitles, the BGM, and so on. This is an eventual result. For any defects, we can make further minute adjustments. We continuously emphasize that AI cannot make the final product 100% for you.
You still need to make the final adjustments; relying solely on AI generation without any final touches is far from enough. Aside from these promotion and marketing videos, the final function is producing product images in batches. These are the four images we've just produced. On the left again, there are different templates, and one of them is for clothes. We can produce all of them in [audio distortion]. This is an in-app purchase that users need to buy. The eventual results are based on the different images, and it gives me four choices. As an e-commerce merchant, this is very convenient: I can choose any scenario I want, make some final touches, and upload it to my platform.
These AI commercial product images were released at the end of September this year onto the platforms of many merchants, and subscribers and users of our apps will gradually see these newly released functions. Those were the demos of XDesign and RoboNeo. Now I would like to move on to KaiPai. KaiPai is mainly a mobile product, so I will use my mobile phone for the demo. Over the past two years, KaiPai grew rapidly. Based on my own experience, I have felt firsthand that KaiPai combines all the functions a user needs into a single workflow; from a user perspective, the overall shooting and editing functions are comprehensive. Now suppose we are a blogger with millions of fans, and we would like to demonstrate the talking-video functions of KaiPai.
Traditionally, making a talking video would take a long time — at least an hour or two just to write the script, and shooting would take even longer. We open the app and type in our topic: please share with my fans about the product KaiPai. I can choose the word count, such as 250 or 500 words; today, because of the time limit, I will set it at 150 words. A unique function is that KaiPai can tailor the script to the blogger's own posts and uploads. For instance, if the blogger is a social media blogger mainly focused on sharing good software with users, the script will be tailored to those demands. Suppose I am that kind of social media blogger: I save my preferences and my word count, and I generate my script.
Because generation takes time, I pre-generated a script with the AI. Now we can move on. This is the generated script. At the bottom right, we can see that KaiPai is now guiding the user to actually shoot the video. This is my phone doing the demo; at the top is the teleprompter. Suppose you need more makeup: I can add makeup using filters — for instance, changes to my nose, my eyes, my lips, and so on. I can brighten my face or my lipstick effect, for instance. The AI also keeps records: if online celebrities need to shoot multiple videos in a day, once they open the AI function, they have predetermined templates of lipstick colors and so on, so they don't need to readjust every time.
After adding the makeup, I think my background is too monotonous — it's only a wooden plank. I can change my background to an open window, an office background, and so on. Now we use this open-window background with the tree behind. Once I begin recording, I don't need to memorize the script; I simply read off the prompter. Notice that as I read, the prompter follows my voice: it adjusts to the speed of my speech and guides me. Now we begin: "Do you think making videos is hard?" If we misspeak a sentence, we can simply withdraw the last sentence and redo it, without doing anything else. You can see the subtitle advancing according to your pronunciation. That is the end of the recording. Next, we click the Finish button. Complete.
With this function, we can find the parts we spoke wrongly — we just check and delete them. This is the part we just spoke wrong. If some parts sound unnatural, you can redo them. Due to the time limit, we move on. We have a very popular internet template. It used to take two hours to make such a video; this kind of template unleashes a lot of productivity for bloggers. We see this advanced bilingual template. Notice, I just tap the Template button, and the subtitle changes its style according to the template; the bilingual subtitles are completed automatically. Next, we can add a very interesting opening. We know that on TikTok and on RedNote, the first three seconds are called the Golden Three Seconds: they decide whether a user will go to your main page and become your follower. We can achieve some interesting effects here.
We established some opening templates. We have a film-style opening — this is a paid function. If you are not satisfied with one, you can change to another opening template, and you can have some creative ones, like a duplicate of yourself at the bottom of the screen. You can try it by yourself. Next, how to make a cover for the video. In the past, you would select a beautiful frame or even just screenshot it, or maybe turn to other applications to make a cover. Now it's one stop. Look at the effect. For the first option, I can directly change the text. I personally like the AI option more: the AI uses my video frames to help me design some popular cover styles. The effects are really good — for example, I think the third one is good, so I choose it.
You might think the speed is a little slow, but a simple video is generated within three minutes, and I can just click the button to see it. This is why KaiPai grew fast over the past two years: it solves many demands for users. What about vloggers with millions of fans who record every day? You can make a cyber version of yourself: design your own digital avatar. It will imitate the way you speak — your pronunciation, your gaze, and your movements — learned through deep learning. You need to film a very short video for it to learn from, and you need to give authorization for security reasons. This is a premium feature: you pay extra for it beyond the subscription — I think it is HKD 250.
Due to the time limit, I generated my own digital avatar yesterday from a video I filmed at home. It does the lip syncing: you can see the lip movements follow closely with the actual pronunciation. I click the text-reading function, and we can adjust the voice. I prefer to use my own voice; we can also record a small sample so it recognizes our tone. It's my voice, but I didn't actually speak then. You can see the process — I generated it yesterday. You can listen, go to edit, and click the button. I also made a complete sample to demonstrate; even the selection of BGM is included in the template. The overall effect is natural and can save a lot of time for our vloggers, so they can make more videos every day. Last month, we also launched a new function.
It is called Intelligent Mixing and Editing. Small shops need advertisement videos to attract customers. It may not be just the owner talking; they will film a lot of the environment and the product, adding subtitles, product details, and so on. There are other use cases too: for example, during the national holiday, you take a lot of photos, and you can turn them into a video with one click. Yesterday, I filmed several short videos for today's demonstration. I still choose my own voice. AI can help you generate the subtitles and the script, but I wrote this one yesterday for the investor day. Now we just click. In this process, it first analyzes what we have, which takes some time. It identifies what is contained in our video: the glass gate, the front desk, the receptionist, the elevator.
There is the movement of walking upstairs. I want to show you KaiPai's thinking process. In a modern building, we have a smart gate, interior designs, and furnishings. I already prepared a sample, so you can see the overall effect. If you think it's okay, you save it; after you download, this is the video you will get. Would you like to try it yourself? It analyzes what you have in your videos and photos. This is called AI Intelligent Mixing. That concludes the introduction to KaiPai. Next, let's welcome Janet to introduce RoboNeo.
Thanks, Irene. Next, I will introduce RoboNeo. In its first month after launch, it attracted millions of users. Beyond being a unique, standalone product, it is also integrated into our other products as specialized agents.
Besides what Irene demonstrated in the e-commerce scenario, RoboNeo can also be accessed from the Meitu app homepage. Its original positioning was a productivity tool, but after launch we found that many users use it to make very fun emojis and stickers and share them on social platforms such as Xiaohongshu — going well beyond the productivity scenarios. Let's look at some creation scenarios. First, we use it for brand design. I guess most of the investors present did not major in design and may not be familiar with brand design, so before the demonstration, let me give a quick overview. It includes two parts: the basic system and the applications. The basic system covers the logo, fonts, and colors; the applications are more complex. For example, suppose you are a visual designer who wants a brand design for a coffee brand. You can see the filter, the paper, and the coffee bean, all in a consistent color tone.
The packaging, the advertisement banners, the menu, the apron, and the social media posters: the application part extends the style of the basic part. By the way, the previous video was generated with KaiPai in less than one hour at home, and I am still a newbie at video making. The example you see now was just generated by RoboNeo. As an agent product, its thinking process is actually quite fascinating. I first asked it to match the style. This is industry know-how: if you did not major in design, you may not be aware that if the overall style is wrong from the beginning, the design will not satisfy the client's demands. We can see that RoboNeo provided three different styles, and I picked the images I liked among them.
Then, based on my preferences, RoboNeo makes some analysis — for instance, of the patterns, the color matching, etc. Based on that analysis, RoboNeo then provides design solutions. We can see the logo and the fundamental parts we mentioned earlier: the fonts, the matching colors, and the logos. The first is the combination of a cup and a star, and the second is a filter with coffee beans — these are the logos. The overall palette is the light brown of coffee beans. Aside from the fundamental parts, there are also application extensions: for instance, the packaging, the interior design of stores, the menus, etc. It generates many e-commerce materials for us simultaneously, and all the produced materials are designed based on the styles generated at the beginning, from the menus to the interior design of the shops.
All the colors use light brown and bluish green. We can see that what RoboNeo produces is a complete set of brand designs, not scattered, unrelated images: on the left are the fundamental designs, extending eventually to the packaging, merchandise, and so on. There is a clear logic from basics to extensions. It also understands layers: all the different layers can be moved independently — in this image, for instance, the combination of the filter and the coffee bean, and the words "Nova Coffee." Many further functions are being added. General GenAI tools cannot realize these functions at present, but there are many things RoboNeo already can do. I don't know if you have browsed the AI cat photos and similar AI-generated themes on social media — they are becoming popular very fast.
These kinds of videos can be made with RoboNeo. For instance, here I first upload a photo of a cat, and RoboNeo transforms the style into a plush doll. We can see that the black ears and the white belly have been replicated in a fluffy doll. Then I ask RoboNeo to create a script based on this fluffy cat doll. In about 20 minutes, it generated a four-scene story of a cat at home. The plot revolves around staying at home — this kind of theme is quite popular on social media; perhaps you've seen videos like these of cats staying home and enjoying themselves. As a follow-up, I ask RoboNeo to produce the video. We can see that the image of the cat is quite consistent across all the different scenes.
After the video sections were generated, I asked it to compile them into one video. You can see the resulting video is quite stable: the motion does not produce flaws that defy the laws of physics, and the fluffiness of the cat has been well retained. Once we have the 20 seconds, we can also add a BGM, and this is the final result from beginning to end. With some final touches, this kind of short video has a complete plot and can be published. Now let's look at some innovative ways users are using RoboNeo — these are the changes brought by AI. Take retouching, for instance: when we retouch images — smoothing wrinkles, adding abs, etc. — we will not say that we retouched them.
Sometimes we will say they are natural, but we know deep down that they are retouched. A decade or even five years ago there were some technological innovations, but now, with the emergence of AI, these AI-powered functions are making images more convincing and easier to [audio distortion]. AI helps users disseminate these images and videos more conveniently. The functions listed on the slide here are attractive. For instance, suppose you are a pet owner and you see a video like this of a cat dancing: you will think it's very cute and want to try it with your own pet. We have different dance templates; you don't even need prompts — you only need to upload a photo and the result is readily available.
Another example: for male users who do not like beautification, there are scribble-artist styles. On Douyin, for instance, during Chinese Valentine's Day, there were Valentine's videos made for girlfriends and boyfriends, and all the movements are quite natural. There are also parents using images of their children to dress them up as top students, for example. There are retouching functions too: you don't need to click through different buttons — you simply enter a short prompt and get a retouched image with a great vibe. In short, RoboNeo opens up a space of creative imagination for many users. There is a lot of user-generated content that users have shared voluntarily, and we encourage you to explore RoboNeo for yourselves.
RoboNeo's paid service currently costs around RMB 68 per month in China and around $15 per month overseas, with pricing varying by market. That concludes the RoboNeo demo.
That concludes all of today's demonstrations. Before we go into the second session, let's take a short break and meet again shortly. Now we are moving into the Q&A session with the management. Before we start, I would like to thank you again for taking the time to attend today's investor day; I hope the discussion brings you valuable takeaways. One brief note: because we are in the period between the half-year results announcement and the annual results announcement, we will not disclose further operating statistics, but you are welcome to ask about the positioning of our products and the company, and related insights. Thank you for your understanding and support. If you have any questions, please raise your hand and our staff will bring you a microphone. The management present today: our Chairman and CEO, Mr. Wu Xinhong; our CFO, Mr. Yan Jinliang, Gary; and our Chief Product Officer, Mr. Chen Jieyi.
Question: Recently, we saw ChatGPT open up to third-party apps. Do you see possibilities for further integrating with models and AI platforms? ChatGPT may become the app store of the new era — in the first batch of integrations we saw Canva integrated into ChatGPT. This looks like a future direction for applications. How will you pursue business cooperation with AI agents and assistants?
In 2023, with the rise of AI, the question was: what are the boundaries between AI models and us? Through 2024 and the first half of 2025, it was actually not a big concern — AI empowerment offered the bigger picture. But now the discussion has restarted, especially since the launch of Nano Banana, and we have also discussed it with Gary. The progress of AI capabilities, and where their boundary with application companies lies, will be a pain point for the market. Can you elaborate on this point?
The management answers: Let me go first. Every time we see a drastic increase in AI capability, the question of the relationship between model and application arises — whether it is synchronization, replication, replacement, or empowerment. The new model, Nano Banana, has strong generation capability. From our own perspective, we are relatively positive: our development and the models' development are in fact synchronized.
We rank at the top of the App Store in many European countries. We dig into the latest capabilities of each model: how to apply them to our products and services, and how to improve their monetization. For some vertical scenarios, a general model cannot easily cater to users' needs or offer professional, industry-level services. Take XDesign as an example: we dig into the e-commerce scenario, and we will extend into other vertical scenarios. The services those scenarios require are not something a general model can cover on its own. Overall, we have a relatively open ecosystem and we are relatively positive: as AI models grow, we grow alongside them, with a clear division of labor between us.
As a supplement: suppose 5-10 years later the models become ubiquitous, covering almost all the scenarios we need. The models would then be general-purpose products, mainly targeting general users — users without much expertise who want to explore many different areas. Take imaging as an example: in Photoshop and in design work there are many roles, and an experienced designer knows which effects to apply across all the visual materials. Even when the models become more powerful, there will still be opportunities for products like Meitu, KaiPai, etc., because in specific scenarios the general AIs will not be as efficient as the apps designed for those scenarios: the dedicated apps produce better results because they are more customized to the desired effects.
Even among general tools like Photoshop, there are still detailed differences, and I believe the same applies here. Each person will have their own answer to these questions. Thank you for your question.
Question: My question is about competition. I've been a user of Meitu for many years, and I now see that RoboNeo and similar AI products can do similar things, while many of these functions in the Meitu app are paid services. For Meitu users who do not have that much demand, how do you see the competition between RoboNeo and other AI products and the Meitu app? Will Meitu change its strategy for these paid services?
For us, the emphasis is on how to better guide our users to the various paid services in Meitu without charging too much.
If you're looking for free services, the general AIs can perform them relatively well. The question is how your product serves different tiers of needs; as with Photoshop, many people have only light demands. Our strategic goal is to acquire new customers while retaining others for paid services. In fact, RoboNeo and other AI products have been around for some time already, and from past to present our MAU growth has not been incompatible with the growth of AI. In many scenarios, beyond simply being able to complete a task, quality and efficiency matter more. Take Wink's quality restoration, for instance: most AI can do quality restoration, but when using Wink, users feel that Wink is the best, or at least one of the best, at it.
That's what compels users to use Wink instead of a general AI. When you want the best results, you use Meitu and other dedicated apps. That is what we think Wink, Meitu, and our other vertical products can provide. At the end of the day, it still comes down to quality and efficiency.
My follow-up question: overseas, [audio distortion] is becoming more popular and ranking first in app stores in many countries, and Nano Banana, the Google model, is also gaining popularity. Could you predict the future user demand for the AI group photo functions and similar features? What are their potential demands? Thank you.
Let me add to that question.
One thing is inevitable: as large language models and AI models become more powerful, they will come to cover some retouching services. But for any user in a real-life scenario, the inherent, deep-level demands differ, or at least vary — for instance, they need different makeup styles. In many cases, the rationale is to use the complete set of services provided by [audio distortion]. Of course, AI can cover many superficial or surface-level demands, but for the deep [audio distortion], I do not think AI models pose a great threat to our development strategy. As for the AI group photo and the AI functions you mentioned, predicting which will become a hit is hard; it also depends on the timing, the geography, and so on.
An example: if you look at the software's interface tabs, you can see many windows with many templates. In the past, some templates were quite good but were never big hits. Whether a template becomes popular also depends on whether users can make full use of it and believe it is useful. Even a template that is not as good as others can become more popular if it appeals to users' demands. Going forward, I believe we need to do two things: first, ensure we are capable of marketing ourselves so our features become more popular; second, build more brand collaborations with content producers.
Three to four years ago, Meitu had a service where you could make short videos from just a single image. For instance, with a selfie you could [audio distortion] make many adjustments — you could even make yourself disappear — and post it to your Moments for fun interactions. In many cases, you can't simply release a template and leave it there; you need content producers to show how a specific template can be used. For the AI figurines you mentioned, we have similar functions, and content producers can make them more popular, whether in China or in overseas markets. They help us popularize these products, including, for instance, the RoboNeo emojis we mentioned earlier.
Question: Thank you to the management. Recently I noticed that Meitu launched an interesting function: photo editing together with friends. In the past, when girls took group photos, they edited them one by one; now it works like coordinated online collaboration. What is Meitu's methodology for grasping users' pain points ahead of other apps? In overseas markets, you have different products for different regions — how do you find the pain points of different people in different regions, and how do you address them? And what is the mid-to-long-term goal?
The management answers: For developing new functions, we have a [audio distortion]. Beyond product operations, how do we validate our ideas? With the explosion of AI capability, one internal direction is to engage more and more users in co-creating with us.
For example, with the launch of RoboNeo, we brought in outside creators and even designers. They tell us their needs; once we deliver, they share the results on social media and we gain reputation. In turn, the improved statistics tell us which features we can enlarge or strengthen, and we encourage outside creators and designers to help us develop functions better. You can see that from this year on, our ability to catch hot topics has become much stronger. We are relatively open to outside creators and designers, and this mode has produced many results. As for the other question, about market share: I haven't checked the very latest figures, but across all our products it is steadily above 10%, and over the past two quarters it has increased, especially in the European and American markets, where we have been pushing harder recently.
Around August and September, we internally redefined our direction and put in more effort. The results show — for example, topping the App Store in many European countries.
I would also like to add. We see the traction of the Meitu app in the European market. Why do consumers choose Meitu? Other apps may simply [audio distortion]. Meitu is designed to serve the entire international community; it has 17 years of [audio distortion] and many services and functions. In the past, these functions were isolated — you needed different entry points to use each of them independently. Now AI is reinvigorating all of these functions. We have said that we could virtually reinvent the Meitu app through generative AI: AI agents can connect the app's different functions to better serve customized demands. For the Meitu app, this is a significant advantage — it is an all-in-one product.
In regions where both the demand for such products and the standards are high — Europe, for instance — it stands out compared with other products. It also means we can capture region-specific trends and topics in a timely manner. We also have some products that have been less popular over the long run; those we would, for instance, phase out, in photo editing and video editing — editing functions as well as Wink, etc. Overall, I believe it is the [audio distortion]. Another thing we should [audio distortion]. That is all I have to say. Why, again, is the Meitu app all-in-one with so many varieties? In photography, products such as BeautyCam are becoming more popular with consumers, and Wink for video editing is also becoming more trendy.
A core point: once a product in the imaging or video-editing area has complete functions, it can deploy AI agents flexibly across almost all workflows and satisfy consumer demands. This is a big advantage for any company, and a source of operational competitiveness. These are the important points for now.
Next question.
Question: I'm an investor. How should we see the positioning of your products in terms of their functions? Can we say that many brands and products have similar functions, but yours are more refined and more vertical — is this your direction and methodology? In the era of AI, functions themselves may become less distinguishable, so will you focus on more verticals?
The management answers: First, we divide this into two capabilities: the model capability, and the traditional client-app or computational design capability. These two parts work in sync. We cannot solve all needs with AI alone; we integrate many other capabilities alongside the AI capability and let them coordinate. This is diversified competition: when similar models are widespread and services are of similar quality, coordinating the model with other functions yields greater and better output.
This is in fact what we have done for the past 17 years. Besides, even with open-source models — for example our collaboration with Alibaba's Tongyi Qianwen (Qwen) models, which many other companies also use — we do a lot of enhancement ourselves. Even using the same open-source model, there is still a large space for differentiation; if we only used the model as provided to everyone, we would not have diversified functions. Question: My second question concerns legal and regulatory security of these functions — for example, the privacy of using certain photos or videos.
Do we have discussions within our industry, and what's your point of view?
The management answers: In China, we have the relevant approvals from the ministries and commissions governing this area, and we strictly comply with the regulations.
We were also among the first batch of companies to participate in this program, and we have shared our experience with the government; we stay in close communication with the regulatory departments. Overseas, for the major regions and markets, we maintain strict compliance. The European Union, for example, has perhaps the strictest regime. From a security perspective, we have more complex considerations and complex [audio distortion].
Question: I have two questions, on competition barriers and on commercialization. First, regarding the application of AI models: you said the collaboration with Qwen gives you competitive and technical advantages, but other platforms will also be using Qwen. When collaborating with Alibaba, what are your advantages? Using large AI models is an advantage, but it is less clear once others use the same model. Do you need to find more of an edge in competition? There are many strategic collaborations in AI; ultimately, can you build more advantage in terms of your technological edge?
From past to present, whether in design, designers, or coordination, the position of your products has become more consolidated and well received. From an investor's view, we would like to see more returns from greater competitive advantages. For instance, once AI achieves a major breakthrough, we need more certainty about what AI can or cannot do for you.
The management answers: This is a question we've been asking ourselves for a long time as well. In fact, even before the emergence of GenAI, we faced similar issues over a long period — for instance, with operating systems and phone makers. Phone makers moved into the imaging area; in theory, they were eating into our space for revenues and [audio distortion]. When could we ever say there will be no external competition?
I don't think that is realistic. From past to present, competition has only become fiercer, yet our capabilities and our per-user income have also been on the rise. I believe our competitiveness remains, and this is sustainable for us. We are open, and we are willing to embrace such challenges and tackle them head-on. We need these difficulties to keep reminding us not to be complacent. I hope that answers your question.
My second question is about commercialization. We see that for many products, beyond the premium tier, there is usage based on Meitu credits, and there are higher-priced premium tiers. Regarding Meitu's premium services in the future: will there be multiple layers of premium services?
What is the room and potential for these high-end premium tiers? In the short, mid, and long run, will they contribute much of the revenue? Or will this change as the tokens consumed by AI increase?
Let me share the current situation. In XDesign, for instance, the annual ARPU is around HKD 300. Yet over the last two months or so, some users have spent around HKD 20,000 on in-app purchases in Meitu. So even under the current ecosystem, in some scenarios the ARPU can be 10x the average. For some products and scenarios the reason is simple: our services help those users save money.
In productivity tools, we are aware that we are not pushing the ARPU level aggressively at present; there is room for improvement. We are maintaining the status quo while making adjustments for the future. For the products for leisure, we are not currently aiming for an ever-higher ARPU. With many ID photos now being made in our apps, we see consumer habits changing as more consumers use them. Specific programs or services aimed at particular consumer demands could cost more or fewer tokens, but I believe the overall dynamics will remain relatively the same, at least in the short run. As for ID photos: if you have used the ID photo function in Meitu, you will see it has no correlation with the premium tier whatsoever — it actually helps you pay less for the same ID photos. In the imaging area, users' demands are diverse. Over the next 5-10 years, many niche yet consistent demands can be further served with better offerings. ID photos are one such potential, and there could be similar products with potential in the future.
I would also like to add that in recent years I have been thinking over the same question you raised.
In the internet industry, in many scenarios, from a technological perspective, once one product's technology is adopted by another, the first product risks being replaced. This issue has in fact persisted across industries for a very long time. Take Lululemon: any clothing factory can produce garments of the same style that work the same way, yet consumers still choose one brand over another. We must always bear in mind the brand's image and influence; I believe this is a significant component of a corporation's competitiveness. If AI is adopted by corporations that are not specifically focused on imaging, their results will not be as good as Meitu's.
I believe that is what builds our brand image, helping our brand earn a better reputation and a stronger technological edge in the industry. Internally, of course, we would [audio distortion]. That is our view on the matter.
Question: It's a rare opportunity to have the three executives here. I'm a research analyst, and we are fans of Meitu Inc. — congratulations on the company's anniversary. We are happy to see you delve into products with high productivity and impact, and we research companies of this kind. We would like to look at one problem over the long run. From our discussions, many of us are concerned about the evolution of AI. Last time, we talked about the last mile — this is Meitu's advantage.
The influence of Nano Banana may be mostly a story, because Meitu has strong capabilities in aesthetics and engineering and a very strong central team, so it can create many popular products welcomed by users. Today, my question is not only for Meitu but for tool-oriented companies generally: the shift from subscription to one-time purchases or fragmented payment plans. We have seen many overseas companies doing so. Will this influence Meitu Inc.?
The management answers: I think subscription will remain the mainstream for us, because what we offer customers is not limited to a few functions — we serve users over the long run across photo, video, and design. For example — perhaps not a perfect analogy — even if I subscribe to a platform just for one TV series, I am paying for the platform as a whole.
I hope it can keep serving me in the future, maybe even my future leisure needs. I think this is the core philosophy for Meitu Inc., whether for products for leisure or productivity tools. That said, I also see room for single purchases; they suit broader scenes and scenarios. At least for the next five years, subscription will be the mainstream, while one-time or single purchases may increase their proportion of our payment mix.
Next question.
Question: Specifically, overseas, Meitu Inc.'s market share is continually growing, while domestically, revenue growth is slowing. What is the mix of domestic revenue? Does it mean you have reached some peak? The domestic growth rate is trending down while the global growth rate is relatively stable — can you explain?
The management answers: we have not observed a downward trend.
Could there be something wrong with that observation? Is that possible? The question concerns the growth from July to September.
The answer: especially for the Chinese mainland, I'm not sure what statistics that conclusion is drawn from.
Question: Do you mean that the revenue growth rate in the Chinese mainland is also rising? The regulations [audio distortion]. Question: Because I see that many times [audio distortion]. I would like to ask: what are the latest paid-feature and consumer data for XDesign? For XDesign and KaiPai, what are the development strategies domestically and abroad, and what are the potential targets for these two apps in each market?
The specific goals will be addressed by Gary; I would like to speak about the product. XDesign has been developed simultaneously, both domestically and abroad. Not long ago, we began exploring the demand from offline stores.
This was mainly because, after spending some time abroad, we became really interested in e-commerce materials. The cost of designing these materials abroad is high — sometimes even hundreds of dollars. Opening a store there is quite different: you need materials from menus to interior layouts, etc. In China, you only need to find a space and the right personnel and you can make them, whereas abroad the price would be HKD 20 to [audio distortion]. We see this great demand abroad, and few players have yet paid attention to it. This is what XDesign is currently doing.
To emphasize: e-commerce materials will remain one of our main focuses online, but we will also focus on offline stores — helping clients reduce the cost of producing materials, design them faster, and refresh their materials and categories more frequently. Take coffee shops or galleries, for instance: in the past, each refresh often cost around $100-$200, and the existing tools could not satisfy the demand. Now, with XDesign and AI, we can make this much more convenient. This is a huge opportunity for [audio distortion]; e-commerce materials touch thousands of stores and corporations, and we can serve them much better. As for KaiPai: it has a major advantage both domestically and abroad. Domestically, it focuses on talking videos and has made them a big hit.
Before KaiPai, there were many products with similar functions, for instance adding subtitles and doing quick edits of subtitles. Because these functions have been over [audio distortion] KaiPai. KaiPai focuses on this niche demand. On Douyin and Little Red Book, for instance, many such talking videos are being produced. As an example, real estate intermediaries selling houses used to show a floor plan with a few captions; now we see many intermediaries using talking videos to present the properties. The same goes for selling insurance: in the past, insurance promotion videos resembled typical ads, but now it is different. Many videos that used to be composed mainly of images and text to sell services have now shifted to talking videos. This benefits these businesses and enhances their competitiveness.
This is Meitu's unique advantage in China. Overseas, there is a competitor called [KBTrust]. Initially, we did not notice this competitor, but now we are becoming more aware of it. That overseas service provider is also focusing on talking videos. I'm excited about this, and about the potential exchanges and cooperation. We are currently focusing on the North American market, etc., because it has great potential. That is one way to develop. The second pathway is coming up with brand-new strategies. Overseas, we have a positive growth rate, and we are trying out new things. From a long-term perspective, we hope our subscribers can be more diversified, complementing rather than competing with each other in a harmful way. As for the goals, we have not been very clear about the development direction yet. For XDesign, I personally hope it can raise its market value.
Whether through its independent pricing, or by finding more products that others can use and more computing points, we hope the more the better. For instance, when we collaborate, in scenarios such as using product images, whether for service providers or consumers, we have similar demands. We believe we should still make our quality the best of the best. Due to time limitations, we will take the last question.
Last question. I'm Jerry from a Hong Kong capital firm. For the first question: earlier, a question was asked about how to integrate agents overseas. For the agent in the C-end app, what is the schedule and the future plan? Second, strategically, you mentioned what matters in the era of AI. For models, will you rely on others, or insist on independent development, even if you can [audio distortion]?
If we purely rely on others, we may even bury ourselves. This includes operating system and app store companies, and also companies doing application development. This is an eventual and definite risk across different eras. These platforms may very well try to integrate those models or functions and gradually turn them into their own. It's actually not impossible. But there are many vertical demands they can never serve; they need third parties.
Sometimes they need to do a demo or a sample with their own functions or products. Actually, we supplement each other. As for the plan for the agent in the C-end app, the management's answer covers two phases. In the first phase, we integrate the agent into the app. If you use it now, you can see this is just an entry point, and you can jump between different functions; this includes some agreements. In the second phase, the core is coordinated editing. The second aspect is co-creation, or ecology. The agent can withdraw, or you can invoke the agent halfway through using a traditional function. The two can be fully layered: first, you can redo or undo; second, you can generate formulas, combining the agent with traditional generators or functions, and share them with others to build a co-creation ecology. Future creativity comes from content makers.
We have opened up this mechanism, and it is already happening. In August, we promoted this function because content creators achieved some effects that went viral on the internet. We can combine this with our main product and see some chemistry. The main app is designed mostly by us, but I believe in the future creators will develop their own creations and spread them on the internet. This is co-creation, the second phase. Thank you for your support, investors. We hope to have more collaboration in the future. I need to hurry to another meeting. As for co-creation, within our company we did it on our own; the answer is that we hope more outside creators can participate in this process.
One last question.
My question is about XDesign, on the demand side, which I'm not quite sure about. My straightforward impression is that in the past, e-commerce merchants would ask third-party designers to help them produce these kinds of materials. Now, for 2B products, how much of the solution is generated by advanced AI technology? As these emerging solutions appear, existing actors are also intensifying competition in the sector. Who do you think are your potential competitors, and how will they influence your market share? From an investor's perspective, I would like to understand the scale of the competition and hear the management's answers.
I would like to answer step by step. First, I hope we will not think of XDesign as a purely 2B product.
It is not a purely 2B product, because from our perspective, and from almost any consumer's perspective, XDesign sits in between 2B and 2C. It helps expert designers as well. Its users include, for instance, content creators, buyers, and all kinds of operators. Their demand is that they need to design materials, but design is not their expertise, yet they still need to sell their products using these designs as a form of marketing. By and large, these groups are not big corporations; they are small teams of 5-10 people, for instance. In the past, they had to outsource these services to external parties, which may have cost hundreds of yuan, and the external designers would produce the materials for them. Now they have a substitute: XDesign.
For the market overall, I believe the domestic competition is quite fierce, because if we look at revenue alone, similar products' R2 level is quite high. Domestically, we are not putting that much attention there because the competition is already quite fierce. Overseas, it is slightly different: it is an emerging competition, and for some products there are more and more GenAI startup teams. In fact, looking at the competitive landscape, in many scenarios we need to prepare in advance, prepare for the future. Not long ago in the U.S., when we were communicating with Adobe, they proposed an inspiring idea: the 1.0 products, such as Adobe's, were already a prototype for the future, while 2.0 products such as Canva were replicating the functions of Adobe.
The issue was that those replicas, the templates, largely relied on human labor and carried marginal costs. With Canva, when you edit a menu or a poster, for example, you will see some minor flaws after editing; when you switch to another poster, you will find the templates are not that compatible with each other. This is a conundrum for these 2.0 products. In the AI era, most existing templates will be replaced by AI-generated ones. As many materials become AI-generated, there will be endless creation and endless imagination. This is one thing we discovered with agents: in XDesign, once the agent was plugged in, we saw users' demands become more diversified. The conventional limits of design used to limit users' demands as well.
For instance, in the past, for product images we would simply put the product in the middle and add a template. Once AI is added, we can have many fonts and many adjustments to the image. For instance, we might want the product at the end of a table, with the sun penetrating the windows and shining on it. This kind of lighting can only be described in words, yet it is a scenario we desperately need, and it can only be generated by artificial intelligence. Staging it in reality, or building such a template from scratch, would be extremely difficult. In the AI era, AI elevates all of our starting points. That is why we are seeing fiercer competition.
I believe this is not only a challenge but also a major opportunity for designers in the future.
A follow-up question: in the 3.0, or AI, era, what will be the future scale and magnitude of these kinds of technological deployments? We would like to hear your estimates of the market potential.
Numbers aside, firstly, to imagine the scale of this: in fact, many e-commerce platforms have been trying one thing, which is using templates. The main issue was not incompatibility; it was sheer diversity. For instance, some merchants like a red background, or a product has selling points and other appealing features that were not present in the template. What matters is not the abilities of an agent or a human designer as such, but rather whether the producer can generate enough selling points to satisfy consumers' demands.
We produce these materials dynamically and automatically, changing them according to the scenario. This brings us great potential, as we see today that AI can adapt to different scenarios. Compared with companies without AI, these opportunities are easier and more convenient for us to monetize. For instance, on Amazon over the past 20 years, we have seen more merchants adding colored boxes or people, and some adding more details to their product pages, such as background images. To generate these background images, you either ask a human designer or use AI. In the current era, you can also use outsourcing and templates; both can support your production. In the future, if you would like to use ChatGPT or similar AI, that is where XDesign could have great potential. This is hard to realize, but once we do, the opportunities are great.
Thank you for your participation and questions. Due to time limits, our discussion ends here. That is the end of today's session. Thank you again.