XPeng Inc. (HKG:9868)

AI Day 2024

May 24, 2024

Speaker 1

Hi, everyone. Whether you're joining us online or in person, friends in the media, our colleagues, ladies and gentlemen, good evening. I'm thrilled to welcome each and every one of you to the Xpeng AI Day press conference. Today is a very special day. First of all, today is Xiaoman, a traditional solar term in China, and people often say that Xiaoman, or "lesser fullness," is better than anything, which implies a very good state of mind. As we all know, there is a popular saying that the last decade was the era of new energy vehicles. This year marks the tenth anniversary of Xpeng Motors. From a certain perspective, I'm lucky that Xpeng, together with many new car makers in China, has led a decade of change in smart cars and new energy vehicles in China.

Five years ago, when discussing the adoption of new energy vehicles in China over the coming 5 years, no one dared to predict a penetration rate of over 30%; the most optimistic projection was just 25%. Over the past decade, however, we have witnessed a significant shift in the new energy vehicle market. Previously, market penetration for new energy vehicles was only in the single digits; now peak penetration exceeds 50%, which was almost unimaginable back then. Therefore, the next decade is likely to be the era of intelligent driving, similar to the transformation brought about by electric vehicles that many doubted a decade ago. I'm confident that the rise of smart vehicles will happen even more rapidly than the adoption of electric vehicles. Now, as I mentioned, today is a unique day.

We will discuss how AI technology can profoundly change the automaking industry, and we will introduce a new release to that end. This marks an important first step towards the era of intelligent cars. This transition will lead the automotive industry from the traditional functional-car era to a new era of fully intelligent, AI-powered cars at a more efficient and disruptive pace. It is our autonomous driving large model, which will be pushed to all eligible users through an OTA update today. Many people struggle to grasp how large models can impact automaking or autonomous driving, so I'll provide a brief explanation. Over the past year, there has been increased awareness of OpenAI and ChatGPT, which fall into the category of large language models, or LLMs. While large language models are not directly related to autonomous driving, they share a common underlying approach.

They all utilize the Transformer architecture to comprehensively perceive the world from various angles in an end-to-end manner. Previously, autonomous driving was compartmentalized into separate stages, such as perception, positioning, planning, and control, from a technical standpoint. However, if these stages are not interconnected, issues can arise. For example, when a vehicle encounters a situation where it needs to turn right, brake, or bypass a parked car on the road, it may exhibit hesitant or indecisive behavior. This occurs due to conflicting rules between different stages, which makes it hard to form a unified decision. If you adopt a large end-to-end model, you will notice that the car does not strictly adhere to rules set by humans. Instead, it mimics the behavior of human drivers based on the videos it has learned from, and the three networks undergo joint training.
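The contrast described here can be sketched roughly in code. This is an illustrative toy only, not Xpeng's actual stack: every function name and all of the stub logic below are hypothetical. A modular pipeline hands discrete, rule-filtered results from one stage to the next, while an end-to-end model maps raw input directly to a driving action with a single jointly trained network.

```python
# Hypothetical sketch: modular ADAS pipeline vs. end-to-end model.
# All names and logic are illustrative stand-ins, not real ADAS code.

def perceive(frame):
    # Perception stage: extract detected objects from the sensor frame.
    return {"cars": frame.get("cars", [])}

def localize(frame):
    # Positioning stage: estimate the vehicle's pose.
    return frame.get("pose", (0.0, 0.0))

def plan_route(objects, pose):
    # Planning stage: a hand-written rule decides the maneuver.
    return "brake" if objects["cars"] else "cruise"

def control(plan):
    # Control stage: execute the planned maneuver.
    return plan

def modular_pipeline(frame):
    # Each stage hands a discrete result to the next; conflicting
    # hand-coded rules between stages can cause hesitant behavior.
    objects = perceive(frame)
    pose = localize(frame)
    plan = plan_route(objects, pose)
    return control(plan)

def end_to_end(frame, model):
    # One jointly trained network maps raw input directly to an
    # action, mimicking human driving demonstrations.
    return model(frame)
```

The design difference is that in the modular version every hand-off is a point where hard-coded rules can conflict, whereas in the end-to-end version the intermediate representations are learned jointly from data.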

This is the only way to go beyond the 100,000 or 1,000,000 rules established by humans and capture countless different driving behaviors, thus enabling limitless expansion. Now, I have spent a long time explaining the end-to-end large model because I believe that assisted driving will transition to fully autonomous driving, and I see two stages representing the future AI car transformation: the first is fully autonomous driving, and the second is fully unmanned driving. I think both stages will be realized in the new decade. For Xpeng, today we have taken the first step and successfully applied the first large model to mass-produced cars. We are very proud to have taken the first step in developing an end-to-end model and to have achieved a twofold improvement in capabilities.

It's important to note that this isn't a small improvement of 10% or 20%. Whether it's the visual module, the XBrain, or the XPlanner, their cognitive capabilities are continuously being enhanced. For example, XBrain can identify a waiting area and infer the intentions of traffic participants. We're developing the capability to read text in multiple languages, including Chinese, English, German, and French; XBrain can comprehend the words on the road and the intentions behind them. With the continuous increase in training data, its intelligence level is steadily improving, and it may even surpass regular drivers in the near future. Our goal is to increase Xpeng's autonomous driving capabilities by 30 times within 18 months after the launch of the large model. We aim to iterate internally every 2 days, continuously conducting visual training and tests in real cars while also performing more tests in simulation.

I'm confident that the era of assisted driving, which everyone is accustomed to, will soon be history. The next era will be the era of end-to-end autonomous driving, and in the near future we'll achieve fully autonomous driving. In January this year, we announced that we had expanded to most cities in China, covering up to 243 cities. Today, May 20th, 2024, I can share with you that based on the end-to-end large model, Xpeng will be able to cover every road. What does this mean? Our aim is to cover every small road in every town and every village, and in urban areas our intelligent driving, independent of HD maps, will be consistent with the autonomous driving experience on a highway, or even surpass it.

This is a significant point that we're very focused on. If we reach our goal, what will we do in 2025? I have set a small goal for our team: to replicate the Waymo experience in San Francisco by 2025, achieving the same or at least a very similar level. We expect to achieve easy and comfortable, fully autonomous driving across the country, whether it's a community, a neighborhood, an ETC toll gate, or a parking space, though someone still needs to sit in the driver's seat and keep watch. If takeovers drop to once every 100, 200, or even 300 kilometers, it will be a breakthrough. Although it can be done, in reality no one has done it at such a large range or scale.

This is the goal that Xpeng wants to achieve in 2025 through end-to-end large model technology. We're now conducting relevant tests of the technology worldwide, and we hope to bring this function to the world after our XNGP achieves fully autonomous driving in China. We hope that in the future, no matter which country you travel to, you can proudly say that Xpeng also provides the best fully autonomous driving there. Our development of autonomous driving in urban areas has progressed through three stages. In the first stage, the main reliance was on high-definition maps. Today, a lot of our peers are merely following in our footsteps, relying on high-precision or high-definition maps. These maps are used in a few cities, or at most a dozen or so. The advantage of these maps lies in simplified development and high accuracy. However, there are disadvantages.

For example, out of China's 100 million kilometers of roads, high-definition maps may cover only a few million kilometers, so the roads they can cover are really limited. Relying solely on HD maps led to a very poor user experience: drivers had to take over manually in areas without HD maps, on small roads, in communities and neighborhoods, and at ETC toll stations where HD maps are not allowed. Additionally, maintaining HD maps is expensive due to China's rapid highway construction; if a highway changes, the HD map needs to be rebuilt. So while HD maps are easy and quick to make, the user experience is incomplete. In the second stage, most high-precision map data was eliminated, and only a small amount was kept to assist in particularly challenging areas, such as certain non-standard six-way or seven-way intersections and other complex scenarios.

Previously, we and a few peer companies achieved partial map-free driving capabilities and could navigate in most cities across the country. From now on, Xpeng's ADAS will be equipped with two capabilities: end-to-end smart driving and mapless smart driving, both at a mass-production level, and this leading advantage will continue to expand. In automated data training systems, we've noticed that the differences between companies are not just a few percentage points, but several times or even dozens of times. Now, I want to share our investment in intelligent AI. In 2024, our R&D investment will be CNY 3.5 billion, and this year's training expenses will exceed CNY 700 million. We have the largest computing power among automakers in China. This OTA builds on groundwork Xpeng Motors has been laying over the past year.

First, we have been working on removing the HD map from intelligent driving and achieving end-to-end large models for autonomous driving. Additionally, we're building a more intelligent cockpit system, aiming to promote it in multiple language versions in China and around the world, and enhancing the large-model capabilities of Xiao P inside the cockpit. We've reworked the entire architecture over the past year; today, our smart driving and smart cockpit have been completely rebuilt. Starting this month, Xpeng Motors OTA updates will be conducted monthly, and we aim to surprise you every month with significant improvements to your car's capabilities. Now, please join me in welcoming Yu Tong to the stage. He is our intelligent experience director, in charge of user experience, and will share the new OTA experience with you. Thank you.

Hi, everyone. My name is Yu Tong.

I'm responsible for the entire OTA experience at Xpeng. Today, I'll share with you how we've integrated all the capabilities Mr. He Xiaopeng just mentioned into today's first version. Starting this year, we have been organizing a Duty Week in the community during the last week of every month, hosted by our product managers. As of today, we have received over 10,000 deduplicated suggestions, and some users feel bad for us. They say, "Let's not make any more suggestions. This is just a Duty Week, not a wishing pool." But we have reviewed every suggestion, and I want to say that this is both a Duty Week and a wishing pool. I hope you can see that Xpeng will continue to listen to every user and incorporate your suggestions into our releases.

The AI Tianji system is the first OS to apply AI large models to the cockpit and intelligent driving, and the first operating system to deduplicate tens of thousands of requirements and integrate them into the system. I believe that from today on, everyone will see our ultra-fast iteration and ultra-fast co-creation with more users. If you were to ask users what feature of the system concerns them most, I believe most people would ask: does the system lag, and how many years can I use it without experiencing any lag? We collected some real-life data. The left side displays the animation before the upgrade to AI Tianji OS, and the right side displays it after. Here are the two most commonly used scenarios. The first one is car control.

Many enthusiasts enjoy experimenting with car control to see how fast it can slide. The second scenario involves map usage. We slowed down the playback of the relevant footage to clearly demonstrate the improvement. Per our testing, average operational response speed has improved by 30%. Additionally, during internal testing, we conducted several user interviews, and the results are astonishing: 97% of public beta users expressed satisfaction with the smoothness of this version, which equates to almost everyone believing that our system is very smooth. We tested it against all the 8155 platform-based systems on the market, fully squeezing out the platform's performance, and we can confidently say this is the smoothest system on any 8155 platform.

And of course, it's also the smoothest on the 8295 platform. To answer the earlier question: I believe this system can stay smooth for more than 5 years. When discussing Xpeng, in addition to the car's smooth performance, everyone also thinks about user-friendliness, especially as represented by our assistant, Xiao P. We introduced voice dialogue in the P7, and then the G9's Full Scenario Voice Assistant with multi-zone dialogue advanced Xiao P further. Today, I want to say that the only voice assistant that can surpass Xiao P is the AI version of Xiao P. But don't just take my word for it; here are some quotes from our users in the public beta testing.

A user from Electric Community stated, "Xiao P has transformed from being just a tool to becoming a friend." Another said, "Xiao P is fast, accurate, and relentless, and continues to stay ahead." Now, instead of comparing parameters for superiority, we want to share the core intention of our design. In addition to the end-to-end large model, general large language models are used to help Xiao P evolve: our self-developed XGPT system, Alibaba's Tongyi large model, and the Zhipu AI large model are all integrated into Xiao P. The first question I want to ask you is: do you recognize these buttons on the screen?

I once talked to an old friend of mine, and he found that after buying a car, he actually did not recognize what many of the physical buttons on traditional cars meant. For example, the third, fourth, and fifth buttons are very confusing. The first one is Internal Circulation, the second is AC, the third is zone-based AC partitioning, the fourth is Resume Cruise Control, and the fifth is AC Temperature Synchronization. Over the past decades of car use, we have become accustomed to traditional button-based interactions, and everyone thinks about which button to press to execute a command. These represent the most traditional form of interaction, and even today, whether it's a physical button, a screen button, or a voice assistant, we still follow the same mindset: when I have a need, I press a button.

I think it's time to rethink what interaction approach we should adopt. Therefore, we built a large-model AI Xiao P to revolutionize traditional command-based interaction. How do we do that? Let's ask Xiao P. "Xiao P, it smells weird in the car." "Got it. Enabling smart deodorizing mode." "Xiao P, I have very serious low back pain." "Got it. Enabling Massage Mode." "Let's go to Zhongguancun, then the airport, then the west train station, and then Tiananmen Square." "Got it. Planning the route for you." So Xiao P can fully understand your needs. A large model can help us draw pictures and provide encyclopedic information, but these are just basic functions and should not be the entirety of a smart car.

The large model truly understanding your needs should be the main focus. What does that mean? The first step is to remove the wake-up word. We used to say, "Hello Xiao P, I feel warm." Today, we can remove that. Suppose you have a friend sitting in the passenger seat and you ask him to open the window. You would never say, "Hello, Lao Wang, help me open the window," because it's so awkward. You'd just call him by name: "Lao Wang, help me do this and that." So starting from this version, we've removed the wake-up word "Hello." You can talk to Xiao P as if it were your friend, and it gives you the most humanized interaction. This is the first step.

The second step: you can say, "Xiao P, turn on the seat massage," but is that enough? How did Steve Jobs teach us to make products? He let a baby touch the iPad to find the most natural interaction. It's the same today. When your daughter is cold, she's not going to say, "Dad, help me turn on the AC or put some clothes on me." She will only say, "Dad, I feel cold." She expresses her needs, not instructions. Similarly, today you can just say, "Xiao P, my lower back hurts," and it will know it's time to turn on the massage. You only need to say, "Xiao P, this song is not great," and Xiao P will know it's time to change the song.

This is how Xiao P can truly function as your friend, ready to assist you at any time. For example, you can ask Xiao P, "Remind me not to forget my phone when I get home," and Xiao P will give you a friendly reminder when you arrive. If you need to wake up after a short nap, you can say, "Wake me up in 20 minutes," and it will set an alarm for that time. Or if you tend to forget important dates, such as May 20th, you can ask Xiao P to remind you to buy a gift for your wife or make a love confession, and Xiao P will prompt you at the designated time.

We understand that drawing pictures and encyclopedic knowledge are not the final stage of large models, so we have taken the next step in exploration. With it, Xiao P can comprehend the world around it: it can identify a black car of a certain brand in front, a tree on the left, a charging station behind. Today, we have taken the first step in establishing a connection between Xiao P and the world. In upcoming OTAs, we will continue enhancing the input of world knowledge into Xiao P, enabling it to understand both you and the world better. This is the new Xiao P we've introduced in this version. It features a fresh design, a new voice, and an upgraded brain. We're confident Xiao P will surpass all previous versions and become the ultimate AI voice assistant.

So that's Xiao P, but what truly represents Xpeng is intelligent driving. Today, the long-awaited AI Valet Driver is finally here, and everyone will be able to use it soon. Why did we want to develop an AI Valet Driver in the first place? There are some deeper considerations. We've observed that the most common daily scenarios for many drivers involve routine activities, such as commuting between work and home, picking up kids, and running errands. We envision that the AI Valet Driver can enhance these everyday routes by providing a smooth, seamless, and smart driving experience. Intelligent driving used to provide a standardized driving experience for everyone. What does that mean? When any user used Xpeng's valet driving, the system would uniformly change lanes at the same specific position a few kilometers ahead of the target when passing an intersection. However, different users have different driving habits.

Some prefer to play it safe and want the system to change lanes earlier; others prefer to transition more quickly. To address this diversity in demand, the AI Valet Driver function can learn and gradually adapt to each driver's habits. This allows for a driving experience tailored to the individual's needs, and we believe this personalized experience is what AI intelligent driving truly means. Now, let's discuss the capabilities of the AI Valet Driver technology. This version introduces some important updates. First, the AI Valet Driver needs only one manual drive to learn a route. Second, this version allows 10 memorized routes, covering the most common scenarios. Third, every single route can cover a distance of up to 100 kilometers. What does that mean?

For example, I live in Shenzhen, but I work at Xpeng's headquarters in Guangzhou. It's a daily 96 km drive for me. I used to have to drive manually in the city for a while and then use the super easy-to-use Highway NGP on the highway. But today, the AI Valet Driver can take me door to door, from home to the office. Thanks to the AI driver, I can enjoy the beautiful sea view along the Guangzhou-Shenzhen Riverside Expressway and have the chance to relax. This has transformed my daily commute into a very pleasant journey, as if I were traveling. Our goal is to provide everyone with this intelligent driving experience, which reduces energy consumption and makes life more comfortable. Now, let's discuss its capabilities.

I would like to thank one of our users, Mario Zong, for capturing videos during the public beta phase that showcase the remarkable abilities of our AI Valet Driver. Our car effortlessly navigates narrow gravel roads that conventional cars struggle to traverse. While we do not encourage attempting this, it exemplifies the superpower of our AI Valet Driver. In the past, there were instances where the route shifted from a main road to a side road or a ramp, causing automatic driving systems to make incorrect choices; with the AI Valet Driver, such errors are avoided. Whether it's a U-turn or a roundabout, the AI Valet Driver is up to the task. That is what AI valet driving is about: the more users use it, the smoother and more sophisticated the driving experience of our AI Valet Driver will become.

Now, let's shift our focus from driving to parking. Lately, there's been a lot of buzz about how tight parking spaces have become and the challenging conditions. From a product standpoint, what are the actual challenges people encounter? I believe many have experienced the frustration of struggling to park in an extremely narrow space, only to realize that they can't open the car door once parked. This can be quite embarrassing. In our latest version, we are proud to introduce the world's first mass-produced AI parking feature or valet parking. When the vehicle approaches a parking space, if it anticipates that the space is too narrow, it will actively alert you and suggest finding an alternative spot. With a simple tap on the screen, the car will automatically maneuver into the parking space, allowing the driver to gracefully exit and close the door.

Additionally, we've tested and confirmed that our system supports parking with a 30-centimeter gap between cars. The first time I used this function, I noticed several thoughtful touches as well, for example, the rearview mirrors automatically folding in a narrow space. After watching these videos, you might wonder whether using this function when a car is parked very close will prevent the other driver from getting in and out. Just a friendly reminder: if the vehicles on either side of your car do not have self-parking capabilities, try to leave space for others. But when the parking space is wide enough, I encourage all of you to give it a try. We have also made significant improvements to our parking capabilities in this version.

The updated system can now identify 3 times as many parking spaces while driving compared to the previous version, and parking speed has increased by 50%. It is capable of parking in side spaces as well as dead-end spaces. We believe this is currently the most advanced AI parking capability in the world. After today's press conference, we invite everyone to try this function in person at the experience area backstage. That's an update on the improvements we've made to our intelligent driving capabilities. In addition to the twofold enhancement in smart driving capabilities mentioned by Mr. He Xiaopeng, we've also introduced an AI Valet Driver service that is highly intelligent and user-friendly, as well as an AI parking capability that is courteous and efficient.

We are focused on AI smart driving and aim to ensure 100% safety for users in human driving scenarios as well. Currently, Xpeng Motors' intelligent driving mileage penetration rate leads the industry. However, 40% of our users still enjoy manual driving, so how can we bring the capabilities of our super AI driving system to manual driving? Today, we introduce a groundbreaking feature for the first time: active safety reminders covering six core scenarios. As we all know, AI smart driving has powerful sensing and recognition capabilities; it can identify a wide range of scenarios and events and warn of potential risks. We have identified the six most commonly encountered driving scenarios and provided advance warnings for each. For example, in cities like Shenzhen, drivers often experience long waits at traffic lights.

During these waits, the driver may become distracted while talking with a passenger, leading to pressure from the vehicle behind. With this new feature, when the system recognizes that the car in front has started moving, it will gently remind the driver to start in time, avoiding potential embarrassment and traffic pressure. I also hate the following scenario: many drivers may not be used to using the turn signal when changing lanes, making it very difficult to judge whether a driver wants to change lanes, squeeze in, or go straight. With the help of our intelligent driving, we can accurately identify other vehicles' driving intentions and promptly remind you of possible lane changes, thereby ensuring safety. We've also optimized blind spot detection and AEB scenarios. The safety experience we bring is not just about braking at a certain speed.

We aim to provide safety and a sense of security at the same time. First, when the car is about to brake, we'll provide a clear reminder on the screen and sound a warning, so you're not taken by surprise. Second, we've made significant improvements to the AEB capability. We have added the ability to customize its sensitivity: drivers who prefer to follow closely can set the sensitivity to a lower level, while novice drivers can select a higher sensitivity to trigger braking at a greater distance. Also, with enhanced perception, we can anticipate potential risks before you are even aware of them, and we will provide more proactive and timely reminders.

Finally, in this version we've also increased the AEB braking capability to 80 kilometers per hour, and in upcoming versions we'll rapidly improve it further. I'll also give you a little teaser: our chairman, Mr. He, will talk about our core long-term thinking on AEB at a future press conference. Now, in addition to reminders, the long-awaited Steering Assist feature is finally here. Many users have wondered why we didn't make Steering Assist available to everyone as soon as it launched. We initially thought there might be a conflict between the function and the map: when the vehicle reaches an intersection and needs to turn, the user typically needs to check the map to determine the driving route, and the Steering Assist display might block the map.

Now, with split screen, users can adjust the interface to their preference. It can be placed on the left two-thirds of the screen for a wider view, or minimized and positioned on any side of the screen to reduce obstruction. For example, as a conservative driver, I used to experience frequent lane cuts by other vehicles. Since using Steering Assist, I've become more aware of my surroundings and have improved my driving. Positive feedback from our friends and users during the public beta has been very encouraging. For example, I really like this one: Little Danny mentioned that after several adjustments, Steering Assist has finally met everyone's expectations and deserves a five-star rating.

One of our older friends, Yin Shen, also mentioned that Steering Assist now comes with HDR, which is great for scenes with large light ratios. If you're familiar with digital photography terminology, you understand what he means. Our interface is large and super clear, letting you see the vehicles on the left and right distinctly. In addition to Steering Assist, we have upgraded the 360 panoramic image, which can now be shown in a large or small window without blocking the map. Many users have requested to see the 4-wheel perspective when reversing, while others only need to see 2 wheels; these options can now be switched at will. We've named our imaging system the Full Scene Imaging Killer, a term commonly used in the 3C industry that has now made its way into automaking.

Also, during the Duty Week, we noticed that the most mentioned issue in the feedback section was the Sentry Mode. The previous Sentry Mode had two problems: its accuracy needed improvement, and also it consumed a lot of energy. However, now we have a brand-new Sentry Mode. It's fast, accurate, and efficient. It responds quickly and can record videos for over 30 seconds before an accident. It won't miss any videos if there is a risk. And in this version, we've increased Sentry Mode's accuracy to 90%, making it no longer useless, and also it's efficient and powerful. This is basically our unique feature. Many users want it to always be on and smart. So today, alongside our smart scene function, we are introducing the Smart Sentry Mode.

I would like to express our gratitude to one of our users, Yan Shu EV, for sharing a week of Sentry Mode experiences on Xiaohongshu. Here are a few examples of how to use it smartly: you can configure the vehicle to automatically activate Sentry Mode when parking near your home, have it prompt you every time you park, or keep it active at all times to ensure the vehicle's safety. This is the safety experience we aim to deliver for human driving. Whether it's intelligent driving or human driving, our goal is to give you a 100% safety experience and a 100% sense of security. Next, we bring you our split screen capabilities, a customization experience we call "a thousand faces for a thousand people."

Now, during our interviews, many users said that they are young and like to personalize their cars' exteriors, and asked whether they could personalize the cockpit as well. Let me give you an example. When driving the X9, you can focus on the road while your wife watches videos in the front passenger seat, no problem. From my personal experience: during the May holiday, we hit very long highway congestion, and my family wanted to sing karaoke on the screen, but I was worried it might block the navigation map. So I used the split screen function to display the map on one side and karaoke on the other. This way, we could enjoy family time while ensuring driving safety. And lastly, our XDock, which may be the industry's most customizable dock.

Now, the most commonly used functions can be placed on the dock, greatly improving convenience. For example, when a family travels and needs to control the opening and closing of the back door, the dock can be flexibly adjusted to personal needs. Take me as an example: I drive from Shenzhen to Guangzhou every day, and the vehicle needs to be charged every 2-3 days, so I put the charging port function on the dock, where it can easily be opened. When I'm driving alone, I always turn up the volume because I love music, but when my family, especially my children, get in the car, I have to turn it down to avoid discomfort. So we introduced a global volume control function, which can be placed on the dock to adjust navigation and voice volume at will.

This feature is part of the AI Tianji OS that we're introducing today. It is the first operating system in the industry to apply AI large model capabilities to both the cockpit and intelligent driving. It actively learns, understands you, grows quickly, and offers customized functions. Finally, as the person in charge of OTA, I would like to express my sincere gratitude to everyone. We have dedicated a significant amount of time to developing the system, and throughout the process we witnessed everyone's enthusiasm and support. The system boasts over 1,000 new features and more than 5,000 experience optimizations. However, we faced challenges in writing the upgrade instructions due to the multitude of functions; the instructions totaled 13,000 words. The last time I had to write so much was when I was in school.

Xpeng is often lauded as a company that acts first and speaks later, one that doesn't just squeeze the toothpaste but squeezes it until it bursts, meaning that we always over-deliver. I've noticed forum users likening the anticipation for an OTA to waiting for the launch of a game. For us, each OTA is like preparing a grand gift for all of you, and today's will be the most epic update in the entire automotive industry. Next, a brief video will demonstrate the full capabilities of the XNGP system, and then Mr. He will be invited back to the stage. Now, after completing this upgrade, do you feel like you've just bought a new car? It's a completely new vehicle. First of all, we apologize for the delay caused by rewriting the architecture over the past year.

However, future OTAs will be faster and more efficient. As mentioned, over the past ten years, Xpeng has also been continuously building new businesses. We would like to thank all of you for your support and trust over the past ten years; without it, we would not be here today. Our colleagues and friends within the company all know that since September 2022, after the G9, it has been very clear that Xpeng would make disruptive changes. I believe that soon, all of our new capabilities will improve and operate more efficiently. This year, I'll also lead everyone at Xpeng to do a better job with OTAs and provide better services. Thank you so very much to all of you, both on-site and online. Your continued support and attention have been invaluable in helping us move forward.

In this OTA public beta, over 1,000 users actively participated, and many friends in our forum and app also provided valuable suggestions. As a result, I've emphasized to our colleagues that all research and development must consider customer feedback, and all subsequent updates must be expedited. Once again, thank you so much for your unwavering support. Today's OTA marks a new chapter. I'm confident that AI will become a defining feature for Xpeng, leading China to the world, and that Xpeng will become synonymous with AI driving. Our X9, G9, P7i, and G6 models will begin full-scale OTA immediately. Other customers should not be concerned: in the third quarter of this year, P7 owners will also commence public testing, and in the first quarter of next year, we'll introduce Tianji OS to more owners, including those of the P5, G3i, and G3, for public testing.

I'm confident that all car owners will experience the different capabilities of Xpeng vehicles. We are now approaching the end of today's conference, and I'm still very, very excited. I believe that this is not just an ordinary OTA, but a new step for Xpeng towards AI-powered intelligent driving cars. Over the past decade, Xpeng has always believed in the transformative power of smart driving and new energy, and we have carved out an unusual path in the Chinese market with excellent technology and products. In the near future, we look forward to rapidly popularizing and advancing AI. It's worth noting that the first model of our MONA series will officially be introduced next month, in June. So stay tuned!
