Welcome to XPENG Livestream. The weather today is very good, and we have a very important event at XPENG: XPENG AI Day, the seventh AI Day of our company. Every year, XPENG announces very big news regarding technologies. Later, at 3:00 P.M., Xiaopeng himself will announce the latest technologies and surprises from our company. We are enjoying great weather at a very familiar venue, our new XPENG Guangzhou headquarters. Behind us is Tower 3 of the headquarters, which is also today's conference hall for the event. Later, we are going to take you on a tour and give you some teasers for this afternoon's launch event. Now, let's get ready. For us, this is very familiar, but for our friends online, I think you already have some questions.
On this occasion, let's take you on a tour around our new XPENG headquarters. Today, we are going to launch four very important products, so as we walk through the auto display area, I can also give you some teasers. First, let's start from our Canghai platform. Please say hi to our online audience.
Hello. On the display board, we can see the latest EEA architecture of XPENG. I see a lot of integrated systems here. Can you give me a brief introduction? The Canghai platform is the latest EEA architecture of our company. You can see some innovations in this architecture, like the XCCP. Let's have a close-up on the display. Here we can see the latest AI chip and also the latest AI computing cluster. Here we have central processing and classification. What's more, we can also see some peripheral features.
Can you give us a simple introduction to how we apply all of these features in our car?
I can give you one example. Previously, a single-point feature like door opening was only one single feature. But now, about a thousand applications can share this mechanism. This is very special because we can define features based on scenarios, like our Sentry Mode, our Sleep Mode, or other kinds of modes, and they can all be integrated together. For example, our Welcome Mode is connected with the welcome light and also the seats. So one trigger activates not just a single point but several features together. If we talk about the specific features, can I understand it this way: it's more customized for our customers, right? Yes. When our customers open the door, they can enjoy the features they have already set for themselves.
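To make the idea concrete, here is a minimal sketch, in Python, of how one trigger can fan out to a whole scenario such as Welcome Mode. All mode names, signal names, and actuator calls below are hypothetical illustrations, not XPENG's actual software.

```python
# Hypothetical sketch of scenario-based feature orchestration on a central
# platform: one trigger (door open) activates a whole scene rather than a
# single feature. Names and signals are illustrative only.

SCENES = {
    "welcome": ["welcome_light", "seat_adjust", "cabin_preheat"],
    "sentry":  ["exterior_cameras", "motion_alert"],
    "sleep":   ["recline_seats", "dim_lights", "quiet_hvac"],
}

def on_signal(signal: str, active_scene: str, actuators: dict) -> None:
    """Fan one vehicle signal out to every feature bound to the active scene."""
    if signal == "door_open":
        for feature in SCENES.get(active_scene, []):
            actuators[feature]()  # each actuator is a callable on the domain controller

# Example: opening the door while "welcome" is active fires light, seat and HVAC together.
on_signal("door_open", "welcome",
          {f: (lambda name=f: print(f"activate {name}"))
           for scene in SCENES.values() for f in scene})
```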
For instance, when they sit inside, they can fine-tune the temperature, adjust the seat, and through the facial recognition mechanism, we can also provide a more comfortable driving and riding experience. This is a very powerful integration capability. I can also see on the display here that we have A and B, two different categories.
Are they related to our safety?
Actually, this is our safety redundancy strategy, because with the latest architecture we want to reach the top functional safety standard. On this board, you can see it very clearly: here, we have the power supply systems with redundancy. If a single power system fails, our redundant system immediately takes over, so the safety of driving and also the experience will not be affected. Very impressive.
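As a rough illustration of that plan-B idea (not XPENG's implementation), a redundant supply can be promoted the moment the primary one is detected as failed:

```python
# Illustrative sketch only: a redundant power rail takes over as soon as the
# primary rail is detected as failed, so the functions it feeds are never interrupted.

class RedundantSupply:
    def __init__(self, primary_ok, backup_ok):
        self._primary_ok = primary_ok   # callables returning True while the rail is healthy
        self._backup_ok = backup_ok
        self.active = "primary"

    def poll(self) -> str:
        """Called on a fast cycle; fail over immediately if the active rail drops."""
        if self.active == "primary" and not self._primary_ok():
            self.active = "backup" if self._backup_ok() else "fault"
        return self.active

supply = RedundantSupply(primary_ok=lambda: False, backup_ok=lambda: True)
print(supply.poll())  # -> "backup": the redundant rail becomes effective immediately
```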
For all the features, we provide a plan B so that we can have a smoother and more comfortable experience driving the XPENG car. This is thanks to the powerful EEA architecture of our company. So, is there any more information you want to share, Na Yu?
Two more things I want to emphasize. The XCCP is the central computing platform of our company. We can already achieve 2,250 TOPS of computing power. In autonomous driving mode, you can feel that the car is smarter and also more seamless.
Thank you, Na Yu, for your introduction. Now let's move to the next stop. See you later. I think you are also looking forward to the next stop, which is the brand-new X9 REEV. You can also get a feel for this car; today we are already displaying the new X9.
We know family users pay much more attention to comfort and safety. Today, we have invited our safety expert, Mr. Zhang, to give us an introduction. Regarding the safety of the X9, I know you have already done a lot of work during the project development process. Can you share some information with us?
Yes. With the X9 REEV, we have already achieved C-NCAP five-star safety. We are also going to enter the European market very soon, as well as other overseas markets, and we have prepared active and passive safety features. I have heard that the X9 has already gone through calibration and testing for different countries. Safety is now at the next level. We consider compliance and homologation in different countries. Actually, spanning from daily scenarios to the future features one may expect, we are ready for all of it.
[Foreign language] Can you give us more examples?
[Foreign language] For instance, we have some advanced features on X9. In China, Zero Gravity Seats are already very common. [Foreign language]
So can we have a close-up of our Zero-Gravity Seats?
For the Zero-Gravity Seats, before a collision, we can integrate with the ADAS signal. When a collision happens, if the seat is still in zero-gravity mode, we have the seat airbag and also its own recliner system, so the collision will not affect the safety of our passengers. Yes. Some people may be concerned about the safety of Zero-Gravity Seats. Here, we give you the answer. Because the X9 is a global model, more and more users are enjoying this model. So do we also have some special features for people of different shapes?
Actually, for our overseas customers, based on the height differences compared with Asian customers, we have also adjusted and optimized our current safety mechanism. For both taller and shorter people, we have targeted safety features. Yes. For different target groups, we have already developed corresponding safety features. Just now, we mentioned some active and passive features. [Foreign language] For active safety features, we have already seen some examples: when the weather is not so good, the speed is controlled to enhance safety, and the AEB system has also been upgraded. Here, we will not go into the details, but I still want to say our safety features are very advanced and can also increase the comfort level for our colleagues and our customers. [Foreign language]
Thank you, Mr. Zhang. Just now, we have already checked our X9 REEV.
I don't know if you have noticed some changes compared with the previous EV version. [Foreign language] Now we are at the next stand. Let's talk about energy consumption. Welcome, our energy consumption expert. What kind of optimizations have we made for energy consumption?
Hi everyone. On behalf of the energy consumption team, I'd like to share with you some of the new technologies we have accumulated. You can see the lightweight technology displayed here. Can we have a close-up? This is a subframe. Previously, it was made of steel, but now, using the latest technology and materials, the weight can be reduced a lot. This is actually the first time for us to use aluminum and magnesium alloy on our car. And this is the forged wheel of the X9.
Compared with the traditional, conventional process, the weight can be lowered by 15% without affecting rigidity. We have also further improved the performance. So for the new X9, we didn't compromise on safety, but we made a lot of optimizations regarding the weight. You can feel it's very lightweight. Step by step, we are applying our lightweight technologies. For hardware, yes, we are reducing the weight, but in addition to that, we also have some software optimizations, which can be updated over the air. So what are the software optimizations? Our team, focusing on low drag and high power-efficiency performance, has spared every effort to optimize the performance of the car. [Foreign language] We have already pushed the limit, for example with active thermal management. And our company was actually born with an AI gene.
Currently, with our energy management system, without changing the hardware, we can increase the range by 15 kilometers, because the thermal management system has been upgraded across the cloud, the vehicle, and the road; we combine these three together to support this kind of optimization. So let's check out the display here. We polish every detail and predict the working conditions of different roads. For example, three kilometers before our driver enters a tunnel, we already have the prediction; for instance, we lower the fan speed so our customers feel very comfortable when they are entering the tunnel. And when going uphill and downhill, we also control the energy regeneration system to support better efficiency. We also optimize the nearby route and have more efficient interaction with traffic lights. Thank you.
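A hedged sketch of this kind of look-ahead energy management, with a made-up route format and thresholds, might look like this:

```python
# Sketch of predictive energy management: look ahead along the planned route
# and pre-condition the cabin and regeneration strategy before a tunnel or a
# downhill section. Route format and thresholds are invented for illustration.

def upcoming_features(route, position_km, horizon_km=3.0):
    """Return route features (e.g. tunnels, grades) within the look-ahead horizon."""
    return [f for f in route
            if 0.0 <= f["start_km"] - position_km <= horizon_km]

def plan_actions(route, position_km):
    actions = []
    for f in upcoming_features(route, position_km):
        if f["kind"] == "tunnel":
            actions.append("lower_fan_speed")    # avoid drawing in tunnel air and noise
        elif f["kind"] == "downhill":
            actions.append("raise_regen_level")  # recover more energy on the descent
    return actions

route = [{"kind": "tunnel", "start_km": 12.0}, {"kind": "downhill", "start_km": 14.5}]
print(plan_actions(route, position_km=9.5))  # -> ['lower_fan_speed'], 2.5 km before the tunnel
```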
So just now, we heard a lot of examples and different scenarios, as well as the optimizations that we have. With the empowerment of AI, we can see the range and energy consumption have improved a lot. Thank you. See you later. Just now, we introduced energy consumption and AI. Now let's move to the latest technology stand, which is also very important for our AI-defined cars and other products. So let's check out the AI computing cluster.
[Foreign Language]. XPENG Motors actually built on Alibaba's Qwen to develop our Fuyao Autonomous Driving Foundation model. You can see that we already have a 30,000-card cluster, and the AI computing power reaches around 1.2 billion TOPS. This computing power ensures a better data training speed.
Can you disclose some teasers about what we're going to unveil later? This very powerful computing cluster, after so many iterations and optimizations, has now developed into our VLA 2.0, as well as our mass-produced physical AI world model. Can you disclose some features or performance it can deliver, so it can better cater to the customers? Okay. Thanks to this rapid iteration, we integrated our VLA and the physical AI world model and developed VLA 2.0, which will help our autonomous driving make better decisions, for example with autonomous emergency braking (AEB), and deliver a better driver experience. Actually, this is not a simple iteration; it will be big news disclosed at our launch event shortly. We will leave it to Mr. He Xiaopeng to introduce later. Okay.
So we have touched upon many things, such as our X9 and our AI computing power. Next, let's move on to the next stand. You can see that this one doesn't have anything displayed, but here you are going to see a poster about our next-gen IRON, which will serve in more settings, for example the household and mobility, with more convenient interaction. So stay tuned. After two teasers, let's move on to the next stand, something that you might be interested in and want to know more about. I will hand it over to my colleague. Hello, [Foreign Language]. You can see we are already back at the stand of the much-anticipated flying cars. We have actually invited a foreign KOL, as well as our flying car expert, to our channel. Say hello to the camera.
We're going to introduce the flying cars through our product manager. Camera, let's go. Let's go. Yeah.
Hello, my friend. I am the specialist for this modular flying car. Oh, be careful. This flying car is also called the Land Aircraft Carrier, which combines an electric vertical takeoff and landing aircraft with a multipurpose vehicle. You can come and have a look inside. Right here, we created a one-lever control system, which makes it quite easy for a learner to fly this aircraft. With your right hand, you can press the lever up to take off and press it down to land the aircraft. You can push it forward to go forward and pull it back to go backward, turn left and turn right, and twist the lever to rotate the aircraft.
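For readers following along, here is an illustrative mapping of the one-lever controls described above to hypothetical aircraft commands; it is not the actual flight software.

```python
# Illustrative mapping of the single-lever control scheme to aircraft commands
# (command names are hypothetical, not the real flight stack).

LEVER_MAP = {
    "press_up":     "take_off",
    "press_down":   "land",
    "push_forward": "fly_forward",
    "pull_back":    "fly_backward",
    "tilt_left":    "fly_left",
    "tilt_right":   "fly_right",
    "twist":        "yaw_rotate",
}

def handle_lever(gesture: str) -> str:
    return LEVER_MAP.get(gesture, "hold_position")  # unknown input -> hover in place

print(handle_lever("press_up"))  # -> take_off
```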
[crosstalk] You can have a try with your right hand. Yeah. It's pretty cool. It feels quite intuitive. So is it quite easy to learn? Yeah. It's almost like playing a video game. [crosstalk] Yeah, yeah. It's much easier than a 3D video game. I feel like I would quite enjoy flying this, actually. Yeah. That's pretty cool. You've got a second system, right? Yeah. We also have a backup control system right here. If this one doesn't work, we can use this to take over control. Nice. Okay. Yeah. Next, we can have a look at the multipurpose vehicle. Yeah. I noticed that we have air conditioning, right? Oh, yes. The air conditioning is right back here. And it's different from a helicopter, because many helicopters don't have air conditioning inside, so it's quite hot if you fly them in summer. Nice.
[crosstalk] I didn't know that, actually. I've never been in a helicopter, though. Okay. Okay. Let's go to the MPV. Yeah? Yeah. Let's move to the MPV stage. We can have a look at this multipurpose vehicle, which is a six-wheel vehicle. Its powertrain is a range-extender EV system, so its driving range is about 1,000 km. We can also come close and have a look at this streaming rearview mirror. Even on a rainy day, you get a very clear view on the screen. Nice. Okay. Cool. The interior has four seats. And the biggest difference is the trunk: we can store the aircraft inside the trunk, so it's like a container for this aircraft. So all of this fits in there? Yeah. All of this fits in the trunk.
[crosstalk] We can put it into the trunk. So we call it the modular flying car, and this is the first one in the world. So I hope you like it. That's awesome. I think it looks great. Yeah. I think it looks amazing. It's a very futuristic vehicle. Yeah. In fact, you get this as well, which just makes it. Yeah. You have a vehicle and also the aircraft together. I can't believe you can fit this in there. Yeah. Actually, we're just at the charger. So you can charge the flying module, the air module, inside this land module, right? Yes, it's electric. You can recharge the aircraft inside the trunk, and it only takes half an hour to charge the aircraft to 80%. Wow. Half an hour? Wow. That's pretty fast. Yeah. It's quite quick to recharge. Very impressive. Yes. Thank you. Thank you so much.
[crosstalk] You're very welcome. Hope you like it. I hope we get a chance to drive this flying car in the future. Yeah. I'm looking forward to it. Yeah. Okay. Thank you. Thank you so much. Thank you. Thank you. Okay.
Okay. So at today's AI Day, we're also going to show our comprehensive lineup, not only the flying cars but also all our vehicles, as well as our brand-new X9. [crosstalk] Oh, who are you? Do you mean me? Yes, you. Hello, I'm K. So please say hi to the camera. Hi. Hi, friends online. Well, we're actually quite familiar with each other, right? Yeah, I'd consider so. It's been so many years. That's fine, as long as you're a big fan of XPENG. So which is your favorite one among all these products? I think it's the P7+.
Yeah, the P7+ is also a car beloved by all our owners. It's very cool. And I saw that it is one of the cars that offers the most comfortable experience, so when you drive it, you feel very comfortable. Smart, too. I've heard that XPENG is really focused on intelligence, so I think it's very futuristic. So what do you think about this X9?
Well, it's so wide, so giant. Just look at it. Look at the size. I think it's a perfect match for you.
Do you mean, well, it's the family or it's powerful for me?
Well, I think it's very cool, but also kind of cute. It also has a hint of power inside. So this is our Power X version, which is our REEV version.
Actually, one of my friends back in my hometown went to the capital there to purchase the X9, because we have an XPENG store there. He drove it for 30 days and just loved it so much. But it's not the REEV version. Yeah, because this one will be launched shortly afterward. So the Power X, yes, we're really looking forward to that, hoping it will be available in your country.
Oh my God. Then I think it would take about three months to drive the X9 back to my country. Of course, I want to have a try, but first I need to get my driver's license. Well, it will probably take me days to get my license. So, here at our XPENG AI Day, how do you feel?
Sci-fi. It's very sci-fi. I think it's very tech-savvy.
It's very cool, and I'm a big fan of the flying car. I think it will change the world, especially because there will be more vehicles driving on the road, many of them turning into electric ones, and flying cars, I think, are going to solve so many problems. Flying cars cannot fly long distances, like 400 km, but if you store one in the land module, you can just drive it anywhere, to outdoor scenarios, even suburban areas or mountainous areas, and then simply detach the air module and fly into the sky. Well, I hope that one day we will also have a chance to try it out, so really, thank you for that. We'll see you later. Bye-bye.
So you can feel the enthusiasm from all these foreign media outlets.
They're very impressed and very interested in it. We've also shown you around our flying cars, right? And next, we're going to a new stand. There will be two experts waiting for us. Hi. Say hi to the camera. So, right before us, can you introduce our X9?
Can I use Chinese? Yes. So hello, Chinese friends online. So this is the REEV version X9 that will be launched. So you can also introduce it in English. So behind us is the REEV X9. Okay. So let's introduce this electric vehicle, right? Yeah. Let's start. [crosstalk] Hi. Hi. Hi. How are you? Hot. Hot. Hot. Really hot. It's going to be really hot. Okay. Let's introduce this car through the front, right?
Okay. So it's an REEV, so it's the first time you can find an engine in an XPENG car.
So this engine is specially designed for the REEV system. It's very quiet, easy to maintain, efficient, and has low consumption. You can use regular fuel; there is no need for premium fuel. So it's very important. And also the safety: you can say we enhanced the front safety with a very strong beam, and also a double crash beam to enhance it. I read that it has the biggest giga casting in the world for this car, like the rear crash structure, and that makes it much safer. Yeah. It's much safer, and it also reduces the weight because you make it in one big piece, one big structure. [crosstalk] Yeah, yeah, yeah. Of course, we have die casting at the rear. Die casting. Yeah. Rear and front. It reduces a lot of parts.
[crosstalk] And the whole car is basically made of aluminum. Aluminum. Yeah. I tested all the body panels with my magnet; they're aluminum. Yeah, yeah, yeah. I checked. Aluminum. But before we go on, about the engine: with a lot of engines, you have to service them every year. This one, how often do you have to service it? Regular maintenance is once every two years. Two years. So less servicing required. Less servicing required. And I also hear that it's nearly as quiet as an EV, only a tiny bit more noise, but you cancel the noise out. Yes. We have a lot of solutions to control the noise, and the most advanced one is the active noise control in the cabin. So basically, you can't notice there is an engine. So you have ANC, which doesn't just cancel out the engine noise.
[crosstalk] Yeah. It also cancels out the road noise. Yes, it can cancel both the road noise and the engine noise. Well, I want to test this. You can test it later; we can have a test drive. Great. Great. Great. [Foreign language] So I can see you are all really looking forward to the X9 REEV. But the AI Day is about to start. Let's move to the stage, and Xiaopeng will later share the latest technologies with the audience. AI Day. Yeah.
Thank you. I wanted to say all the features, but he's shown me lots of features. And this car is the most advanced MPV you can buy on the market. Yes, of course. It is. It is. If you look at all the data, the charging speed and the range, 450km of range, there's nothing on the market that is this good in this segment. Yes, yes.
Of course. Very clearly. [Foreign language] Thank you. Thank you to our media friends from overseas. Now let's move to the stage to witness the power of emergence. Bye-bye.
Ladies and gentlemen, our event will start in one minute. Please get seated and turn your phone to silent mode. Thank you for your cooperation. 2025 [Foreign language] Ladies and gentlemen, 2025 XPENG AI Day is about to start. [Foreign language] Ladies and gentlemen, please welcome Mr. He Xiaopeng, Chairman and CEO of XPENG.
Hello everyone, I'm He Xiaopeng. Welcome to all the media friends, our customers, our colleagues, and the audience online, both domestically and overseas. Thank you all for attending our 2025 XPENG AI Day. Today actually marks the first time we have held this event at our new headquarters in Guangzhou.
Now, more than 10,000 colleagues skilled in AI, cars, robots, and other fields work together here to explore new technology innovations. We create together across different disciplines. Today, I'm very proud and also very excited to share with you the latest technologies of our company. Today is also our seventh AI Day. In my memory, this is the most important AI Day: we are going to launch four major AI-defined applications. For XPENG, this will be a giant leap, because we are going to elevate XPENG to new heights. Over the past 11 years, XPENG has gone through a lot, in China and across the whole world. We can see that hardware has been driving software. In the past 10 years, from the customer's point of view, software only accounted for about 10% of the total value of the car.
For me, actually, I started my career as a software engineer. That's why I always believed software should play an ever more important role in the automotive industry, and I also believe this can bring disruptive innovations. But what are the specific applications? Today, let's unveil them. First of all, let's recall the past six AI Days. In 2019, we had Highway XNGP, and in 2020 we made XNGP available nationwide. In 2021, we brought the concept of the flying car to society; at that time, people thought it was only a dream. In 2022, we said we were going to launch intelligent robots, because at that time we believed this could be executed. In 2023, we said we were going to bring AI-assisted driving to cars priced under $20,000.
Last year, we announced the launch of the next-generation Super Range Extender. As you can see, we need to have dreams and expectations, because a lot of tech imaginations and expectations are turning into reality. Today is our 2025 XPENG AI Day. This event will be very hardcore. Let me give you a teaser, because we have been thinking about how to turn hardcore technologies into things that can be easily understood by everyone. Today, I will try my best to explain this briefly and clearly. The theme of our 2025 AI Day is emergence. A lot of friends will ask, what is emergence? This word is mentioned more and more frequently. As I remember, the term emergence originally came from large models in the AI field.
When we have massive data, especially long-tail data, and we compress this data and sequence it in another way, we find that a lot of unexpected things happen. This is also similar to what we have experienced in autonomous driving. I believe this is not all we can achieve with emergence; we still have more to explore. During the past 11 years of our company's history, we did a lot of self-developed innovations. Some of them have already transformed from full-stack in-house development into cross-domain innovations. A lot of the technologies you are going to see today will appear in new scenarios with new capabilities. I also think this can be called emergence. At the beginning of the event, please allow me to use a few slides to introduce the current trends.
In China, there is a saying that for men, the most dangerous thing is to get into the wrong industry, and for women, the most dangerous thing is to marry the wrong person. With this example, I want to say that we cannot beat the trend; we need to follow the trend. For our company, we need correct and logical observations of the latest trends. On this slide, you can see the evolution of the digital world over the past 25 years. A lot of Chinese companies are in the platform ecosystem or in smart hardware, but there are two more important layers. One is the operating system. For example, Windows had already achieved very good results back in 2000, because the operating system is very crucial.
After 2010, we can see that the mobile internet, whether in China or in overseas markets, witnessed a dramatic change from Windows to Android and to iOS from Apple, and so on. A lot of new hardware and a new generation of technologies emerged over the past 10 years. In the 2020s, we can see another evolution, especially NVIDIA. In 2017, when I met Jensen, they were at a market value of about $20 billion; now it is already over $5 trillion. I don't know who bought NVIDIA stock, but if you had kept it until now, you would have made a fortune. We have also seen the operating system evolve over the past 25 years. Before, we were more rule-based and hardware-defined; today, we are data- and model-defined.
This drives the evolution of smart hardware and also drives the evolution of the platform ecosystem. We believe that in the next 10 years, or in the future, AI will be the defining trend. Now let's see what happened in the physical world. It's totally different, because the transformation there is very slow. From 1890 to 1920, steam cars, petrol cars, and diesel cars started to develop; then steam-engine cars stopped, and petrol and diesel cars developed further. But even now, 100 years later, we are still mainly about ICE or petrol cars. Hundreds of companies are working in this industry, producing different types of cars for customers. However, around the 2020s, we can see a very big change: Tesla's market value had already surpassed that of a lot of carmakers.
Also around 2020, BYD started to grow very fast. At that time, their monthly sales were similar to XPENG's, but today they sell more than 400,000 units every month, already 10 times the 2020 level. The physical world is not changing that fast, yet it is also changing at a speed beyond expectation. We also observed a very interesting trend: in the physical world, we have an engine and energy, but in the digital world, we don't have any physical fuel. In 2025, we are going to usher in a new era where the operating system can also burn fuel. The fuel here is data, and the operating system is like our engine.
We found there are two types of engines and two types of fuel, which will contribute to the formation of the digital world. Here, AI and software will transform different products, like AI cars and robots. Now machines are gradually able to understand, interact with, and change the world. This is physical AI, which will bring transformation over the next two decades. This is the key word we want to bring to you: physical AI. We believe that from this era on, AI will be deeply integrated with the physical world, and machines will gradually be capable of truly interacting with the physical world, even changing the world and generating a new world. Now let's check out a video to think about the future of physical AI.
We just quickly walked you through the differences between the physical and digital worlds, which may give rise to a complete change in the physical world. Therefore, we will introduce one of the changes, which is the operating system. One of the changes is just like the cerebellum of the brain, which is in charge of actions; it has also changed rapidly. We call it VLA. It has three major cores: the model, computing power, and data. From this slide, you can see that over the past decade, our autonomous driving has gone through three eras. In the first era, it was basically rule-based algorithms. In the second era, XPENG launched our XNGP, the highway navigation guided pilot, in March 2022. In 2024, last year, we launched our end-to-end model on the vehicle.
Actually, the end-to-end models deployed in the industry today are, to put it accurately, small models, because of certain limitations. VLA is also one of the end-to-end models, and the end-to-end small models are now gradually evolving into more powerful models, from vision to language and then to action. This model is driving our change. A standard VLA usually translates vision into language and then into action; by learning all those traffic rules and driving habits, it generates the trajectory and the speed. So you can see there are actually two translations. Following first principles, this is not the optimal solution. Can we directly translate what we see into how we act? Then we would see something totally different.
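To visualize the contrast, here is a schematic sketch with placeholder modules (not the real networks): the standard VLA makes two translations, while the vision-to-action design drops the intermediate language step.

```python
# Schematic contrast of the two designs discussed above. Each "module" is just
# a callable placeholder, not an actual XPENG network.

def standard_vla(frames, vision_encoder, language_head, action_head):
    tokens = vision_encoder(frames)        # vision -> latent
    description = language_head(tokens)    # latent -> language representation (1st translation)
    return action_head(description)        # language -> trajectory and speed  (2nd translation)

def vision_to_action(frames, vision_encoder, action_head):
    tokens = vision_encoder(frames)        # vision -> latent
    return action_head(tokens)             # latent -> trajectory and speed directly, no language hop
```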
For example, when we learn how to drive ourselves, we often say that seeing something once is more intuitive than reading a thousand words. This video is very short, just dozens of seconds. However, if you try to use text to describe the scene, it takes about 1,200 words, and even with 1,200 words you find it difficult to describe everything in the video; you will ignore a lot of details. To put that in perspective, how can we give such a large amount of data to a model to train on, and what form should the data take? Therefore, we opted for video rather than text. When we developed the VLA last year, we actually struggled, debating whether we should follow first principles and take out the language, or drastically reduce the representation of language, to directly translate vision into action.
This would become a vision-centric model of the physical world. No one in the industry has managed to actually mass-produce this kind of model. In 2024, we had heated internal discussions and decided to go for both solutions. One was the standard VLA, vision-language-action; the second solution was to remove the language and directly translate vision into action. From a technology perspective, rather than a VL model, what we wanted to do was a VA model. It was like being at the very initial stage of a physical world model. When we started this project last year, we had no confidence at all that we would achieve it, so from last year until the first half of this year, the VLA we promoted was all the standard one, because it is clear, logical, and accurate. That's why we had two teams. Dr.
Liu Xiangming led the second team to pioneer the second solution. In XPENG's history, innovation means pushing the limit. You can see that we invested a lot: since 2024, we have invested in a 30,000-card computing cluster and about CNY 2 billion in R&D expenses, without seeing any hope until the second half of this year. At that time, we had an internal executive meeting, and some executives from the autonomous driving center, the ADC, joined the meeting saying that the model had failed to work. However, just one day before that meeting, they found that the model had made a great improvement: all of a sudden, in a clip, in a scenario, it worked, and that quickly led us to the next VLA. I think it's easy to guess what happened.
So at some point in the second half of 2025, we decided to pivot to the second solution and stop the development of the standard VLA. Of course, this would have some impact on Max variant users, but we had found a brand-new direction, one with the potential to be a different, much safer, smoother, and more powerful smart driving system. We decided to go for it, stop everything we were doing, and devote everything to our second VLA. So, after seeing these different capabilities emerge, let's move beyond the VLA 1.0 we introduced before. We believe we have just pioneered a new paradigm of physical world models on mass-produced cars, one that is nearly two years ahead of other autonomous driving systems.
I believe this paradigm will be a more universal solution in the industry. When I personally used it, like some ADC colleagues who had already tested it many times, I have to be honest, I was amazed, because so many long-tail scenarios that could hardly be solved before can be easily tackled with this model. The driving experience is so smooth: during driving there is no jerk or sudden braking, even when interacting with other vehicles. It's just like a seasoned driver. Therefore, we found that when you have the model, computing power, and data all in place, you actually see the moment of emergence. After you solve one problem, you find that many legacy issues that could not be solved before can now easily be tackled with the new methodology.
It was very sudden for us, because we had months of failures and multiple rounds of discussions about whether we should disband this VLA team, because it cost a lot. I believe Brian knew it pretty well, because he often told me that we had spent another CNY 100 million on R&D without seeing anything. At that time, we had no confidence to persuade other people that we would succeed. Therefore, for the emergence of this VLA 2.0, we would like to use a short video to show you the whole new change. VLA 2.0: city, campus, and night roads; narrow roads; meeting vehicles at complex intersections; complex intersections in the night rush hour; a narrow-road construction site, finding the right timing to make the detour; a very extreme passage through a narrow way that seemed to be a dead end.
A vehicle cutting into our path, with comfortable deceleration without any water being spilled; very comfortable longitudinal acceleration and deceleration, no jerk, no accidental braking; very efficient lane changes in the city; passing through bollards at night; an intruding vehicle at night; very smooth passage among multiple vehicles; navigation-free LCC in the campus; the LCC finding the open road; harsh weather, full-scenario coverage.
Is it cool? I want to explain: for all the videos you have seen, we didn't speed anything up; it's all at real speed. For instance, there were some very tight spots along the way. Sometimes I would need to get out of the car and judge the clearance myself, maybe only three millimeters. For people, it's difficult to get through, but our VLA model can empower the car to drive through it.
This kind of feeling is like discovering a new continent. In the past, we needed to write a lot of rules and a lot of algorithms. Now it's smooth, joyful, and effortless, with a very high upper limit and very strong capability. And this is only the foundation. For XPENG VLA 2.0, we put vision at the center. It learns like a human, with minimal information loss and no translation in the middle, transforming directly from vision to action. During the process, information loss is reduced because the model sees the whole world completely; inference efficiency is also very high, and the response is quicker. As you can see, the driving experience is smoother and also safer. How did we achieve this? We used nearly 100 million clips.
This kind of data is very common among OEMs, but the most important thing about these 100 million clips is the corner cases and long-tail issues. According to our calculation, they are equal to the sum of extreme driving scenarios one could encounter over 65,000 years of driving across the whole world. When we go global, this massive data and training will empower our vehicles to be more intelligent. When you see a wider world, you can predict and prevent more situations, and you become safer and more comfortable. What's more, VLA 2.0 is still a VLA large model and also a world model; we can reach optimal decisions through understanding, prediction, and generation. For instance, on the slide you can see the inference in the cerebellum and cerebrum based on the world model.
It looks very boring, but during our daily simulation work, there is a lot of inference and decision-making. On this slide, you can see another case of inference for an upcoming scenario. This is also a long-tail scenario that we generated: the left is the original video, and the right is the generated video. You can see that we generate new environments during the simulation stage, which helps the training further. To achieve this, we have already used more than 30,000 cards in our compute cluster and a 72-billion-parameter foundation model, with one full iteration cycle every five days. I believe that next year, in 2026, we are going to reach 50,000 or even 100,000 cards. In the future AI world, we believe ultra-large AI computing power will be the cornerstone of physical AI. But is relying only on cloud computing
power enough? No, because we also need computing power at the physical vehicle level, with our chips, operators, and models working in a cycle. We use three Turing AI chips to achieve 3 to 22 times more computing power; we have already reached 2,250 TOPS, the highest effective computing power in the industry. According to our study, inference efficiency also improved by 12 times. At first, we thought it would be only one or two times better, but it is already 12 times, and I hope this can be further improved to 20 or 30 times. With VLA 2.0, we have also achieved 10 times more parameters, because this level of parameters gives rise to stronger and more unexpected features. With this slide, I want to introduce the Narrow Road XNGP that we are going to launch.
In the past, we had Highway and City XNGP, but we neglected a lot of narrow roads and campuses. Mostly in China and in the U.S., the roads are very wide and easy for autonomous driving. But in a lot of cities and countries across the world, there are many narrow roads in suburban areas, and many narrow roads also exist in overseas countries. If we don't handle this well, it will be a disaster. For example, the alleys in Beijing: sometimes my friends take me to restaurants in an alley, and my driver finds it very painful because the road is too narrow; it is very difficult to navigate through. We believe Narrow Road NGP will be a new beginning, and it is 10 times more difficult.
But if we can conquer this, what does it mean? It means the capability of our City XNGP will improve 10 times, and the safety level of Highway XNGP will increase by even 100 times. With our VLA 2.0, we achieve 13 times better average mileage per intervention on complicated narrow roads: now we only need to take over once every 260 km. This hugely improves the door-to-door experience. A lot of competitors, and our company too, already offer this kind of door-to-door driving experience, but in the past it was not that smooth. We are going to make this feature available from narrow roads to campuses and to other kinds of road conditions, and this capability can also go to the whole world, not only China.
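As a quick back-of-envelope check of those figures (assuming both numbers refer to the same complicated narrow-road conditions), a 13-times improvement to roughly 260 km per takeover implies about 20 km per takeover previously:

```python
# Back-of-envelope check of the quoted figures: 260 km per takeover after a
# 13x improvement implies roughly 260 / 13 = 20 km per takeover before.
km_per_takeover_now = 260
improvement = 13
print(km_per_takeover_now / improvement)  # -> 20.0
```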
A lot of friends here will know that our sales in Europe are very good, but we can only offer LCC in Europe, because the capability of the Max version is not yet available there. In Europe, there are a lot of narrow roads, sometimes single-lane; two-lane roads can be hard to find. And when there is oncoming traffic, you need to wait for the other car to drive through before you can pass. This is very common. Only when we master this kind of Narrow Road NGP can we give our cars the best experience in the whole world. We believe our VLA 2.0 will be a new giant leap for our company as we enter the whole world. We also made a comparison with FSD. This time, we compared with FSD version 13.2.9. Their experience is very smooth.
We also considered abnormal situations and corner cases. Our narrow-road performance is very good: our takeover rate is only 20% of FSD's, and in two of our runs there was no takeover at all. Over the whole test, our takeover count was also lower than FSD's. Next month, I'm going to the U.S. to drive the latest Tesla FSD version and see the difference then. During testing, we also identified some interesting emergent features, for instance this one: when someone waves at us, whether it's traffic police or a pedestrian, the car will stop. This is a feature we didn't set out to develop.
Another interesting feature, as you can see on the slide: when we are waiting at a red light and it is about to turn green, the car already starts moving, very slowly but very smoothly. This is not a feature we planned to develop; it also emerged from our latest technology. When we launch this officially, we will see more and more interesting and useful features. With our VLA 2.0, we are also going to offer the industry's first Super LCC, free from navigation. We were thinking: if we turn off the navigation, can we still use autonomous driving features? Now, for Chinese customers and also for overseas customers, we are developing the Super LCC. For instance, tonight you want to drive, you want to drive freely, and you don't want to use the navigation.
Even during this free driving, you can use autonomous driving on a straight road, and lane changes are also very smooth. This brand-new autonomous driving method will be compliant globally, and next year it will be ready for both Chinese and global customers. Speaking of the timing plan: whether you are a Max user or an Ultra user, you may have noticed that some of our upgrades have been a little slow recently, because we are putting our effort into switching to the latest VLA model. This month, we are going to open test drives. Today, we also have a lot of media friends here; if you are willing, we can arrange test drives. In late December this year, we are going to invite some pioneer users to experience it.
If you bought the Ultra version, you have a chance to be invited. And in Q1 next year, we are going to push it to all Ultra users of our company. Here, I want to emphasize that for XPENG VLA 2.0, we are evaluating how to migrate it to our Max version users; next year, you will have a more powerful Max version of autonomous driving. Open source. This is a very big, very different idea. We hope that XPENG's autonomous driving, the chip, the EEA architecture, and the large model can be open-sourced for our global business partners. In the near future, we welcome domestic and overseas OEMs and tier-one companies to talk with us, and we welcome you to bring this technology out of our company to other provinces in China and to other countries across the whole world.
Today, we officially announce that our VLA 2.0 will be open source, and we'd also like to announce that Volkswagen will become the launch customer of XPENG VLA 2.0. Additionally, the XPENG Turing AI chip has already secured a nomination from Volkswagen. First of all, a big thank you to Volkswagen. Over the past several years, the two companies have joined hands and made huge progress; we believe our companies support each other. Next year, XPENG VLA 2.0 will also be equipped and launched in the latest Volkswagen models. We believe the two sides can push autonomous driving technology forward for the whole world. Our VLA 2.0 is actually the cerebellum, but if we want to be more intelligent, we should also consider the cerebrum. That's why, together with the VLM, we are going to offer new features.
This combination of VLA and VLM will give you a better parking experience, and when you are on a three-lane road, you can also give instructions, for example, that you want to drive in the middle lane. We hope this works not only on the road but also in outdoor areas and other kinds of road conditions, with very low latency. Based on this VLM, we are also developing the latest AI assistant, which can offer services in multiple languages. Our Xiao P will serve you in mixed languages: not only in Chinese, but also in English, German, and other languages. Now, let's check out how Xiao P does this with a video.
[Foreign Language]
[Foreign Language] Hey XPENG, turn on the seat massage so I can relax. Okay, I've turned on the massage for the right seat in the second row. [Foreign Language] Hi XPENG, turn up the fan speed a little bit. Sure, I've adjusted the AC for you. [Foreign Language] Well, when I travel abroad, I also drive different vehicles.
When I was in Britain, the wake word on export models was hard to activate because of my pronunciation, not to mention in Germany or Russia because of the language barrier. So imagine Chinese friends going overseas who want to activate our voice assistant: now you can activate it in Chinese to access navigation, control the screen, and more. This is an important feature. We know that usually it is Android or other Linux-derived systems that power the cabin, but we believe that in 2026 there will be two operating systems: apart from Android, we are also going to have a VLM operating system, a voice-based system that supports conversation, memory, and other functions. This is still under development; we hope that in 2026 or 2027 we can show you a great leap in the VLM.
So that is the end of our first chapter. The key takeaway is our VLA 2.0, a foundation model for the physical AI world. This foundation model keeps iterating, which lets us see brand-new capabilities and emergent features. What I want to emphasize is that this model will not only be applied to our vehicles, but also to the Robotaxi, the robot, and flying cars. Therefore, we strongly believe that based on our VLA 2.0 plus the VLM, new features will emerge, including what we are going to introduce in the second chapter. The era of autonomous driving is accelerating. I believe that in the future there will be two types of cars. The first is the Robotaxi without a driver; this is what we usually call the Robotaxi, and it will be a fully shared mode.
The other type, I believe, will be a private, exclusive mode, meaning it will be open for retail customers to purchase. They will enjoy the same hardware and software as the Robotaxi, and it can also be driven by your family members; this is why we call it the exclusive mode. This L4 car will be compliant with laws and regulations, and you can drive it as if it were an L2 car. Of course, for the Robotaxi, operating licenses still need to be issued. First, let's delve into the Robotaxi. The world's first Robotaxi was put into operation in 2018, and in the past seven years it has been frequently discussed and piloted. However, the industry has yet to scale it the way NEVs or electric vehicles have scaled. I believe many of you have never ridden one yourself. Why is that?
First, it's because of the high modification cost and the many legal and regulatory limits, which lead to a small fleet size; it is hard to commercialize and profit, so companies always run relatively small fleets. Not to mention, the operation scope is also really limited. As for us, we are a company transitioning from L2 autonomy to L4 autonomy. We found that many Robotaxis actually cannot access places without navigation coverage, or they often only choose to drive on open public roads instead of narrow roads. The route planning in their autonomous driving has many blacklists and limitations. If you want an autonomous car to take you, say, to your office or to a parking space, it is very difficult for the car to actually drive to where you want.
These are all problems of the Robotaxi. If we really want to scale it for better commercialization, even expanding from China to the world, we believe the car OEM itself needs to invest in it. So here comes XPENG. In 2026, we are going to launch three Robotaxi models. These will be fully self-developed, with full-stack in-house development, and we believe these three Robotaxi models will enable more people, not only in China but across the world, to access the Robotaxi and enjoy the service. I want to quickly walk you through some of the features. First is the hardware designed for L4 autonomy. We have two hardware systems for safety redundancy, covering computing power, steering, perception, energy, braking, and communication.
We will ensure that throughout the whole drive, safety is the priority. The Robotaxi will also have higher computing power, up to 3,000 TOPS, way higher than what we currently offer: four Turing AI chips on board, with 2,250 TOPS for operations and another 750 TOPS for redundancy. Based on our VLA 2.0, we are going to train different styles of autonomous driving. One prioritizes ultimate safety, especially zero takeover; it may lose some driving efficiency, but it ensures a very comfortable, smooth driving experience, so you feel as if you are sitting in a car driven by a seasoned driver. This supports generalization, meaning it can be deployed globally. What other features can we support? We have also innovated an interaction system that can interact with pedestrians outside the car.
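A rough sketch of that compute split, with the per-chip figure inferred from the stated totals and a purely illustrative failover policy, could look like this:

```python
# Rough sketch only: four Turing chips of ~750 TOPS each (inferred from the
# quoted 2,250 + 750 = 3,000 TOPS), three serving the operational stack and
# one held as redundancy. The failover policy is invented for illustration.

CHIP_TOPS = 750
chips = ["turing_0", "turing_1", "turing_2", "turing_3"]
operational, redundant = chips[:3], chips[3:]

assert CHIP_TOPS * len(operational) == 2250   # operational budget
assert CHIP_TOPS * len(chips) == 3000         # total on board

def on_chip_failure(failed, operational, redundant):
    """If an operational chip fails, promote the spare so the stack keeps its budget."""
    if failed in operational and redundant:
        operational[operational.index(failed)] = redundant.pop()
    return operational

print(on_chip_failure("turing_1", operational, redundant))  # spare takes turing_1's slot
```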
It's an inspiration I drew from an animated film called Cars. I believe that in the future, smart vehicles will actually interact with people inside and outside the car. That's why we conducted a lot of tests on the front of the car, be it the glass, the sun visors, or the sun shields, and found that the sun visor area is the perfect place for a display. So this car can show information to pedestrians and even communicate with them; a pedestrian can tell the car where they would like to go and ask, "Can you drive me there?" There will be voice communication outside the car, powered by our VLM, and in the future you will see that the car becomes more like a human-like robot on four wheels.
Therefore, our Robotaxi will try to tackle the current bottlenecks in the industry through factory installation and scaled delivery, not restricted by region, and covering narrow roads. In 2026, our Robotaxi will start trial operations in Guangzhou and other select places. Because we are a car OEM, unlike the ADAS software providers, we also focus on hardware development. While contemplating the Robotaxi, which is really a B2B business, we questioned whether we could make a car tailored to retail customers, and whether customers would want to purchase it. We had no idea, but we wanted to give it a shot. So this is the second car I mentioned, the L4-experience car with a driver inside. This is how we shifted from Robotaxi to Robo.
The Robo will be a car for retail customers, and we are naming it as a new smart-driving variant. It shares the same origin as the Robotaxi, with the same hardware, trim, software, and safety redundancy, and probably even other unimaginable features. So among our smart-driving models, there will be three variants: Max, Ultra, and Robo. But not all cars can be equipped with the Robo configuration; we are going to see how best to try this. Probably in the future, there will be vehicles with L4 autonomy. We don't know how far this will go, but we will keep working hard on it. The XPENG Robo has safety redundancy, 3,000 TOPS of computing power, and two sets of smart driving modes: one focuses on commute efficiency, and the other prioritizes ultimate safety.
Also, if laws and regulations permit, we are going to explore more scenarios and applications; for example, the cloud-based model could empower more capabilities. We don't know yet, but we will try. And when permitted by law, could our Robotaxi pick up our seniors and kids when we are busy and bring them home? We don't know whether this can be achieved, but that day is probably not far away, and I hope for it. Also, just as I mentioned, unlike other software providers, we hope we can share this revolutionary moment. That's why we decided to open our SDK. Probably in the near future, in a small town in China, there will be a distributor or partner ordering, say, 30 XPENG Robotaxis and operating all of them in their town.
We hope to work with our partners to co-build such Robotaxi ecosystems, not only in China but across the world. If you want to start a new business, I think it would be a perfect choice for you. Today, we already have partners who are interested in it. We are delighted to announce that Amap will become our first global ecosystem partner on the Robotaxi. When we achieve Robotaxi capabilities, this ecosystem partnership will help deploy Robotaxis at the level of tens of thousands or even hundreds of thousands across the world. Amap will be our first ecosystem partner. Let's see how they view our cooperation. [Foreign Language]
A big thank you to Amap. Today we have already introduced two important things. First, VLA.
Second, based on VLA, the explorations we have already made on Robotaxi and Robo models. We hope to become the first OEM with mass-produced, factory-installed Robotaxi and Robo models, and we hope to achieve global operations and deployment and find more ecosystem partners. In 2026, we are going to launch three Robotaxi models, and we are also going to bring the Robo variant to some of our new models. After the Robotaxi and the Robo model, let's move to another species: robots. First, let's check out the video. [Foreign Language] This is the IRON robot launched at last year's AI Day. Over the past seven years, XPENG's Robotics Center has already gone through seven generations of products. XPENG's robots have actually been evolving from quadruped to human-like and then to humanoid robots.
In the third chapter of today's AI Day, I want to share with you XPENG's progress on robots. First of all, I want to talk about one question: should the robot be human-like or not? Of the past seven generations, five were actually quadruped. At first, I was very insistent on doing quadruped robots, because biped robots would not be as stable or as safe, and a quadruped robot can do a lot of things for you, whether household chores or entertainment. But as our technologies evolved, we found that quadruped robots don't have hands. Sometimes at our company we made a joke: if a bomb exploded beside you, which part of your body would you choose to lose?
Many people would choose the feet, because hands are more useful. At that time, I was thinking about how we could add hands to the robots. We could not make it like an elephant, whose long trunk can also serve as a hand. Later, we also made a pony-like design and turned its tail into a hand, but the tail always has to point upward, which is neither friendly nor easy to operate. In nature, which quadruped species have hands? Also, if we bring a quadruped robot into your home, you will find the apartment is not big enough for it, since a typical apartment is only around 100 square meters. Pets like dogs and cats will stop moving forward when they see a corner or a wall ahead of them.
But a quadruped robot cannot turn around easily; it will crash into walls or corners, so for family users a quadruped robot is very difficult to use. As our R&D developed, we predicted there would be different kinds of humanoid robots, and XPENG chose the human-like route, in fact the most human-like robots. This is our Next-Gen IRON. Why do we want it to be more human-like? Because, more importantly, if the robot is not human-like, it cannot benefit from valuable human data. A lot of robots can jump and run, but they use very different kinds of joints. Such robots differ greatly from humans, because they can perform movements people cannot.
You will think, "Wow, that's impressive." But those robots are not good at the simple things humans can do. If we make the structure totally different from a human's, we cannot use realistic human data. Also, our target scenarios, home, office, and shopping malls, are all designed around people, so a human-like form is very easy to generalize. Recently, you may have seen social media discussions about what quadruped robots can do. If someone brings a robot home, the parents and the children need to like it, and a human-like robot is more easily accepted and much easier to commercialize. Now, let's first watch the video of our Next-Gen IRON. [Foreign Language] This is the XPENG Next-Gen IRON.
[Foreign Language] Actually, I have mixed feelings now, because over the past seven years we did a lot of work to make it walk lightly and elegantly, and we are trying to mass produce IRON as well. Thank you, IRON. You can go and have a rest. [Foreign Language] This is a very difficult process, because today IRON is still at the R&D stage, but we plan to enter the mass production preparation stage, with both hardware and software, next April. [Foreign Language] The IRON you saw today walks very gently, like a model on the runway. For many robots in the industry, you can hear loud walking noise and heavy impact on the floor.
In the past, robots walked like that, but we always want to prepare for mass production, even during rehearsals. Some colleagues even asked whether there was a real human inside IRON. No, there is not. First of all, I want to tell you that we are already starting mass production preparation for our humanoid robots. We hope it can be under 1.7 meters tall, lighter, better looking, and safer for our customers, and we will also enhance the reliability of our mass production robots. This is what we are doing. Next, I want to use a video to show you the structure inside IRON.
[Foreign Language] Actually, the head of our robotics center said we are not making robots, we are making humans. I told him that he is actually creating intelligent humans. In the future, robots will be life partners and maybe your colleagues; that is why they are more like intelligent humans, with the intelligence to create a better life together with human beings. That is why the Next-Gen IRON has very flexible bones, solid bionic muscles, and very soft skin. We hope it can have a height similar to human beings and be human-like inside and out. That is why we designed a human-like spine for the Next-Gen IRON. It can bend down like a human being; I cannot touch the floor myself, so I will not demonstrate, but IRON can. The lumbar area also allows flexible movement.
Whether it is lying down, standing, getting up, or doing easy gestures and movements, all of these can be achieved on the Next-Gen IRON. In addition, IRON also has bionic muscles, so it can be more human-like, with different body shapes. You can choose a slightly fatter IRON or, like me, a slimmer IRON, or you can customize your IRON based on your preferences. We also offer full-coverage soft skin, so the robot feels warmer and more intimate; in the future, it can go into households. The skin feels very soft to the touch, and on this new-generation IRON we also added touch sensors, so different areas, like the hands, can interact through these sensors. With humanoid bones, muscles, and skin, we can say that IRON can have different body shapes and sexes.
As you can see on the slide, we have two male robots and two female robots. I expect that, just as you choose different colors, exterior, and interior when you buy a car, in the future when you buy a robot you will be able to choose the sex, the hair length, the clothing, and the intended purpose. Would you like that? If so, let's look at some hypothetical examples. If IRON wears different clothing, it becomes a person from a different profession. I believe that in the future we will have different robots: we will have humans, of course, and we will also have IRON, this kind of intelligent human. What's more, our Next-Gen IRON also has a head-mounted 3D curved display and a lot of sensors inside, including visual sensors: we have cameras, and we also have millimeter-wave radar.
We also have many very sophisticated components in the head and face, so it can show different expressions; through the display you can see IRON's emotions. We are also developing other features, such as ears and a mouth, so it is really an integration of different capabilities. We also developed bionic dynamic shoulders for our Next-Gen IRON, so our humanoid robot can stretch its shoulders like a human being and achieve the same level of flexibility. The next point is the most difficult one: dexterous hands. These are very difficult to mass produce, because the hands must be small yet as human-like as possible, and the human hand is actually the most sophisticated organ. For IRON, we developed a single hand with 22 degrees of freedom.
We also made the joints extremely compact, so a single hand can achieve 22 degrees of freedom, and IRON's fingers can hold a very small item, as you can see on the slide. In addition, IRON's movements, whether gesturing, dancing, or sitting, are very gentle and soft; we added flexibility at the tips of the feet, just like models walking on a stage. We also adopted the industry's first all-solid-state battery on IRON. A lot of people ask, "Are you talking about an all-solid-state battery or a semi-solid-state battery? And why don't you use it on the cars?" Because a robot only needs a battery of less than 2 kilowatt-hours.
I think humanoid robots will be the opportunity for us to mass produce the solid-state battery, because for humanoid robots the safety requirements are even more stringent. Cars face the roads and the outdoors, but robots are used in offices, in shopping malls, and in your household, so we need to achieve the top safety standards. In addition, we use three Turing AI chips on the Next-Gen IRON, with a maximum computing power of 2,250 TOPS. I believe this is the most powerful robot in the industry, and very possibly, next year it will also be the smartest, because with that computing power we combine three large models: VLT plus VLA plus VLM. As we all know, VLA comes from the cars, and VLM is also from the cars, but we have added the new VLT large model.
This large model is a concept first created by XPENG, and today is also the first time we announce that the VLT large model will be the core engine for the robot's autonomous action. It is like the brain of our humanoid robot for decision-making, especially in the real world: for instance, how the body is going to move, how the hands are going to adjust, and what actions to take. This VLT large model is extremely valuable. The VLA large model will also be applied to our XPENG Next-Gen IRON, but we have to say it will be much more difficult, because on a car we essentially control one body going forward, backward, left, and right, while a robot has 82 joints.
So you can imagine the enormous number of combinations, from hands to feet to shoulders to lumbar to head; that huge variety of gestures will be a challenge for us. But we firmly believe this VLA large model will be extremely useful for body control and unified intelligence. Also, last year XPENG established the first embodied intelligence data factory in Guangzhou. We believe that no matter how high your computing power is, data will be the key. How can we collect massive data over time? I think this is very important for making our robots more generalized. Allow me to keep the details confidential for now; in the future, I will disclose more and more about how XPENG uses massive data to support the mass production of our robots.
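To make the three-model stack (VLT plus VLA plus VLM) described a moment ago a bit more concrete, here is a minimal, heavily hypothetical Python sketch of one perception-to-motion cycle; only the model names and the 82-joint figure come from the talk, and every interface in the sketch is an assumption made purely for illustration.

```python
# Heavily hypothetical sketch of one perception-to-motion cycle combining the
# three large models named in the talk (VLM, VLT, VLA). Only the model names
# and the 82-joint figure come from the presentation; every interface below
# is an assumption made purely for illustration.
from typing import List, Protocol

NUM_JOINTS = 82  # joint count quoted for the Next-Gen IRON

class VLM(Protocol):
    def describe_scene(self, camera_frames: List[object]) -> str: ...

class VLT(Protocol):
    def decide(self, scene_description: str, instruction: str) -> str: ...

class VLA(Protocol):
    def act(self, decision: str, joint_state: List[float]) -> List[float]: ...

def control_step(vlm: VLM, vlt: VLT, vla: VLA,
                 frames: List[object], instruction: str,
                 joint_state: List[float]) -> List[float]:
    """One hypothetical cycle: perceive, decide, then emit 82 joint targets."""
    scene = vlm.describe_scene(frames)         # VLM: describe what the robot sees
    decision = vlt.decide(scene, instruction)  # VLT: choose the next high-level action
    targets = vla.act(decision, joint_state)   # VLA: turn it into joint-level motion
    assert len(targets) == NUM_JOINTS, "expected one target per joint"
    return targets
```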
Regarding IRON, we also want to mention the three laws of robotics, because we are now extending them. As we all know, the first law is that a robot may not harm a human, or allow a human to be harmed through inaction. The second law is that a robot must obey human instructions, unless an instruction conflicts with the first law. The third law is that a robot must protect its own existence, unless that conflicts with the first or second law. Now our company is considering a fourth law: a robot must not disclose any human privacy, because it has ears and eyes to hear and see everything around you. This kind of active safety protection will be very important for our company to develop for our customers.
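Expressed as code, the four laws can be read as an ordered priority check, as in the hypothetical Python sketch below; the law texts are paraphrased from the talk, and how a real robot would actually evaluate them is, of course, far harder than these placeholder predicates suggest.

```python
# Minimal sketch: the four laws read as an ordered priority check. The law
# texts are paraphrased from the talk; the predicates are placeholders.
from typing import Callable, Dict, List, Optional, Tuple

Action = Dict[str, bool]  # placeholder description of a candidate action

def violates_law_1(a: Action) -> bool:
    """Would the action harm a human, or allow harm through inaction?"""
    return a.get("harms_human", False)

def violates_law_2(a: Action) -> bool:
    """Does the action disobey a human instruction without law 1 requiring it?"""
    return a.get("disobeys_instruction", False)

def violates_law_3(a: Action) -> bool:
    """Does the action needlessly endanger the robot's own existence?"""
    return a.get("self_destructive", False)

def violates_law_4(a: Action) -> bool:
    """Would the action disclose private things the robot has seen or heard?"""
    return a.get("leaks_privacy", False)

# Lower index = higher priority; an action is rejected at the first law it breaks.
LAWS: List[Tuple[str, Callable[[Action], bool]]] = [
    ("law_1_no_harm", violates_law_1),
    ("law_2_obedience", violates_law_2),
    ("law_3_self_preservation", violates_law_3),
    ("law_4_privacy", violates_law_4),
]

def first_violation(action: Action) -> Optional[str]:
    """Return the highest-priority law the action breaks, or None if permitted."""
    for name, check in LAWS:
        if check(action):
            return name
    return None
```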
We all know that cars have the AEB feature: through rules and algorithms, we can protect the occupants. But can robots have a similar feature? Maybe not everyone knows how complicated robot development is; sometimes the robot suddenly goes into crazy dancing. Think about a 70 kg robot: if it kicks you, what can you do? That is why we need this kind of Active Safety Protection, so the robot never harms people. This is crucial. We have all seen, including in our own company, videos where someone kicks the robot and the robot can still keep its balance. Is this safe? No.
Because during that recovery movement, if the floor is not flat, or your pet is on the floor, the robot may step on the pet or into a pit. That is why we need to think about how to protect the pets, the people, and the conditions around the robot. All of this belongs to our working scope for IRON development.
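As a rough illustration of what such an active safety check might look like, here is a hypothetical Python sketch in which the balance controller verifies the planned foothold before committing to a recovery step; all names and functions are placeholders, not XPENG's implementation.

```python
# Hypothetical sketch of the "active safety protection" idea described above:
# before the balance controller commits a recovery step, check the planned
# foothold for pets, objects, or a pit, and fall back to a gentler strategy
# if stepping there could cause harm. All names here are placeholders.
from enum import Enum, auto

class Recovery(Enum):
    STEP = auto()       # take a step to regain balance
    CROUCH = auto()     # lower the center of mass in place
    SAFE_STOP = auto()  # settle into a compliant, low-energy posture

def foothold_is_clear(target_xy, perception) -> bool:
    """Is the planned foothold free of pets, objects, and drop-offs?"""
    return (not perception.occupied(target_xy)
            and perception.ground_is_flat(target_xy))

def choose_recovery(target_xy, perception) -> Recovery:
    # Only step if the foothold is verifiably safe; otherwise give up the step
    # rather than risk stepping on a pet or into a pit.
    if foothold_is_clear(target_xy, perception):
        return Recovery.STEP
    return Recovery.CROUCH if perception.space_below_clear() else Recovery.SAFE_STOP
```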
Next, Mr. Xiao will talk about commercialization. Many may be concerned: this is already your eighth generation of robot, so when can you actually make it a commercial product and mass produce it? This is something many people were already asking two years ago. We have been contemplating it, and we would like to share some of our insights and observations. First, should we introduce our robot into the factory to tighten screws?
We believed that would be the easiest setting for a robot to perform in, but after one year of trials, we found it is not suitable. Why? First, the most difficult part of the robot, we believe, is the hand, and when it performs work like tightening screws, the hand typically wears out within about a month. The robot has two hands, and they are very costly. Moreover, the manufacturing skill of Chinese workers differs from that of American workers, so it is actually very cost-effective to hire Chinese workers in our factory. That is why we believe putting the robot into the factory is not the best path, whether in terms of commercialization, technology, or usability. Therefore, having the robot tighten screws is not our priority; it may come later, but not for now.
That is why we cancelled it. Then what about household chores? We often see such videos, and I am also excited about that vision, because I am a somewhat lazy person and rarely do household chores myself. Could a robot do them for me? But we still believe it is not viable yet. First, because of safety: as I said, a family home usually has only about 100 square meters, full of obstacles and items stacked around the house. It is very difficult for a robot not to trip over or run into your furniture or your pets. I believe no single robotics company can truly ensure safety in the household today, so I think safety is the greatest challenge hindering that development.
The second is generalization, because every household is set up differently. How can you ensure the robot learns your setting and performs accordingly? I think it is very difficult with our current technology. That is why we decided to pause it for now, even though we may deploy it in households in the long term. After all of these considerations, we decided to prioritize commercial settings, and we expect the robot to be deployed in three kinds of scenarios. When I was in the mobile internet industry, we often talked about building walls of protection, and I think it is similar with robots; we follow the same kind of rules. We believe robots can perform work like tour guide, shopping guide, and even reception in a company.
These three kinds of capabilities are very likely to be deployed in the real world. For our Next-Gen IRON, we hope that it will not be remotely controlled; instead, it can be instructed via natural language. Of course, it needs some basic training: to teach it how to better guide people, you need to input the relevant materials into the system so that the robot can do the introductions itself. Next year, our robots are very likely to show up in XPENG stores to introduce our products; you will probably see our robots introducing the cars. But sorry, they can only introduce the car; they cannot take a test ride with you. They may also lead you or guide you around our manufacturing base.
In the future, this may create a new kind of occupation: people who teach, instruct, and manage robots. For example, one manager in charge of a group of seven robots could oversee work that dozens of people used to do. This could become a dedicated occupation in the future. Next is mass production, because we take it very seriously. After the development of our Next-Gen IRON, much of our effort will be concentrated on mass production, hoping that we can make sure our robot can see, listen, and understand; even if it performs slowly, we can ensure it is operable. And we often say it is more difficult to mass produce a robot than to build one ourselves.
We took many detours, and we would like to share some of that experience with you on this AI Day. First, when I entered the auto industry 11 years ago, we believed a car was hardware-defined: you think about how the software can fit the hardware, so software providers assist the car OEMs. However, it is totally different in robotics, which is software-defined hardware. You can see that our IRON learned to walk through reinforcement learning, taking simple steps, because at that time it could only do so with that hardware and software. But if you want to power the robot with a new model, with a different architecture and different software, the hardware may need to be adjusted accordingly.
In this field, to be blunt, many car OEMs, suppliers, and partners may find that software providers from the car manufacturing industry can hardly perform well in the robotics industry. Second, people ask what the standard for robot quality should be. From our perspective, we believe the quality requirements for robots will likely go beyond even the stringent automotive grade; we also work on flying cars, and we believe these two will be even more rigorous than the car. For example, our battery design is more stringent than that of the car, because we worry about fires in households or buildings. Not to mention that the robot has 82 joints. Imagine a car: if the engine fails, it cannot drive.
But there are 82 joints on a robot: if one fails, the whole thing may collapse. We often have situations where a joint fails, the power cuts out, and the robot simply falls over. That is why the robot has very high safety requirements, much higher than those of a car, and for perception design and domain controller design we benchmark against the car and go even higher. The third lesson we learned is that most car makers like to do integration and innovation at the same time, but for robot manufacturers it is more about fusion: fusion and innovation. How can the software be fed in and integrated so that everything is coordinated? The eyes, the hands, and the legs of the robot need to be coordinated, yet they are powered by two different systems.
It is very difficult, and that is why we require full fusion. Fusion and innovation are the foundation of robot development, and this is a great challenge. I believe that, among future robot manufacturers, many will prefer full-stack self-development, and for those who choose integration it may be very difficult to venture into this field. That is why we believe mass producing advanced humanoids is as difficult as building a Robotaxi with an immature hardware and software supply chain, just like back in 2015. That is why I took the lead to build the robotics team; so far, we have 10 R&D teams working on robot development, so apart from the robotics center, other departments also work on the R&D.
Not to mention, we have more than 20 partnering departments and over 1,000 people supporting the overall development. First, let's watch a short video to see some of our mass production scenarios. We set a very aggressive goal for ourselves: by the end of 2026, we hope to mass produce high-level advanced humanoid robots. If we can really achieve that, it will be a great challenge but also a great success for us, and we hope there will be a louder round of applause here. I truly believe our robotics teams will never fight alone, because all of the R&D teams working on robotics will join hands to tackle the hardware and software challenges, ensuring safer and higher-level mass production.
We hope XPENG will become one of the first, or even the very first, Chinese company to mass produce the humanoid robot. This is a challenging goal. Also, just like our VLA 2.0 and the Robotaxi, our XPENG IRON will open its SDK to global developers. We believe robots developed by a single company are very limited in universal usage, but if we can partner with more developers for secondary and even further development, it will definitely enrich IRON's features, better adapt it to different scenarios, and unleash greater possibilities. We are also very delighted to announce some news: I have a friend who long ago reached out to me, saying they want our IRON deployed in their workplace to help with their business.
[Foreign Language] To summarize, the latest Next-Gen XPENG IRON will be the most human-like robot, equipped with the industry-first all-solid-state battery. We hope to achieve mass production in 2026, and that more and more people will buy our humanoid robots just as they purchase our cars. So far, we have shared a lot of applications on the road and on the ground. Now, let's go to the sky and look at our latest low-altitude mobility progress. Our ARIDGE team started its R&D work 12 years ago, and you can see all the products from ARIDGE over those 12 years. When Deli founded this company, no one believed in the dream. Later, XPENG joined hands with Huitian, now ARIDGE, and still many people did not believe.
But recently, I have started to see a lot of people committed to this low-altitude economy and this kind of mobility, not only in China but also in the U.S.; we have seen the news recently. This kind of flying car will be a major direction for mobility. Today, I am very glad to bring you the latest flying car from ARIDGE. Strictly speaking, it is not a flying car but a low-altitude eVTOL. Now, let's check out the video. What you just saw is our latest-generation fully tilt-rotor flying car, called the A868. I believe it will transform future mobility, because it is more efficient than cars and more convenient than high-speed trains, and you can start your journey anytime. The prototype is already at the venue near this conference hall today; it is too big to bring onto this stage.
If you are interested, you can have a look; standing in front of it, you can feel how big this flying car is. The ARIDGE A868 has already entered the flight test phase. We aim for an actual aerospace-grade system, a range of more than 500 km, and a maximum flight speed of more than 360 km/h. It seats up to six people in the cabin, and only half a basketball court is needed for takeoff and landing, so you do not need a big airport. For the next step, ARIDGE will pursue two directions. The first is the Land Aircraft Carrier: it can fly for about 20-30 km, but not for long range. The second is our A868.
It needs a designated takeoff and landing venue, but the range is longer and it is more efficient. Both directions can be your option; they will transform our future, and this is no longer just imagination. We are ready for mass production and delivery of the Land Aircraft Carrier in 2026. Maybe in about two quarters, many of you can come for test flights, and I will also be the first to fly it during the mass production stage. I think a demo is very easy to make; we can see a lot of them around the world. But to mass produce it, we have been working for 13 years, and next year will mark the 13th anniversary of ARIDGE. Today, we also want to share different parameters with you. First, safety.
We benchmark against top flight safety standards to achieve full-domain safety redundancy; you can see we have a lot of redundancy for your safety. We also have the industry-first six-axis, six-propeller, dual-duct golden safety configuration, so if a single propeller fails, or even a diagonal pair of rotors fails, we can still achieve a very safe landing and takeoff. We also have an executive-first program at ARIDGE. You may know the story from World War II about parachutes that failed to open in the sky; afterwards, the supply companies made their own executives test the parachutes in the sky, and if a parachute did not open, the executive would die. That is why parachutes no longer have this kind of problem. Likewise, I, Deli, and the other ARIDGE executives have a hands-on commitment to safety, with a mandatory 5,000 km of flight mileage.
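To illustrate the redundancy rule just described, here is a hypothetical Python sketch of a failure check for a six-rotor layout; the rotor numbering and the pass/fail rule are assumptions used only to show the idea, not ARIDGE's actual flight-control logic.

```python
# Illustration only: a failure check for a six-rotor layout that, per the talk,
# should still land safely after a single rotor failure or a failed diagonal
# pair. The rotor numbering and the rule below are assumptions used to show
# the idea, not ARIDGE's actual flight-control logic.
from typing import Set

NUM_ROTORS = 6

def diagonal_of(rotor: int) -> int:
    """Assumed geometry: rotor i sits diagonally opposite rotor (i + 3) % 6."""
    return (rotor + 3) % NUM_ROTORS

def controlled_landing_possible(failed: Set[int]) -> bool:
    """Assume no failure, one failure, or a failed diagonal pair is survivable."""
    if len(failed) <= 1:
        return True
    if len(failed) == 2:
        a, b = sorted(failed)
        return diagonal_of(a) == b  # opposite rotors keep the thrust balanced
    return False                    # beyond that, other safeguards would take over

if __name__ == "__main__":
    print(controlled_landing_possible({2}))      # True: one rotor out
    print(controlled_landing_possible({0, 3}))   # True: diagonal pair out
    print(controlled_landing_possible({0, 1}))   # False: adjacent pair out
```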
In Dunhuang, in northwest China, we rolled out the first low-altitude tourism flight route. Dunhuang is a very beautiful area, but I had never seen that scenery from the sky. Some people say, "I have a drone to take the photos," but seeing it from the sky yourself is totally different from a drone shot. We also plan to open five more flying camps in Dunhuang and, in 2026, more than 200 flying camps in total across China. ARIDGE has also launched China's first low-altitude flying camp in Guangzhou's Higher Education Mega Center, which is very close to ARIDGE's current headquarters. In two weeks, ARIDGE will move to its new headquarters here, next to our XPENG Motors building, which will be the world's first flying car 6S store.
If you want to fly, of course you need a license, and that is why we are now introducing the first dedicated pilot license for the ARIDGE flying car. In the future, you will have not only a driving license but also a pilot license. And don't worry about the training being troublesome; let's see how you can learn to fly. A traditional cockpit has a lot of buttons and mechanical controls, and when you first enter it you feel very overwhelmed. But I am very happy that our Land Aircraft Carrier, with one screen and one joystick, can be flown with a single hand. If you can drive, it will be very easy to learn to fly. We also have a redundant console: even if the main console fails, you can still fly and land safely. So everyone is warmly welcome to try it out.
Also, we are going to open a new park across from our headquarters very soon, where you can try out our flying car. And mass production cannot happen without the manufacturing factory. Recently, the first prototype rolled off the production line in the world's first mass-production intelligent flying car factory, and we are going to deliver our flying cars in 2026. I believe it will be a brand-new experience for you, and I am very much looking forward to exploring the new future of mobility. Today, I am also very glad to announce that ARIDGE has already received 7,000 orders for the Land Aircraft Carrier. As we know, even the world's largest aircraft company cannot sell more than 800 aircraft.
But once our Land Aircraft Carrier starts mass production, we believe it will set a new record for flying car annual sales. We also firmly believe you will see ARIDGE models everywhere across the world.
Okay, a few last slides. The theme of this AI Day is emergence. As one of the earliest companies in China devoted to full-stack self-development, XPENG, as you can see, has finally had something different emerge: cross-industry, cross-domain features and technology. In the real world, we also have similar phenomena, for example singularities and black holes. When the cosmos undergoes gravitational collapse, space and time are compressed to an extreme. This gives rise to a singularity, and the singularity then generates spatial-temporal distortion, which leads to a black hole.
Therefore, I truly believe that for both the physical world and the digital world, or the integration of the two, there will be greater emergence worth exploring in the future. In this AI Day, we spent around 100 minutes introducing our latest technologies. VLA 2.0 is, I believe, the first large model of its kind to be implemented on mass-produced cars, and it will also power the Robotaxi. The era of autonomous driving will definitely race in; we will see mass production in 2026, as well as the Robo models. The Next-Gen IRON, meanwhile, will also be powered by the VLA and the VLM, as well as the Turing AI chip computing power. Underpinned by all of this, we hope to become the first company to mass-produce the advanced humanoid robot.
I will take the lead, together with the team, to ensure that mass production is achieved next year. It will not be a demo anymore, and not a performance robot; instead, it will be an application robot. And the last part is our vision for low-altitude mobility, from pure dreams to very real products. All of the things we launched today may seem cross-industry, covering software, the Robotaxi, hardware, and flying cars, but they are fundamentally the same at their core, and that is physical AI. We are very committed to performing well on the physical AI front. Therefore, based on our hardware and software full-stack self-development, XPENG hopes to become the explorer of mobility in the world of physical AI, as well as a globally visioned embodied intelligence company.
This is our vision for the next decade. After so much technical and hardcore content, we have prepared a very interesting video that I shot myself, hoping to give you a glimpse of the future mobility we envision. [Foreign Language] Oh, that's it. "I stay alone in my tower. You were just haunting your powers. Now I can see it all. They won't knock it. Tucked me out of my grave, and saved my heart from the fate of Ophelia." There are actually five actors apart from me; you can see them all on the slide. In particular, I want to highlight that the one dancing is our IRON wearing high-visibility clothes, and we did not fast-forward the video; it is at its original speed.
Therefore, I strongly believe in these bold technological imaginations. Even though they are very difficult to achieve right now, please give us some time and patience, and I think they will be brought to fruition. We are on our way to creating a better future, democratizing technology, and building a better life. These are just the last few slides. We welcome all of you, especially those joining offline, to visit our new headquarters, and those tuning in online can also take the chance to visit. This is probably the most family-friendly tech park: we have our mom's kitchen and a technology exhibition hall introduced by our robot, with a lot of hardcore content, and you can even try out some sci-fi experiences. Along with that, our XPENG Intelligent Manufacturing Base is also ready to open.
It is probably the most tech-forward and stylish new landmark for industrial tourism. You are going to see how the cars, robots, and flying cars are manufactured, and we look forward to a future in which robots perform some duties in our factory. The brand-new headquarters and the brand-new Intelligent Manufacturing Base will be open to the public in December, so we welcome your reservations and your visits. To wrap up, I would like to use a quote: "Threads of light, emerging as stars." Thirteen years ago, there was a vision in our flying car team; eleven years ago, I was pondering whether I could venture into the auto industry from the mobile internet industry; and last year, I was thinking about whether VLA could actually become a physical world model. These are all threads of light.
They are all paths that will grow into a greater dream. As long as we gather all these threads of light together to emerge as stars and reach toward the distant future, we truly believe that humans are great because of our dreams. Once again, we welcome you and thank you all for your presence. We hope that, together with friends both online and offline, we can walk into a better and greater future with physical AI. And one more thing: at this AI Day launch event, we introduced our progress and latest technology in the physical AI world, and on November 6, which is tomorrow, we are going to introduce our latest REEV model, powered by both pure electricity and a range extender system, which will usher in a new era. So we welcome all of you.
Stay tuned for our tech launch event for the X9 Kunpeng Super Range Extender System. Once again, thank you to everyone creating the excitement of AI in the physical world. Thank you all.