Rivian Automotive, Inc. (RIVN)

AI and Autonomy Day

Dec 11, 2025

Speaker 7

We're in the midst of a technology inflection point. The way that we approach AI in the physical world has shifted dramatically, and the idea of not having fully capable artificial intelligence in every domain of our lives will soon be almost impossible to imagine. I think artificial intelligence will become as accessible as running water or electricity.

Vidya Rajagopalan
Senior Vice President of Electrical Hardware, Rivian Automotive

Now, nothing about AI is incremental. I don't know if people have fully grasped it. AI is everywhere. It's in the vehicles, it's in your phones, it's in your laptops, it's in all the technology you use.

Speaker 6

Autonomous driving is one manifestation of physical AI. It could also be robots performing remote surgery, or factory automation.

Speaker 7

For us, thinking about this through the lens of a vehicle and the transportation experience, making autonomy incredibly accessible and something that you own, it's a really transformative thought. It changes how you think about vehicles, the role they play in your life, how you interact with them, the things you care about, and the cars start to shift as well.

Vidya Rajagopalan
Senior Vice President of Electrical Hardware, Rivian Automotive

There's hardly any technology platform out there that's more diverse and more complex. It's got every type of computing, every type of sensing, all in it.

Speaker 6

The human body has multiple sense organs. We need to enable that AI with a sufficient amount of sense organs.

Speaker 7

You have orders of magnitude more data being brought in, and the quality of the data is orders of magnitude more powerful. As we look at this introduction of much higher levels of capability when it comes to self-driving, being able to decide whether or not you want to drive the vehicle or the vehicle to drive you is a really powerful choice. There's obviously an aspect of freedom and personal empowerment that vehicles give you, and I don't think that goes away, but I think it's about meeting people where they are.

Vidya Rajagopalan
Senior Vice President of Electrical Hardware, Rivian Automotive

The most precious thing that we all have is time. I think what autonomy does is really give you some of that time back.

Speaker 6

Once you realize cars can drive themselves, will you settle for anything less? This is just the beginning. The gap between traditional architectures and new modern AI-defined architectures will exponentially increase with all the capabilities that AI unlocks.

Speaker 7

Once you start using it, you don't stop. This is one of the most important technology shifts in transportation since the beginning of the automobile. I think we're witnessing the leap.

Moderator

Please welcome RJ Scaringe, Founder and Chief Executive Officer of Rivian.

RJ Scaringe
CEO, Rivian Automotive

We are incredibly excited to host everybody here today. We're in Palo Alto, which is the hub for our autonomy and our technology development teams. AI is enabling us to create technology and customer experiences at a rate that is completely different from what we've seen in the past. If we look forward three or four years into the future, the rate of change is an order of magnitude greater than what we've experienced in the last three or four years. Directly controlling our network architecture and our software platforms in our vehicles has, of course, created an opportunity for us to deliver amazingly rich software, but perhaps even more importantly, this is the foundation of enabling AI across our vehicles and our business. I'd like to talk about autonomy first. The field of autonomy really started about 20 years ago.

Up until about the early 2020s, the approach was centered on a rules-based environment where a set of perception sensors would identify and classify objects and hand those classified and vector-associated objects to a planner built around a human-defined rules-based framework. A few years ago, it became clear the approach to autonomy needed to shift. With innovations around transformer-based encoding and the design of large parameter models, the approach has moved to building a neural net-like understanding of how to drive instead of following a classical rules-based approach. Recognizing this massive shift in how we approach autonomy, in early 2022, we began the process of a clean sheet design of our platform. The first embodiment of this work was in our Gen 2 R1 vehicles, which we launched in mid-2024.

With this updated platform, our Gen 2 vehicles now have 55 megapixels of cameras, five radars, and run on an inference platform that was a 10x improvement over our Gen 1 vehicles. This platform was designed around an AI-centric approach, and with the deployment of our Gen 2 R1s, we began the process of building our data flywheel to grow and build our Large Driving Model. Now, because this AI-centric approach represents a model trained end-to-end through the millions and millions of miles driven on our vehicles, enhancing the perception platform or improving the compute is accretive to the capabilities of the model. Meaning, the model only continues to get better as the perception and as the compute platform improve, so you can sort of think of it like this.

If you learned to drive with bad vision and were suddenly able to put on glasses and see much better, then went even further and supplemented that with the new perception modalities of radar and LiDAR, and then your compute, here metaphorically your brain, expanded in capability by an order of magnitude, you wouldn't forget the things you'd learned, the rules of the road, how to operate a vehicle, but your ability to understand nuance, to respond to complex situations, and to perceive the world in environments with poor or limited visibility would improve dramatically. Our approach to building self-driving is really designed around this data flywheel. We have a deployed fleet with a carefully designed data policy that allows us to identify important and interesting events that we can use to train our large model offline before distilling the model back down into the vehicle.

Now, while our R1 Gen 2 vehicles and the demo some of you are going to be trying today use our Gen 2 sensor set and, of course, the associated compute platform, over the last few years, we've also been developing our substantially enhanced Gen 3 platform. And this will underpin a massive leap forward with R2. Starting in late 2026, our Gen 3 autonomy platform will include 65 megapixels of cameras, a robust radar array, and a front-facing long-range LiDAR. And at the core of this platform sits our first in-house Rivian Autonomy Processor. Now, the first iteration of our in-house inference platform includes a neural engine with 800 TOPS. It's optimized to support camera-centric AI in the physical world and enables a dramatic expansion of Rivian's autonomy capabilities.

When integrated into what we call our Gen 3 autonomy compute platform, which we'll go through later in detail, it will deliver 1,600 TOPS. The effectiveness and efficiency of our in-house processor have been a core focus in its development. Our Gen 3 computer is capable of processing five billion pixels per second. Now, you've heard me say this a couple of times. We've designed this entire architecture around an AI-centric approach where the data flywheel of our deployed fleet helps the model get better and better through reinforcement learning. Not only does this sensor set enable a much higher ceiling than what we have on our vehicles today, it also makes the platform much better suited to building our model. We're going to continue to see improvements on our platform.
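
As a rough sanity check on the five-billion-pixels-per-second figure, you can work backwards from the stated 65 megapixels of cameras; the frame rates below are assumptions for illustration, not Rivian-stated specs:

```python
# Back-of-envelope check on the "five billion pixels per second" figure.
# The 65 MP total comes from the talk; the frame rates are assumptions.
pixels_per_frame = 65 * 1_000_000   # stated Gen 3 camera total

for fps in (30, 60, 77):            # hypothetical camera frame rates
    pixels_per_second = pixels_per_frame * fps
    print(f"{fps:>3} fps -> {pixels_per_second / 1e9:.3f} billion pixels/s")
```

At an assumed 77 fps, the camera set alone would account for roughly 5 billion pixels per second, with radar and LiDAR data coming on top of that.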

Later this month, we'll be issuing an over-the-air update to our R1 Gen 2 customers, and that will dramatically expand the existing hands-free capability, going from less than 150,000 miles of roads to more than 3.5 million miles of roads in North America, and again, this is just a step in a series of steps. Starting in 2026, we'll begin rolling out point-to-point capabilities in which the vehicle can drive address to address. What that means is you can get into the vehicle at your house, plug in the address to where you're going, and the vehicle will completely drive you there. Now, some of you are going to experience this today in our R1 vehicles, and of course, this will be embedded in R2. Now, as I described, the Gen 3 hardware architecture launching in 2026 expands the ceiling of what we can achieve.

The next major step beyond point-to-point will be eyes-off, meaning you can navigate point-to-point with your hands off the wheel, but importantly, your eyes off the road. And this gives you your time back. You can be on your phone or reading a book, no longer needing to be actively involved in the operation of the vehicle. And following eyes-off, the next major step will be personal Level 4. And with this, the vehicle will operate entirely on its own. This means it can drop the kids off at school or pick you up from the airport. It allows the vehicle to really be integrated into your life in ways that we can't even imagine and haven't seen yet today.

Now, while our initial focus will be on personally owned vehicles, which today represent the vast majority of miles driven in the United States, this also enables us to pursue opportunities in the rideshare space. Now, beyond self-driving, we've also created what we call the Rivian Unified Intelligence. This AI backbone exists across our vehicles and across our entire business. We've talked for a long time about software-defined vehicles, which are really the foundational building block for an AI-defined vehicle, where every part of the vehicle and the overall experience is designed around AI, from our Rivian Assistant to enabling our direct-to-consumer sales and service model, as well as our future manufacturing infrastructure. Now, as I said at the start, we could not be more excited about what we're building, and we have a lot of details to show you here today.

With that, I'm excited to introduce Vidya to talk about our hardware platform.

Vidya Rajagopalan
Senior Vice President of Electrical Hardware, Rivian Automotive

Thank you. Thank you. My name is Vidya Rajagopalan, and I lead the electrical hardware team here at Rivian. My team is responsible for electrical content in the vehicle, ranging from our in-house 5 nm silicon that you just heard about that operates at voltages below a volt to the power electronics for electric motors that operate at 400 volts and lots of things in between. One common thread that runs across all these designs, beyond the fact that it involves the transport of electrons, is a Rivian ethos of vertical integration. At Rivian, we have chosen to vertically integrate critical pieces of technology that allow us to differentiate ourselves over time. We started this journey as a startup when we consciously chose to build our ECUs in-house.

Last year, at Investor Day, we shared how this journey helped us to get to an in-house developed zonal architecture far ahead of other OEMs. Today, I'm here to talk to you about our autonomy hardware system, which is similarly very vertically integrated. As RJ shared earlier today, we will be launching our Gen 3 autonomy system late next year on the R2 vehicle platform. The hardware enabling it focuses on three main areas of leadership: sensors, compute, and the overall product integration. Let's start with our sensors. At Rivian, we have a multimodal sensor strategy that provides a rich and diverse set of data for our AI models to operate on. On the screen behind me, you can see the feed from all our sensors on an R2 vehicle.

On an R2 platform, much like the R1 before it, we have 11 cameras providing a total of 65 megapixels of data. That is 10 megapixels over and above what we had on R1. The cameras provide an extremely rich set of two-dimensional data and help us see the world around us. But cameras alone have some shortcomings: they do not perform well under non-ideal lighting conditions, such as low light, excessive light, or fog. And so, much like the R1 platform again, we still carry five radars: one front-facing imaging radar and four corner radars. By using radio frequencies, radars are able to see in total darkness while also providing the depth and velocity of objects in their path. Our corner radars on R2 are further improved. They support a dual mode of operation: short range and long range.

In short range mode, they have very high spatial resolution, which lets us delete the ultrasonic sensors on R2. That's right. We add sensors, but we also delete them when it makes sense. And now, for the first time on R2, we're adding a third sensor, the LiDAR. The LiDAR is an optical sensor, but its strength comes from the fact that, unlike the camera, the LiDAR has an active light source, enabling it to see much better in the dark. Another advantage of the LiDAR is that it can provide a three-dimensional view of the world, unlike cameras, which we know provide a two-dimensional view, requiring the AI models to infer the depth, which they do, but with a lot less accuracy.

So, in summary, the camera is the main workhorse of our sensor suite, generating the bulk of the data fed to the models, but the radar and LiDAR are critical to addressing the edge cases, which would otherwise create the long tail of problem cases. So, why do we choose to introduce LiDAR now? It turns out that there are three main factors that make this the right moment to incorporate LiDAR: cost, resolution, and size. About 10 years ago, LiDARs used to cost in the tens of thousands of dollars. Today, you can get a very good LiDAR for several hundred dollars. The resolution of LiDARs has similarly improved tremendously. The picture on the left here shows data from a 2016-era LiDAR, whereas the picture on the right shows data from the actual LiDAR we use on our R2 vehicles.

As you can see, the R2 LiDAR data is much, much richer. Today's automotive LiDARs have point cloud densities on the order of 5 million points per second, which is about 25 times better than what we could get 10 years ago. Finally, today's LiDARs are no longer the large mechanical spinning beasts of the past. The LiDAR of today is more compact and more easily integrated into a vehicle. So, let us take a look at the LiDAR integration in an R2. There you go. This is one of our prototype R2 vehicles. From afar, it looks the same as the R2 many of you have seen and come to love. But if you zoom in closer and look up front, you can see the LiDAR.

What you see is a seamless integration, with no sign of the unsightly taxi-cab-style bump or the teardrop structure more commonly associated with LiDAR integrations. Our studio and design teams worked very closely with the supplier to shape the face of the LiDAR so that it blends in beautifully with the R2. And now, if you zoom out and look at the vehicle from the side, voilà, you would not even know it had a LiDAR. And by the way, this LiDAR integration is camera safe. It will not burn your phone's camera, so don't worry about it. So, let's move on to compute. Before we get too deep, though, it's important to address why we chose to build in-house silicon. It's a non-trivial development effort, as anyone who has been involved in or observed chip development efforts will know.

It's time-consuming, and it requires a world-class team. The reason for doing it, though, ties back to the same reasons for building our own in-house ECUs, which are velocity, performance, and cost. With our in-house silicon development, we're able to start our software development almost a year ahead of what we can do with supplier silicon. We actually had software running on our in-house hardware prototyping platform well ahead of getting first silicon. Our hardware and software teams are actually co-located, and they're able to develop at a rapid pace that was just simply not possible with supplier silicon. All of this means we're able to get to market sooner with the most cutting-edge AI product. Secondly, we understand our application and our vehicle architecture thoroughly and are able to optimize our silicon for our use case.

Note, we don't just design for today's use case; we design with headroom for the models of the future. By building purpose-built silicon, we do not carry the overhead that comes from leveraging a design that was built for some other task and repurposed for autonomous driving. We built this silicon to do a really good job at autonomous driving and physical AI problems. All of this enables us to get the best performance for dollars spent. Finally, it's all about cost. When we design in-house, we're able to get the best cost and power points. The cost reductions from our design come from the fact that it is optimized for our use case, and not just the chip use case: we look at the whole vehicle use case. There is also a meaningful reduction in supplier margins.

Now, join me as we go look for the silicon inside the R2 vehicle. Our Gen 3 autonomy computer is the next step in our vertical integration journey and features our very own Rivian-designed custom silicon. It is a highly integrated solution, as you can see. There is very little on the board beyond the two instances of Rivian silicon, power supply, and passives. The hardware and software on this computer are fully designed and developed by Rivian. This computer achieves four times the peak performance of our Gen 2 computer while improving power efficiency by a factor of two and a half. So, let's go take a look at the chip. The Rivian autonomy processor, or RAP1, as we call it, is the first in a series of silicon built for physical AI. It is actually much more than one piece of silicon.

It's a multi-chip module, or MCM, that integrates Rivian silicon and memory die. Our custom Rivian silicon is produced on a leading-edge TSMC 5 nm automotive process. The star of the die, of course, is a Rivian-designed neural engine, which is capable of 800 sparse Int8 TOPS. TOPS stands for trillion operations per second and is a common measure used to assess AI performance. The chip was also designed with the intent of providing different cost and performance points. We can put multiple RAP processors together in a system, and they can talk to each other via a custom high-speed link we call RivLink. So, this gets fun. Let's take the lid off RAP1 and look under it. What you see is the RAP1 SoC in the middle, surrounded by three memory die spread across two sides.

This allows for three independent LPDDR5 channels, but more importantly, allows for very tight integration between the SoC and memory, enabling a very clean data eye between them, which in turn then enables high memory bandwidth. With RAP1, we're one of the first to introduce multi-chip module packaging for high compute applications in automotive. This is not to be confused with Systems in Packages, or SiPs, which have existed in automotive for a very long time. So, it is well known that memory bandwidth is key for AI applications, and this tight coupling enables us to achieve a net bandwidth of 205 GB per second. The MCM design also enables us to significantly simplify the design of the PCB.

The PCB that it sits on no longer has to accommodate DRAM chips with critical timing constraints, which means it can be smaller, simpler, and implemented with fewer layers, all of which results in a meaningful cost reduction. In summary, using an MCM-style package enables us to achieve higher bandwidth and lower cost. The SoC itself is designed to solve the needs of autonomous driving looking ahead. As mentioned earlier, the star of the show is a Rivian-designed neural engine capable of 800 Int8 TOPS. In addition to the neural engine, the SoC has a plethora of other blocks that are required to complete the system. The application processor complex is implemented using 14 power-efficient Arm Cortex-A720AE cores. This allows us to leverage the rich open-source software ecosystem that comes along with Arm processors.
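
As a plausibility check on the 205 GB/s figure from the MCM discussion above: the per-channel bus width and transfer rate below are assumptions chosen for illustration, not confirmed Rivian specs; only the three-channel count is stated in the talk.

```python
# One plausible LPDDR5-class configuration that lands near 205 GB/s.
# Channel width and speed grade are assumptions for illustration only.
channels = 3                # stated: three independent LPDDR5 channels
bus_width_bits = 64         # assumed per-channel width
transfers_per_sec = 8533e6  # assumed LPDDR5X-class transfer rate

bandwidth_gb_s = channels * (bus_width_bits / 8) * transfers_per_sec / 1e9
print(f"{bandwidth_gb_s:.1f} GB/s")
```

Three 64-bit channels at an 8533 MT/s speed grade would give about 204.8 GB/s, consistent with the quoted net bandwidth.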

We will be the first OEM to introduce the Armv9 compute platform for automotive, using the Cortex-A720AE in production vehicles. In addition to an application processor complex, an SoC for autonomous driving applications needs a high-availability safety island and compute that is real-time capable. Our SoC implements both a safety island and a real-time processing unit, which are built using eight Arm Cortex-R52 cores, and finally, we have all the other pieces you would need for sensor processing, including an image signal processor, encoder, GPUs, et cetera. The Rivian neural engine itself is designed to implement state-of-the-art deep learning models for perception, control, and planning. It is flexible and supports mixed-precision data formats. The neural engine has native support and hardware for the latest and greatest AI model technology for inference.

An inference chip today must support transformers and support them really well, and that is what we do. We also include support for all types of attention, such as multi-headed attention, deformable attention, and more. But we don't just implement transformers. The hardware has a host of other capabilities. Some examples include support for non-linear functions, such as softmax, and of course, we do simple networks, such as CNNs. We also have special hooks in the hardware to support LiDAR and radar processing, both of which can be very unstructured, unlike transformers and CNNs. Earlier, I talked about designing an MCM package with a view to optimizing for memory access. Well, we didn't just stop there. We looked at memory very holistically. Our neural engine, for example, supports weight decompression as another way to relieve the pressure on memory bandwidth.
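
For readers unfamiliar with the non-linear functions mentioned above, softmax is the normalization step at the heart of attention; a reference implementation is only a few lines. This is a generic sketch, not Rivian's hardware algorithm:

```python
import math

def softmax(logits):
    """Numerically stable softmax: the non-linear function named in the
    talk, shown here as a generic reference sketch."""
    m = max(logits)                       # subtract max to avoid overflow
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Turns raw scores into a probability distribution that sums to 1.
print(softmax([2.0, 1.0, 0.1]))
```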

We don't just enable running one model at a time. We can support the concurrent execution of up to four models at any given time. All of this would be irrelevant if we didn't have the software and tools to actually exploit the hardware. We made a significant investment in the development of tools and a middleware stack that can exploit the power of RAP1. The entire software stack is fully developed in-house. Our in-house tools include an in-house compiler that can take standard models and generate code targeting our neural engine. We also support profiling tools that can help users optimize their code. And finally, we have an in-house middleware stack that enables us to write application code that is target agnostic. We use that same middleware stack on our Gen 2 hardware platform, and we will be using it again on our Gen 3 platform.
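
The target-agnostic middleware idea can be pictured with a tiny backend-registry sketch; the class and backend names here are hypothetical, not Rivian's actual APIs:

```python
# Hypothetical sketch of target-agnostic middleware: application code
# calls one API, and a registry maps it onto whichever hardware
# generation is present. All names are illustrative, not real interfaces.
class BackendRegistry:
    def __init__(self):
        self._backends = {}

    def register(self, target, runner):
        self._backends[target] = runner

    def run(self, target, model, inputs):
        return self._backends[target](model, inputs)

registry = BackendRegistry()
registry.register("gen2", lambda model, x: f"{model} via supplier SoC on {x}")
registry.register("gen3", lambda model, x: f"{model} via RAP1 on {x}")

# The application call site stays identical across hardware generations.
print(registry.run("gen3", "drive_model", [1, 2, 3]))
```

Swapping the target string is the only change needed to retarget the same application code, which is the property the talk attributes to the in-house middleware stack.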

So, what makes silicon for physical AI different from general silicon targeting inference is the importance of functional safety. RAP1 was designed from the get-go to factor functional safety into every block of the design. We adhere to the ISO 26262 scheme for risk classification, also called Automotive Safety Integrity Levels, which is a mouthful, which is why most people call it ASIL. Every block is designed to the appropriate ASIL level based on the scheme, and then hardware and software are implemented to ensure that the level is achieved. Even our chip-to-chip interconnect, the RivLink, is protected using this scheme. So, in some cases, it means that you actually put extra redundant hardware in the chip, which does the same function twice, and you cross-check the results.

In other cases, it means you put ECC on memories instead of the parity that a lot of other chips would use. So we have a lot of hardware hooks to make sure that this chip is really functionally safe. And it doesn't stop with the hardware; we also have software that runs on the chip when it is operating in the vehicle: checks that run at key-on to make sure the chip is still functionally safe, and that run periodically to complete the whole solution. The RAP1 chip is not meant to be just one iteration or a single instantiation. It's really designed to be scalable.
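
The redundant-hardware-plus-cross-check pattern described above is essentially lockstep execution. A toy software sketch of the idea, with illustrative fault handling rather than Rivian's implementation:

```python
# Toy sketch of the dual-redundancy pattern: compute the same function
# twice on notionally independent hardware and cross-check the results.
def lockstep(fn, *args):
    primary = fn(*args)     # first hardware instance
    checker = fn(*args)     # redundant instance doing the same work
    if primary != checker:  # a mismatch means a fault must be signalled
        raise RuntimeError("lockstep mismatch: raise a safety fault")
    return primary

print(lockstep(lambda a, b: a + b, 2, 3))
```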

While the first instantiation is a two-chip solution targeting autonomy in the R2 vehicle platform, it can be easily extended to solve other physical AI problems, such as in robotics. It can scale down to a single chip solution for low cost or scale up to multiple chips for more performance. RivLink was specifically designed to allow multiple RAP1 chips to talk to each other via a high-bandwidth low-latency interface. At data rates of up to 128 Gb per second, RivLink allows sensor data from one SoC to be seamlessly shared with other SoCs. The scalability doesn't just end there. RAP1 was designed to also be flexible in configuration. While the system to be deployed in R2 is liquid-cooled, we have demonstrated that this can be configured as an air-cooled system.

I'm happy to share that we have successfully demonstrated that our silicon is robust and meets the performance goals we set out at the start of the project. While peak TOPS are useful to indicate the capability of the hardware, a more useful measure is perhaps the ability of the system to process sensor data. We have shown that our third-gen autonomy hardware system is capable of processing 5 billion pixels per second of sensor data. We are very proud to be at the leading edge of multimodality sensing and to be continuing our trajectory of vertical integration with our RAP1 chip and Gen 3 autonomy computer. We expect that at launch in late 2026, this will be the most powerful combination of sensors and inference compute in consumer vehicles in North America. We are now actively testing the silicon in systems and vehicles.

For those of you attending this event in person, you'll get a chance to see some of our subsystem test boxes, which test the entire hardware-software configuration. We have also integrated the hardware into our R2 vehicles and are continuing to test it extensively. I will now hand it over to James, who will show you how his team is continuing to improve the autonomy experience for our customers and how he plans to harness the power of RAP1 to make autonomy better. Over to James. Thank you.

James Philbin
VP of Autonomy & AI, Rivian Automotive

Okay, thanks, Vidya. So, Vidya just discussed all the amazing autonomy sensors and compute that we'll have on R2. And now I'll go into detail on some of the software that runs on them and powers the Rivian Autonomy Platform. Firstly, our Large Driving Model is trained end-to-end on millions of miles of driving sequences collected across our fleets.

That's directly from pixels, radar returns, and LiDAR points to trajectories. This Large Driving Model uses state-of-the-art techniques based on transformers, autoregressive prediction, and reinforcement learning, which turbocharges our velocity by allowing us to leverage innovation from the world of large language models. It's also built entirely in-house, and this gives us unprecedented flexibility in being able to change all parts of the stack. That means we don't need to coordinate with tier ones and tier twos to make changes. Consequently, our features improve with every update. Finally, and most importantly, the autonomy platform is built on a data flywheel, where growth in vehicle fleets and feature adoption drives improvements in autonomy that compound over time. So, let's look under the hood now and discuss the data flywheel in more detail. We'll start with a multimodal onboard model that runs on every customer's vehicle.

The goal for our onboard sensing stack isn't just human level, it's superhuman level. Multiple modalities enable that, allowing our vehicle to see way beyond what a person can. Through end-to-end training, the sensor data is early fused into a singular world model, a system where the sensors complement each other. They don't fight against each other. Just like being able to hear can make you a better driver, multiple sensors can make Rivian Autonomy better, enabling enhanced precision and more confident predictions. With more sensors comes richer, higher-fidelity data. More sensing modalities allow us to achieve the same level of accuracy as a unimodal system with much less data, or to surpass the unimodal system with the same amount of data. So, it's a very efficient approach. Let's see how this works in practice by visualizing the output of early fusion.

So, of course, we start with the cameras. Now, as Vidya mentioned, our cameras are really good. Some of the highest combined megapixel counts and dynamic ranges of any vehicle on sale today. And what you can see here is that when cameras clearly sense things, the system works very well. In fact, cameras alone can handle most autonomy tasks most of the time. But for full autonomy, most of the time isn't enough. Autonomy needs to work all of the time on a moonless night, in the snow, and here in the fog. And in those cases, cameras alone don't cut it. If we can't sense something, we can't expect the system to handle it. And so, we add radar to the mix. This is where we are with every Gen 2 R1 vehicle today. You can see that we're now able to detect much more.

The system can detect more occluded objects, can assign better depths to objects, and is better at estimating object velocities. The confidences and redundancies are also much greater than with a camera-only system. But to unlock full autonomy, we need to go further. Adding LiDAR creates the ultimate sensing combination. It gives the most comprehensive 3D model of the space the vehicle is traveling through. And as Vidya showed previously, the combination of all three sensors identifies more objects and can detect things more quickly. This trinity of modalities enables the autonomy features that RJ was mentioning, such as eyes-off driving and personal L4, by increasing perception quality, safety, and redundancy. Now, our onboard network is designed from the ground up to flexibly incorporate new modalities and new sensors. And together with our in-house silicon team, we're co-designing this network to run optimally on our chip.

So, over here, you can see where the sensors and modalities are fed into the model. Each pixel, each radar return, each LiDAR return is encoded, projected, and then combined into a geometric feature space shown here in the center. And this is where the fusion happens, optimally and automatically learned through end-to-end training. There's no extra complexity added, and there are no handwritten rules that need to adjudicate. The network has figured out the best way to combine this information. And so, what we're visualizing here are the semantic projections of these fused feature maps. And although this space is very abstract, and humans were never really meant to see it, you can already see elements here, such as other agents and lane lines showing up. So now, this whole fused tensor is fed into our transformer-based decoders to produce the world model.
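
The geometric feature space James describes can be pictured with a drastically simplified sketch: projecting returns into a shared bird's-eye-view grid. In the real system the fusion is learned end-to-end; the grid size, resolution, and points below are made up for illustration.

```python
# Drastically simplified sketch of projecting sensor returns into a
# shared bird's-eye-view (BEV) grid. Real early fusion is learned
# end-to-end; everything here is illustrative.
GRID_CELLS = 10      # 10 x 10 grid
CELL_SIZE_M = 1.0    # metres per cell

def to_bev(points):
    """Accumulate (x, y) returns into an occupancy-count grid."""
    bev = [[0] * GRID_CELLS for _ in range(GRID_CELLS)]
    for x, y in points:
        col, row = int(x / CELL_SIZE_M), int(y / CELL_SIZE_M)
        if 0 <= row < GRID_CELLS and 0 <= col < GRID_CELLS:
            bev[row][col] += 1
    return bev

lidar_points = [(1.2, 3.4), (1.8, 3.9), (7.5, 0.2)]
bev = to_bev(lidar_points)
print(bev[3][1], bev[0][7])   # two nearby returns share one cell
```

Once camera, radar, and LiDAR features all land in a grid like this, downstream layers can consume them as one tensor, which is the essence of the early-fusion design described above.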

From the same input, the network is trained to generate all of these different outputs for the world around the vehicle. So that includes other objects and agents shown here in the top left. It also includes dense 3D occupancy to handle short-range maneuvers and narrow negotiation. And here's where we estimate the local map around the vehicle. Note that this is quite different from typical robotaxi efforts, where the map is pre-generated offline and then localized online. Those maps are expensive to generate and hard to maintain. But just like a human driver can navigate a road they've never seen before, our local map is produced and accumulated directly from what the vehicle perceives. And in the bottom right here, finally, we have estimated trajectories. These represent the model's best estimate of how to proceed through the scene.

This output will be a key technology enabler for point-to-point driving, and I'll discuss more on this piece later. So now we're moving on to the next stop in our flywheel, the Autonomy Data Recorder. This is the system that turns real-world driving into data. Essentially, our entire Gen 2 fleet becomes a huge queryable dynamic database of driving scenarios. The data recorder runs trigger code that can fire off any set of signals seen by the world model: jaywalking pedestrians, red light runners, large animals in the road. All of these are examples of scenarios you can mine with triggers. We can also run more general triggers, such as finding divergences between the human-driven trajectory and our predicted trajectories. These might indicate areas, for example, where the model could be improved. We can even push new triggers live to our fleets outside of the usual OTA cycle.

This allows us to capture the data we need on demand and with minimal turnaround time, greatly speeding up development. That's a huge driver of developer productivity. As soon as an engineer wants to find more scenarios of an event, they can mine for them immediately. And because ADR is so selective, it's also very efficient. The vast majority of boring driving data is never captured, never uploaded, and never trained on. Now, once a trigger fires, all the sensor data before and after the event is captured, tagged, compressed, and uploaded. And that data is then immediately available to engineers. These scenarios can be used for model training, for evaluation, or for replay through our simulator. Here you can see examples of scenario clusters found automatically by our ADR system. We have examples of environmental conditions, such as dusk and night.
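A minimal sketch of how such a trigger-plus-capture-window recorder might work; the divergence trigger, thresholds, and frame format below are hypothetical stand-ins, not Rivian's actual ADR code:

```python
from collections import deque

# Hypothetical trigger: fire when the human-driven position diverges
# from the model's predicted position by more than a threshold.
DIVERGENCE_M = 1.5   # invented threshold in meters
PRE, POST = 3, 2     # frames kept before / after the event

def divergence_trigger(frame):
    return abs(frame["human_pos"] - frame["predicted_pos"]) > DIVERGENCE_M

def record(stream, trigger):
    """Keep a rolling pre-event buffer; on trigger, capture it plus the
    next POST frames. Everything else is discarded on-vehicle."""
    buffer, clips, clip, remaining = deque(maxlen=PRE), [], [], 0
    for frame in stream:
        if remaining:
            clip.append(frame)
            remaining -= 1
            if remaining == 0:
                clips.append(clip)
                clip = []
        elif trigger(frame):
            clip = list(buffer) + [frame]   # pre-event context + event
            remaining = POST
        buffer.append(frame)
    return clips

stream = [{"t": t, "human_pos": p, "predicted_pos": 0.0}
          for t, p in enumerate([0.1, 0.2, 0.3, 2.0, 0.2, 0.1, 0.1])]
clips = record(stream, divergence_trigger)
print(len(clips), [f["t"] for f in clips[0]])
```

The selectivity is the point: only the window around the firing trigger is kept, so the boring majority of the drive never leaves the vehicle.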

We have examples of map-based scenarios, such as tight turns and merges, and also agent-based situations like animals, bike racks, and semi-trucks. After upload, all of these curated scenarios are immediately available on the Rivian Cloud, so as the fleet expands and adoption increases, the size of our knowledge base is growing and compounding. From next year, this growth will obviously be further accelerated by the additional volume of R2. Now, all of this data is stored securely, and we don't associate any sequences with your VIN or, if you select it, with your home or your place of work. And through ADR, every sequence is already tagged without further processing. Finally, because we store all sensor modalities, the data is incredibly rich and complete. This allows us to auto-label most sequences using large offline models, which would be too slow to run on board.

In fact, the vast majority of our training data today is auto-labeled. That's massively more efficient than using human annotators. So now let's talk about ground truth fleets. You may have seen other autonomy providers' ground-truth vehicles. They all typically have multiple non-production sensors, such as LiDARs and other equipment, strapped to the roofs and sides of the vehicles. And these fleets are incredibly valuable for training perception systems. The LiDAR data is so crisp that it's essentially used as ground truth to train the other production sensor sets. But because they're prototypes, these fleets are typically small, numbering in the tens to hundreds, represented by the dots on this slide. In contrast, for the LiDAR-equipped R2, every vehicle will become a ground-truthing vehicle. That's orders of magnitude more data than other OEMs.

It's an incredible force multiplier for better and richer training data that massively accelerates our progress. So let's see how we use all of this data to benefit our customers. This happens in the Large Driving Model. The Large Driving Model, or LDM, is an end-to-end model from sensors to driving actions. And it's based on many of the same technologies used in large language models. So LDM uses neural net transformers for processing, just like with LLMs. And the Large Driving Model uses tokens for training, also just like LLMs. But instead of thinking about these tokens as words, they're actually small parts of trajectories that are jointly predicted and assembled together. And the Large Driving Model also uses reinforcement learning, just like state-of-the-art large language models.
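To make the trajectory-token analogy concrete, here is a toy tokenizer that maps a speed profile to a tiny motion vocabulary. The bins and token names are invented for illustration and bear no relation to the real LDM vocabulary:

```python
# Invented four-token motion vocabulary, analogous to word tokens
# in an LLM but describing small parts of a trajectory.
TOKENS = {0: "hold", 1: "creep", 2: "cruise", 3: "brake"}

def tokenize(speeds):
    """Map per-step speed (m/s) onto discrete motion token ids."""
    ids, prev = [], speeds[0]
    for v in speeds[1:]:
        dv = v - prev
        if v < 0.1:
            ids.append(0)   # stopped
        elif v < 2.0:
            ids.append(1)   # slow creep
        elif dv < -0.5:
            ids.append(3)   # braking
        else:
            ids.append(2)   # steady cruising
        prev = v
    return ids

# Approaching a stop sign: cruise, brake, creep, full stop.
speeds = [8.0, 8.0, 6.0, 3.0, 1.0, 0.0]
ids = tokenize(speeds)
print([TOKENS[i] for i in ids])
```

Just as an LLM predicts the next word token, the model jointly predicts and assembles these motion tokens into full candidate trajectories.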

But here, instead of aligning the output with human values and intentions, we align the Large Driving Model output for safe, performant, and smooth driving. Because LDM is such a close cousin to an LLM, we can reap all of the advancements, investments, and innovations being made in improving generative AI and can apply them directly to our driving task. This makes the LDM incredibly cost-efficient to develop. So let's do a deep dive into how LDM is trained by reinforcement learning. Over here, you can see the sensor data from a scenario. Here we're approaching a stop sign. And that sensor data is fed into our transformer-based encoders behind me. Then we sample multiple trajectories from this model, token by token and trajectory by trajectory. And the different tokens shape the trajectories in different ways. Once we've sampled all these trajectories, we then need to rank them.

Now, the one on the left here is actually the most human-like. The vehicle slowed almost to a stop, but then rolled through it. A lot of our Rivian data is kind of like that. The one in the middle here is clearly stopping too soon. And the one on the right is just right. We're stopping behind the line and correctly following the road rules. So here you can see three different trajectories that have navigated this stop intersection. Now, what we're able to do is apply our road rule rankers, which can say that the third here is the optimal trajectory, and then we reorder them. Then, through backpropagation, the model is trained to produce more of these types of trajectories in similar scenarios in the future. Now, that's obviously highly simplified, and you're just looking at one scenario here.
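The ranking step can be sketched with a toy road-rule cost. The costs, thresholds, and trajectory encoding below are illustrative assumptions, not the actual rule database:

```python
STOP_LINE = 10.0  # meters from start to the painted stop line (toy value)

def road_rule_cost(traj):
    """traj is a list of (distance_m, speed_mps) samples. Lower is
    better: penalize rolling the stop and stopping far too early."""
    crossed = max(d for d, _ in traj) > STOP_LINE
    min_speed = min(v for d, v in traj if d <= STOP_LINE)
    cost = 0.0
    if min_speed > 0.0:
        cost += 10.0                 # never came to a full stop
        if crossed:
            cost += 50.0             # rolled through the stop line
    cost += max(0.0, STOP_LINE - max(d for d, _ in traj))  # stopped short
    return cost

rolled     = [(6, 4.0), (9, 1.5), (11, 1.0)]  # slowed, then rolled through
too_soon   = [(4, 2.0), (6, 0.0)]             # stopped 4 m early
just_right = [(8, 2.0), (10, 0.0)]            # stopped at the line

ranked = sorted([("rolled", rolled), ("too_soon", too_soon),
                 ("just_right", just_right)],
                key=lambda kv: road_rule_cost(kv[1]))
print([name for name, _ in ranked])
```

In training, backpropagation then increases the likelihood of the tokens that produced the top-ranked trajectory in similar scenarios.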

Imagine this process running millions of times a second across millions of scenarios with a whole database of road rule costs and losses. That's how LDM is trained. We can then distill this model into one that we can run on board. All of this work results in new models, continuous enhancements, refinements, and new features that we continuously deliver to our customers. How do we know we can release? We built a cloud-based simulator that runs the whole autonomy stack through millions of miles of real-world scenarios on every release. That allows us to measure safety, comfort, and performance in a statistically significant way without having to manually drive those miles. We also have a capability we call Apprentice Mode. Before we release features, we can launch them in the background of a previous release.

We can then monitor the performance of that new version compared to the human-driven miles, but also compared to the previous version of autonomy. That allows us to do an even bigger evaluation in the tens of millions of miles. And so, through simulation and Apprentice Mode, we can rapidly build the confidence we need to ship new features and enhancements to customers. Because the system is developed entirely in-house, we can update any part of the stack from the lowest level camera drivers all the way to the highest level motion planning code. That means that the whole stack is always improving with every release, and we have a feature roadmap that stretches to the highest levels of autonomy. Now, when we surveyed customers this year on the autonomy capabilities they wanted the most, the answer was resounding. They wanted more road coverage for hands-free highway assist.
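Conceptually, the Apprentice Mode comparison reduces to aggregating divergence between each shadowed plan and the actually driven path over many miles. A toy version, with all numbers invented:

```python
def mean_divergence(driven, shadow):
    """Average lateral offset (m) between the driven path and a
    shadowed candidate's planned path at each timestep."""
    return sum(abs(a - b) for a, b in zip(driven, shadow)) / len(driven)

human     = [0.0, 0.1, 0.2, 0.2, 0.1]   # lateral positions, human-driven
shipped   = [0.0, 0.3, 0.5, 0.4, 0.2]   # current release, run in shadow
candidate = [0.0, 0.1, 0.3, 0.2, 0.1]   # new release, run in shadow

old_gap = mean_divergence(human, shipped)
new_gap = mean_divergence(human, candidate)
# Release gate: the candidate must track human driving at least as
# closely as the version it replaces.
print(round(old_gap, 2), round(new_gap, 2), new_gap <= old_gap)
```

The real evaluation aggregates this kind of statistic over tens of millions of miles, against both human-driven miles and the previous autonomy version.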

Now, previously, we supported 135,000 mi of divided highways. But as RJ mentioned, our map is about to grow. Universal Hands-Free unlocks over three and a half million miles of hands-free driving on roads across the U.S. and Canada. If there's a painted line and it's clearly marked, you can now drive hands-free. Universal Hands-Free will be part of our paid tier, bundled into one simple package, Autonomy Plus. It's a one-time fee, or you can pay month to month. And Autonomy Plus features will be available to all Gen 2 customers for free until March next year. And this is just the beginning for Autonomy Plus. We have many exciting features on the way, such as point-to-point driving, automatic parking, and, enabled by the LiDAR on R2, eyes-off. As our fleet continues to grow and our adoption continues to increase, our data flywheel will continue to grow.

We've been thinking about this as a circle, but in fact, the system is better on every orbit. A better analogy is an upward helix continually improving and compounding on itself. With that, I'd like to hand over to Wassym to discuss some of the other improvements being made in AI here at Rivian.

Wassym Bensaid
Chief Software Officer, Rivian Automotive

Thank you, James. We have made significant progress in our AI-enabled autonomy stack. As James said, it doesn't stop here. AI runs through the core of everything we do. It's a profound platform shift which changes our product and everything we do at the company, from the way we design, develop, manufacture, and service our cars. This is all made possible by the Rivian Unified Intelligence, a common AI foundation that understands our products and our operations as one continuous system and personalizes the experience for our customers.

How does it work? We revamped our vehicle operating system to be AI-ready. We developed an in-house multi-agent, multi-LLM, multimodal intelligence platform. The platform is built on a robust data governance framework with security and privacy as main tenets. We have a suite of specialized agents. Every Rivian system, from manufacturing and diagnostics to EV route planning and navigation, becomes an intelligent node through MCP. The beauty here is we can integrate third-party agents. This is completely redefining how apps will integrate into our cars in the future. We orchestrate multiple foundation models in real time, choosing the right model for each task. We support memory and context, allowing us to offer advanced levels of personalized experience. The architecture is natively multimodal, using audio, vision, and text through the same unified layer. The beauty of our architecture is the seamless integration between the cloud and the edge.
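A heavily simplified sketch of "the right model for each task" routing; the intents, model names, and keyword classifier below are hypothetical stand-ins, not Rivian's actual platform components:

```python
# Hypothetical routing table: latency-sensitive or offline-capable
# intents go to an embedded small language model at the edge, while
# intents needing account access go to a larger cloud model.
ROUTES = {
    "navigation":  "edge-slm",    # low latency, works offline
    "small_talk":  "edge-slm",
    "calendar":    "cloud-llm",   # needs external account access
    "diagnostics": "cloud-llm",
}

def classify(utterance):
    """Toy keyword intent classifier standing in for the real
    multimodal one."""
    text = utterance.lower()
    if "calendar" in text or "meeting" in text:
        return "calendar"
    if "navigate" in text or "take me" in text:
        return "navigation"
    if "error" in text or "warning" in text:
        return "diagnostics"
    return "small_talk"

def route(utterance):
    intent = classify(utterance)
    return intent, ROUTES[intent]

print(route("What's on my calendar today?"))
print(route("Navigate to the Ferry Building"))
```

The orchestration idea is simply that each request is matched to the cheapest model that can handle it, which is also what lets most in-cabin workloads stay on the edge when the car is offline.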

Edge AI with an embedded small language model allows us to achieve higher levels of performance, lower latency, and the best conversational experience. And wait till the R2 comes. R2 will have close to 100 TOPS edge AI fully dedicated to in-cabin experience. This will allow us to move most of the intelligence workloads from the cloud to the edge, powering an in-cabin AI experience fully available when the car is offline. The Rivian Unified Intelligence is the connective tissue that runs through the very heart of Rivian's digital ecosystem. This platform enables targeted agentic solutions that drive value across our entire operation and our entire vehicle lifecycle. Let's start with the factory. Our diagnostics agent is the ultimate example of unified intelligence in action.

It instantly connects real-time telemetry from vehicles on the assembly line and allows us to validate quality at production, identifying changes needed before the vehicle leaves the factory. The same unified intelligence is fundamentally helping us redefine service. Because the platform is grounded in real vehicle data, every Rivian technician is now being empowered by AI systems trained on live software data, service manuals, repair histories, electrical diagrams, and supply chain logistics. This is accelerating service repair time by hours, helping us dramatically improve technician efficiency. And it doesn't stop with our technicians. In the future, the same platform will enable our customers to self-troubleshoot and resolve minor issues directly from the Rivian Mobile app and from the car. Now, let me tell you more about the in-vehicle experience. This architecture will fundamentally reshape how we all interact with our Rivians.

Today, I'm very excited to announce the Rivian Assistant. As you can see from the beautiful wave behind me that our UX team has designed, the Rivian Assistant is fully integrated into the user experience and our in-car operating system. It's designed to understand you, your vehicle, and the context you're in, and the Rivian Assistant will be available for all Rivian Gen 2 and Gen 1 customers in early 2026. Now, rather than tell you about all its features, we'll take some risk here and do a live demo from the beautiful R1S Borealis, which is over there. Oscar will be our main driver today. Oscar is the product manager behind the Assistant. Oscar, are we ready?

Speaker 6

Yes, I'm ready. Thank you, Wassym. Hello, everyone. Let me show you what the Rivian Assistant can do. To initiate the Rivian Assistant, you can either hold the left steering wheel button or just say, "Hey, Rivian."

Wassym Bensaid
Chief Software Officer, Rivian Automotive

The Assistant is not just an integration of a chatbot on top of the vehicle UI. We built an agentic framework allowing us to integrate into the larger ecosystem and bring your own digital context to the car. We started with Google Calendar. We had a ton of fun collaborating with the Google team on this project. Oscar, can we see that in action?

Speaker 6

What's on my calendar today? What's on my calendar today?

Speaker 7

You have two events today. Call with Tim from 2:00 P.M. to 2:30 P.M. and meet up with Wassym from 3:00 P.M. to 4:00 P.M.

Wassym Bensaid
Chief Software Officer, Rivian Automotive

The agentic integration allows us to not only connect with your calendar and read it, but also take actions and manage it. Oscar, can you show us that in action?

Speaker 6

Can you move my call with Tim to 5:00 P.M.?

Speaker 7

I've moved your call with Tim to 5:00 P.M.

Wassym Bensaid
Chief Software Officer, Rivian Automotive

As you see, the same action would have probably taken me multiple taps and clicks and swipes to get to the same result. The Assistant can help you control your calendar in a much safer and easier way. And Google Calendar is just the beginning. The platform will expand to many more applications and many more third-party agents as they become available. Beyond the agentic integration, our AI platform can blend your personal context into your vehicle context. In this case, the Google Calendar agent is connected to other vehicle applications. Oscar, can we see a live example?

Speaker 6

Let's go to my meetup with Wassym.

Speaker 7

Navigating to Ferry Building. You'll drive for 55 minutes and arrive around 10:50 A.M.

Wassym Bensaid
Chief Software Officer, Rivian Automotive

The AI integration goes much deeper than navigation. Because this is an AI-native operating system, it can connect with all vehicle apps, including our customers' favorites, EV route planner or drive modes. Oscar, can we see that in action?

Speaker 6

How much battery will I have when I get to my destination?

Speaker 7

You will have 67% battery remaining, which is about 231 mi of range.

Speaker 6

Can you switch to a more efficient drive mode?

Wassym Bensaid
Chief Software Officer, Rivian Automotive

As you see, Oscar did not have to specify Conserve Mode. This is the future of vehicle UIs. The Assistant allows users to perform tasks without knowing the exact vehicle command or where it sits in the UI menus, making it way easier for every user to interact with their Rivian. Oscar, can we see another example?

Speaker 6

Can you make the seats toasty for everyone except me?

Wassym Bensaid
Chief Software Officer, Rivian Automotive

Let's check this out. As you see, all seats except the driver's seat are warmed up. Imagine the possibilities that this opens up. The Assistant takes the vehicle experience to the next level: instead of multiple UI commands and multiple taps on the touchscreen, you can perform the whole task with just one natural language command. Now, one of the features that our community has requested, and I promised that we would get it, is messaging. I really wanted to get messaging right, so let me text Oscar. I'm getting a call right now; lots of texts from people seeing me on the live feed. Thank you.

Speaker 6

Read my last message.

Speaker 7

From Wassym. Hey, I'm nearby. Can you find a good restaurant near the Ferry Building? Let me know. Do you want to reply to Wassym?

Wassym Bensaid
Chief Software Officer, Rivian Automotive

We didn't want to stop at just reading your message, even with this beautifully integrated UI. The messaging app with this new AI platform is fully integrated into the vehicle operating system and has access to every single application and control. Oscar, can we see an example?

Speaker 6

We're hungry. Can you find some restaurants near my destination?

Speaker 7

I found Terrene, about 36 mi away in San Francisco. It has a 4.2-star rating. Would you like to go there or explore other options?

Speaker 6

Actually, can you send a text message to Wassym and show him the top three options from this list? Ask him which one he wants to meet up at and include my ETA.

Speaker 7

Would you like to send a message? 1. Terrene, 8 Mission Street, San Francisco, California. 2. Eclipse Kitchen and Bar, 5 Embarcadero Center, San Francisco, California. 3. Super Duper Burgers, 98 Mission Street, San Francisco.

Wassym Bensaid
Chief Software Officer, Rivian Automotive

I think I would go for Super Duper.

Speaker 7

My ETA is 10:53 A.M. to Wassym.

Speaker 6

Yeah, send it.

Received.

Wassym Bensaid
Chief Software Officer, Rivian Automotive

Oscar, while I appreciate the invitation, I think I have something else to do right now. You just saw the difference that native integration makes. The Assistant has memory, has context. It remembers the full story, who you're talking to, where you're going, and what you just searched for. And then it puts everything into a perfect message. What you've witnessed today is more than just a new voice assistant. This is a peek into the Rivian Unified Intelligence platform that powers the new foundation of Rivian's digital ecosystem. The gap between software-defined vehicles and traditional architectures is getting exponentially wider with AI. Rivian is uniquely positioned to move from a software-defined vehicle and bring to the world an AI-defined vehicle. With this, let me welcome RJ back to the stage for some closing comments.

RJ Scaringe
CEO, Rivian Automotive

That was so much fun to watch. So much time and effort have gone into building the platforms, designing the architectures, and, of course, building, growing, and organizing the teams, all the work that goes into these really complex systems. I often describe it as building a house: you don't start with the finished house. It takes years of planning. You then have to do the foundation work, then the wiring and plumbing, and at the very end, it all comes together. So when you saw the work we talked about on our in-house processor, this is something that has been years in the making.

The amount of effort and the amount of time that's gone into it, and by the way, the amount of effort that went into this not leaking, which is amazing, is just so inspiring. I spent some time last night with the team talking about this right before we were about to show it today. One of the lead engineers looked at me and said, "Boy, we've been working on this for years, and I haven't been able to talk about it. It's so cool. Tomorrow I can start to talk about what I do every day, all day long." There's the work that we put into the processor, the Large Driving Model that you heard us all talk about, how that feeds our data flywheel, and this flywheel approach to building a data set that continues to improve our model.

Then, as you just heard from Wassym, all the work that went into first building a software-defined architecture, developing and building all the electronics that go into the vehicle, and then using that as the foundation for enabling an AI-defined vehicle, this is coming together today. You're seeing the house start to form in front of you. What customers are going to see on our Gen 2 R1 vehicles starting very soon is a lot of these features. As I said, later this month, we're going to be growing the number of miles where you can access Universal Hands-Free from just under 150,000 mi to 3.5 million miles; in 2026, point-to-point navigation; and shortly after that, moving from hands-off to eyes-off. We're very, very excited, and we appreciate all of you being here today. Now, for those that are here, we have demos.

I know there's a lot of people here, so not everybody will get to do a demo. But for those that are doing a demo, you'll get to see the point-to-point navigation and the work that's gone into it. We have a bunch of great displays that show some of the hardware. But for those that aren't here, we appreciate you listening along, and we appreciate your support and enthusiasm for what we're building. Thanks, everyone. See you later.
