Ambarella, Inc. (AMBA)

CES 2026 Product & Technology Briefing

Jan 7, 2026

Louis Gerhardy
VP of Corporate Development, Ambarella

Good afternoon and welcome from all of us at Ambarella to our CES briefing and technology discussion. I hope your first day of CES was productive. I'm sure for many of you online and in the room, one of your New Year's resolutions was to learn about Edge AI and how Ambarella fits into it. We're definitely going to help you check that box so you can move on to your other New Year's resolutions. Some of you visited us earlier in the day. We have in the adjacent rooms more than 30 demonstrations of Edge AI at work. If you're not registered to visit us later in the week, you're welcome to contact me afterwards, and we'll fit you into one of the groups.

Also, for the online participants, we'll need some time to prepare a virtual CES experience, but we should be running that in March, and you can also send me an email if you'd like to participate in one of those events. Ambarella has been at CES (I was about to say 10-plus years, but I was just talking to Fermi) at this location, initially in a much smaller footprint, for almost 20 years now. So this is a very important event for us to share our new products and technology with customers and partners and to talk about how our Edge AI platform is evolving. And that's what we're going to achieve here.

But given all the developments in the Edge AI market, we felt that, in addition to all the demonstrations we can provide, some additional perspective would be very helpful: what's happening in the market, our view of it, and our strategy to tackle this space. So that's the purpose of the meeting today. Before we proceed, I do need to read some forward-looking statements. I'll make it short and get through it quickly. Today's discussion will contain forward-looking statements that are based on currently available information and subject to risks, uncertainties, and assumptions. Should any of these risks or uncertainties materialize, or should our assumptions prove to be incorrect, our actual results could differ materially from the forward-looking statements.

These risks, uncertainties, and assumptions, as well as other information on potential risk factors that could affect our business, are more fully described in the documents we file with the SEC. So we got through that. In terms of the agenda, we'll run management presentations for about 75 minutes and try to save 15 minutes for Q&A. One thing that's unique about this presentation is that you'll hear for the first time from four Ambarella executives who have not been accessible to the investment community before. Also, after this event, we'll host a reception immediately outside this room with some refreshments and, I think, some snacks, where you'll have a chance to meet some of these executives, as well as some of the partners in the room that we'll introduce later in the presentation.

As a reminder, this is not a financial event. Ambarella's fiscal year 2026 ends in three weeks, and we'll provide financial updates on our earnings call at the end of February. So with that, let me turn it over to Dr. Fermi Wang, Ambarella's Co-founder, President, and CEO, to go through the rest of the deck.

Fermi Wang
Co-Founder, President, and CEO, Ambarella

Great. Thank you, Louis. Good afternoon, and thank you for coming to this important event for Ambarella. As you know, CES is always an important event for Ambarella to show our new technology and products. And this year, with the fast evolution of the AI technology market, we feel it's critically important to provide deeper insight into our Edge AI strategy, platform, and roadmap, as well as to demonstrate the capabilities of our new products. Compared to AI data centers, AI at the edge serves a different purpose and calls for a fundamentally different silicon architecture. On top of that, Edge AI products must meet unique market requirements: one, power consumption is the most critical constraint; two, low latency and privacy are must-haves; three, much less bandwidth is available, both communication bandwidth and DRAM bandwidth.

Fourth, this market is supported not only by business CapEx but also by consumer spending. With all of the silicon, hardware, and software optimization needed to achieve those market requirements and drive meaningful volumes at the edge, we believe the Edge AI market is still at a very early stage of commercial development. This is the opportunity for Ambarella, and this is where we are focusing today. At Ambarella, we are very proud to offer a comprehensive Edge AI platform, featuring both AI silicon and software, designed not only to help our customers meet the market requirements but also to help them scale their business. The fact that we have shipped more than 40 million AI SoC units across different applications is the best proof of that statement.

Today, we are happy to show you that we're going to expand this platform in ways we never have before, which will also help us tap into new opportunities. As you know, Ambarella is recognized for the best high-quality video perception systems at the edge, originally designed for human viewing. Later, this proprietary video perception system could gather a significant amount of data from various sources and feed it into our custom AI accelerators. This setup helps machines operate autonomously, sometimes partially, sometimes fully, on a single AI SoC. This co-evolution of the video perception system and AI technology has already helped Ambarella define multiple generations of product cycles. In 2004, we founded Ambarella with one simple idea: that digital video content would become popular. Consequently, we developed our first generation of video perception systems.

In 2012, with AI research papers like AlexNet being published, we came to the conclusion that CNN-type neural networks would become the foundation of Edge AI, and we developed our CV2 family of chips to seize that opportunity. Today, 80% of total revenue comes from CV2 family chips. In 2017, the transformer-type neural network was published, which led to the creation of our third-generation architecture, supporting both CNN- and transformer-type networks. With that, we created our CV3 family of chips, based on that third-generation architecture, which really helped us address new applications: physical AI applications like aerial drones, autonomous driving, and other robotics. So that's the history; that's how we got here. Today, we're going to show you how we're going to improve this product offering in five different ways.

First, we're going to provide updates on how we're improving our Edge AI strategy and platform. Two, we're going to announce our first 4 nm AI SoC, now sampling to customers. Three, we'll confirm that we have taped out our first 2 nm chip at Samsung Foundry. Four, we're going to discuss a brand-new go-to-market strategy that will increase our revenue generation. And fifth, we'll describe the significant evolution of our Cooper software development platform, which will help us address both the edge infrastructure market and the edge endpoint market. I think many of you have seen this slide. At the bottom of the slide, the last row of chips is our video processors, with an average ASP in the single-digit dollars.

The second row is our second-generation AI silicon, the CV2 family I talked about, with ASPs ranging from $10 to $75. The third generation is on the third line, with ASPs from $20 to $400. And today, we're going to add three new members to this family. First, as I said, we are sampling our first 4 nm chip, which we call CV7. Within 48 hours of receiving the sample from Samsung Foundry, we successfully brought up two demos in our lab. One is 8Kp60 video and AI processing of the video streams. The second demo is four channels of 4Kp30 video input, again with both video processing and AI. Both of these configurations reach a performance level that nobody else in the market can match.

Those two configurations are going to be critically important for many different customer applications. Achieving these engineering deliverables within 48 hours of receiving the chip shows you two things. One, Ambarella continues its tradition of excellent engineering execution. Two, over the years, our silicon architecture and roadmap, plus our software platform, have become so mature that even on a complicated, freshly taped-out chip, we can deliver these kinds of demos right away. We expect CV7 to be in production at the end of the year across several different applications. The second family we're adding to this page is our first 2 nm chip, which we've talked about before and which has already been taped out at Samsung Foundry.

Our first customer is effectively helping us pay for this chip. It's a semi-custom design for certain applications that we haven't talked about yet, but we'll definitely provide more updates later on. The third thing I want to talk about today: in the past, Ambarella focused on edge endpoints, but just a few quarters ago, we announced our N1 family of chips to address edge infrastructure. We announced our first design win two quarters ago, and it will turn into mass production in the first half of this year.

On top of that, we're going to talk about how we're evolving our go-to-market strategy for this edge infrastructure market by including GSI and ISV partners to help us expand the software offering to customers quickly. This new go-to-market strategy will apply not only to edge infrastructure but also to edge endpoints. I think this is important because, in Ambarella's history, most of our revenue has been driven by direct sales. Channel sales, I think, will benefit us by increasing our potential revenue.

Throughout the development and customer engagements of the CV2 and CV3 family cycles, one thing has become very clear: all of our customers, all of them, are telling us they need more and more AI performance, while not allowing us to increase power consumption. So first, let's talk about what kinds of workloads are driving this demand for AI performance at the edge. First, serial CPU workloads are becoming parallel AI processing, and this fundamental change pushes us to implement our silicon architecture in a way that increases our AI horsepower in a big way.

The second thing I want to point out is that, due to the need for low latency and privacy at the edge, we have to move some workloads, particularly for physical AI applications like drones and autonomous driving cars, from the data center to the edge. And it's not limited to physical AI; across many more applications, we're starting to see our enterprise customers move their AI workloads from the cloud to the edge. The third thing, probably one of the most important, is the explosion of transformer-type workloads, particularly the GenAI and vision language models that are widely used by our customers today. They are trying to figure out how to implement those new models in their products, and we are helping them do that.

In the past, at the CNN level, we were talking about maybe a few hundred thousand parameters. Today, we're talking about 2 billion parameters as a minimum running on our chip. That just shows you how much more AI performance we need to add to our silicon to address that need. In addition, all our customers say they need to attach more and more sensor channels to our silicon: not just camera sensors, but radar and all the other sensor types. All those new sensor channels require more AI performance. And to make it even more demanding, in the past we were handling 2-megapixel cameras as input; today, we are handling 8Kp30 cameras as video input. The amount of data coming into our chip has increased significantly over the years.
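
To put that input-data growth in numbers, here is a quick back-of-envelope pixel-rate comparison. The 30 fps rate assumed for the legacy 2-megapixel stream is an illustrative assumption, not a figure from the talk.

```python
# Rough pixel-rate comparison: a legacy 2 MP camera stream vs. the 8Kp30
# input Fermi describes. 8K is 7680 x 4320 pixels (~33 MP per frame);
# both streams are assumed to run at 30 fps for an apples-to-apples view.
fps = 30
old_rate = 2e6 * fps            # 2 MP x 30 fps -> ~60 Mpixel/s
new_rate = 7680 * 4320 * fps    # 8Kp30         -> ~995 Mpixel/s

print(f"2 MP @ 30 fps: {old_rate / 1e6:.0f} Mpixel/s")
print(f"8Kp30        : {new_rate / 1e6:.0f} Mpixel/s")
print(f"growth       : {new_rate / old_rate:.0f}x more data per second")
```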

Because of these five reasons, you can see that our new products continue to add more AI performance, and therefore bigger die sizes, and therefore better ASPs, to address our customers' needs. One thing I can say: in Q4 last year, the average corporate ASP was $15. Today, every chip we are developing or sampling has an average ASP higher than that $15. That just shows you how we're going to capture more value per AI unit. In the last several quarters, we started introducing brand-new Edge AI applications, helping our customers take them into mass production. And our investors ask us many times: what's the additional R&D cost to enable one extra new application with our platform? Today, I want to address this question by showing you the flexibility and programmability of our silicon and software architecture.

The left-hand side of the picture shows the software stack, some of it done by us, some of it by our customers. Without going through all the detail, I just want to point out that the core of this software stack is our Cooper Developer Platform, which is identical for every product we ship in the market. Let me use the right-hand side, CV5, as an example. In the last two years, we took CV5, our first 5 nm chip, into seven different types of applications: automotive, enterprise security, drones, portable video cameras, perception systems for robots, and so on. Seven totally different applications.

But if you open up those products, you will find the same thing at the bottom of each: CV5 silicon plus the Cooper Developer Platform, identical. The only difference across those seven applications is the application-level software that our FAEs help customers develop. So the extra cost to enable one new application is really just a small FAE team dedicated to each customer's application. You can see that the extra cost for us to enable new applications is limited, and therefore we start showing significant ROI as we increase the number of new applications. That's why we are never afraid to pursue new applications: the ROI on our investment is there. But I also want to point out that this benefit of programmability and flexibility applies not only to Ambarella but to our customers as well.

For example, any customer who implements an application on CV5 can easily port their application-level software to a chip like CV75 for a lower-end product, or to CV7, our new 4 nm product, because these chips share the same silicon architecture and Cooper Developer Platform. So by using our system, our customers can also significantly reduce their R&D dollars and improve their ROI. The reason I mention the previous slide is that we are starting to see many, many new Edge AI use cases. Five years ago, when we got together, the most significant Edge AI application for us was enterprise security. But today, in all of the boxes I show here, we have either engagements or design wins.

In fiscal year 2026, while our enterprise security market continued to grow in a very healthy way, our revenue growth was also helped by two new markets. One is the portable video market. The other is the telematics market, which we had never addressed with Edge AI applications in the past. These new applications really helped us significantly increase our revenue last year. In addition to those two new applications, we think there are three large opportunities in front of us that we can tap into in a short period of time. One is robotics. In this market, we focus on aerial drones, because that is a huge volume opportunity for us, and also on areas like AMRs, manufacturing automation, and humanoids.

Those are all important future markets, but I think all of them are still at an early stage of revenue generation. The second opportunity is edge infrastructure, which, as we discussed, is important: we have our first design win, we're working on more, and we think the potential moving forward is huge. The third is autonomous driving, which we have been working on for a few years. We remain committed to this market and are working with OEMs and Tier 1s to secure our first major OEM design wins. Beyond those three big opportunities, there are many green shoots here. All of them have shown some potential, but it will take time for them to develop revenue.

But I will point out that all of these new opportunities use the Cooper Developer Platform and leverage the same silicon architecture, so enabling each of them won't take a lot of new R&D investment. With all the discussion so far, it should not surprise you that fiscal year 2026 was a record revenue year in Ambarella's history. Here we show our revenue for the last few years. The blue bars are revenue from our traditional video-only product line; the green bars are revenue generated by our AI products. If you look at overall revenue performance, we generated a 12% CAGR over that period.

But if you remove the revenue from companies impacted by the Entity List (in fiscal year 2020, for example, Hikvision, Dahua, and DJI were 45% of our total revenue, and they are almost zero today because of the Entity List), our revenue CAGR was 18%. And that is nothing compared to our Edge AI revenue CAGR over the same period, which was 64%. So I think the only conclusion you can draw is that Ambarella's transition from a human-viewing company into a machine perception plus autonomous decision-making company is complete. 80% of our revenue comes from our Edge AI solutions. That also shows that the most important IP in this company today is our AI accelerators, which are well-defined and deliver better performance per watt than all of our competitors out there.

We believe this will remain our focus: continuing to leverage our tradition on the perception side, while focusing more and more on the AI accelerator to make sure we keep delivering performance per watt for our customers in the future. To wrap up my presentation, I want to make one statement: Ambarella is a leader in the Edge AI market. I don't think that's an empty statement, because it is supported by the following numbers. We have shipped more than 40 million Edge AI SoCs cumulatively. We have put $1.3 billion of R&D investment into this product line over the last 10 years. We have helped take 370 unique customer AI projects into production. And we have ported over 200 unique AI model architectures (model architectures, not just models) for our customers.

Given all of this, I challenge anybody out there trying to come into Edge AI to provide similar metrics for comparison. With those numbers, I'm proud to say that Ambarella is the leader in Edge AI. With that, I would like to introduce Muneyb to give you a deeper presentation on our marketing and customer engagement strategy. Muneyb joined our executive team as Customer Growth Officer six months ago.

The first time we met, we quickly concluded that we both believe the Edge AI market is at an early stage and that it's worth our time and effort to make it bigger. Muneyb spent his first 20 years building data center and cloud products at companies like VMware, and he joined us from Intel, where he was Chief Marketing Officer for their network and edge IoT business. With that, please welcome Muneyb.

Muneyb Minhazuddin
Customer Growth Officer, Ambarella

Thank you, Fermi. And thanks, folks, for taking the time to come out here; I appreciate you spending it with us. And for the folks online, hopefully you can watch and follow along. I have a longer agenda here, but let me get into it. As Fermi said, when we first met, we talked about the opportunity at the edge being about to explode. I've spent 25 to 30 years of my life working in the data center and cloud, and I arrived at the conclusion that the edge market is going to stand up and proceed just like data center and cloud did three or four decades ago, right? In the data center market, initially, in the 1990s and 2000s, it was about transforming companies: there was data automation, and people were building client-server architectures.

And I see a lot of investors in the room. The top companies in the market went from silicon CPU providers holding the market cap to system providers like the Ciscos of the world. Then came the cloud-native market, where the architecture moved much more toward mobile apps, with developers building on it. Of course, the market caps of SaaS and cloud companies went through the roof. Where we are now, we believe, is where the edge will stand up. And now you're transforming not people but things, because everything is going to talk to everything else through AI. We believe that is going to explode at the same level. It's not one against the other; there'll always be a continuum, right?

So there'll be a continuum where, say, 30% of workloads are still in the data center and 40% in the cloud, but new workloads emerging at the edge will be 30%. That mix is about to happen over the next decade, and that's the opportunity we're talking about, right? How do you capitalize on that? It's a huge opportunity. But let's double-click into it, because the segmentation is not clear. Data center folks moving toward the edge are building edge servers, at the scale of servers, platforms, and racks. For Ambarella, which as you can imagine comes from IoT endpoints, we're seeing adjacent markets come at us in two ways. One is physical AI, where you take multiple sensors (vision, audio, infrared, radar) and put them together into cars or robots: multiple sensors combined, on one end.

On the other end, up the network, we're seeing more and more sensors and inputs that need to be processed, but not at a server level, more at an appliance level. Now, add AI to the mix, and this starts getting interesting, because AI processing and model sizes vary. In the cloud and data center, you're looking at models with hundreds of trillions of parameters. As you move toward the edge, there are big edge AI inferencing solutions and chip providers with wafer-sized chips; we're not talking about that, right? That's still a big, huge edge AI inferencing market. We're looking at simpler, smaller models, with parameter counts of 50 million to 500 million, or, for multimodal language models, up to about 50 billion parameters.

That kind of sizing is where we think the edge AI market comes up. But designing for the edge, building on what Fermi said, requires a very unique architecture. You can't just take data center silicon and software and move it quickly to the edge; you have to spend a lot of different cycles, because privacy and security are super important, and you can't stick a huge load balancer in front of an edge endpoint, right? Real-time processing: you can't live with the 30 to 50 milliseconds of delay you tolerate in the cloud and data center, because you're dealing with real-time things. Reduced network: it's not like a LAN with a 10 Gb backplane; you have to deal with really constrained networks at the edge. And low power consumption: you don't have massive power facilities like data centers do.

It has to be embedded at low power. These are the synergies that help Ambarella position itself, because we're designed for all these elements and can capitalize on them, right? So what does that do for the opportunity? My hypothesis is that workloads are going to move to the edge and run on these different devices rather than in the big data centers. That infrastructure build-out starts from our TAM. We always talk about our current SAM, but our TAM in 2027 is about $12 billion. And as you look at this new, expanding edge infrastructure market (hardware, software, services), that's a big pie. Our silicon can scale and capture some of those early wins in networks and gateways, where we're already seeing the design wins we've shared.

That's a potentially large TAM we can tap into as we go after this market space, right? So it's a big opportunity for us to scale our products into the edge infrastructure market. But you do need a full-stack solution: silicon, systems, software, and applications all coming together to deliver on this. And the good news, as Fermi has been saying, is that Ambarella has been building that stack for the last several years. We've not come forward and presented that stack as a whole solution; we've been going after vision and different aspects of it. But we have that stack.

And this is what you will see in a lot of our demonstrations: as Fermi articulated, we have a full range of silicon, first, second, and third generation, but we've also been designing the systems, starting from endpoints, and now we're starting to see the infrastructure being built in. We've spent a lot of time building software. This software supports multiple operating systems (real-time operating systems, robot operating systems, embedded Linux, QNX, all of these) with a common SDK and toolchain. That's super important, because now you can write an application for one chip and easily move it wherever you want. And that's super critical for applications and the northbound interface in the software.

That ability has been tremendous, and on the back of it, we've stood up the Cooper Developer Platform, an ASIL-compliant functional safety platform, and real-time platforms on top of that to support our customers. We're also embracing open source, because you see open source frameworks everywhere. This is an important part: we believe open source will take off, and agentic layers will help with scaling and automation at the edge. So we have announced today (and we'll get into the details of our developer zone) that we're embracing an entire open source framework and publishing beyond the set of algorithms we've already built. At this point, I want to invite our experts from the company on stage.

I want to have a panel discussion with Bob, our chief architect; Malhar, our software principal; and Alberto Broggi, our general manager for VisLab, to start unpacking that stack, because they spend a lot of time working on it, and you want to understand how it works. So thank you, gentlemen. Thanks for coming up. Let me start with Bob and say, "Bob, we're talking about AI acceleration evolution." My first question, which always comes up when I talk with you, is: "Are you really an AI architect, or are you an environmental scientist?"

Bob Kunz
Chief Architect, Ambarella

I think that's a funny question. Truthfully, I'd rather be a computer architect. But honestly, at the rate that power is scaling for these AI applications, I don't really have a choice. There's a fundamental scaling problem here. There are a lot of exciting things we can imagine AI doing, and a lot of people are just throwing MACs at it. The number of parameters in AI models may be doubling every six months, but that requires more power to both train and deploy those systems. And globally, our ability to generate enough power doesn't scale at those rates. Global power generation, despite all our investments in clean and abundant energy, is maybe only growing 1% or 2% a year. So fundamentally, those scales don't match.

The consequence is that something has to be done; the edge, in that sense, is inevitable. I'm a computer architect, so let's look at where power is actually spent. If you look inside one of these devices, the amount of energy required to compute is actually fairly small. What's expensive is moving the data around. A MAC might cost at most one picojoule, but once you need to move data, whether across some large reticle-limited chip, off to DRAM, or over some high-speed interconnect network, that can be 10 to 50 pJ per bit. So it takes much more power just to move data. The natural consequence is, "Hey, let's bring that computation closer."

Let's get that intelligence closer to where it's actually perceived: closer to the video and image sensors, closer to the perception, closer to where we have some compute to generate local intelligence. That's really critical. I've been working at Ambarella for a long time, and one of the really exciting things is that we've been building edge products from the very beginning, all the way back to A1. So we have expertise in image processing and video compression, and those are really exciting. And given this energy challenge, one of the things I'm really passionate about is that architecture really matters, especially looking forward to future chips.
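
Bob's arithmetic is easy to check with a back-of-envelope sketch. The energy figures below are the rough per-operation costs he cites, applied to the 2-billion-parameter model size Fermi mentioned earlier; the 8-bit weight assumption is ours, not a figure from the talk.

```python
# Back-of-envelope: energy to compute one generated token vs. energy to
# fetch the model's weights from DRAM for that token. Illustrative only.
PJ_PER_MAC = 1.0          # ~1 pJ per multiply-accumulate (upper bound cited)
PJ_PER_DRAM_BIT = 20.0    # mid-range of the 10-50 pJ cited for moving data

params = 2e9              # 2B-parameter model, the edge minimum mentioned
bits_per_weight = 8       # assume 8-bit weights, fetched once per token

compute_mj = params * PJ_PER_MAC / 1e9                         # ~1 MAC per weight
movement_mj = params * bits_per_weight * PJ_PER_DRAM_BIT / 1e9

print(f"compute : {compute_mj:6.0f} mJ per token")    # ~2 mJ
print(f"movement: {movement_mj:6.0f} mJ per token")   # ~320 mJ
print(f"ratio   : {movement_mj / compute_mj:.0f}x")   # data movement dominates
```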

Muneyb Minhazuddin
Customer Growth Officer, Ambarella

Yeah. Having been an architect myself, I'll tell you, there's always this tension between constraints and balance. How do I architect something wonderful when I have all these little limitations? The size is wrong; it's too big; it's too small. How do you trade off between constraints and balance, really?

Bob Kunz
Chief Architect, Ambarella

Yeah. In computer architecture, a lot of the time you talk about constraints. Power, which we've just talked about, is a really important constraint. Latency is a constraint. Compute is a constraint. All of those are really interesting to think about, but they are really inputs, and constraints alone don't make a good chip. What I get really excited about with Ambarella chips, from a computer architect's point of view, is when you look at all the different components inside our SoCs: the image processing pipeline, the AI engine (CVflow, as an example), the DRAM controller, and even the ARM processing.

When all of those are running at full capability and are fully utilized, then you know you have a balanced chip: quite beautiful, very effective, and also attractive from a business point of view, because there's no slop in it. If you have an unbalanced chip, it might look very good on a slide. It hits a benchmark; it hits some peak number. But when you look at real workloads, most of the time much of it goes unused.

You don't have enough of some particular resource to actually use the rest, and especially in the image processing pipeline or any real-time application, when you overload a constraint, the system fails. If you have to drop a frame, you've lost the shot. So hitting balance is really important. I think CV7 is an example of a well-balanced chip. We've looked at it, and we're looking forward to seeing it deployed.

Muneyb Minhazuddin
Customer Growth Officer, Ambarella

Great. Thanks, Bob. And I think the natural question, which I get all the time, is, "Hey, are you building accelerators or SoCs?" There are a lot of folks building accelerators, very dedicated to the market, high speed. But the balance-and-constraints discussion leads directly to: for the edge market, should we be doing accelerators or SoCs? What do you think we should be doing?

Bob Kunz
Chief Architect, Ambarella

We're an SoC company, so obviously, the answer should be SoC, right?

Muneyb Minhazuddin
Customer Growth Officer, Ambarella

But why?

Bob Kunz
Chief Architect, Ambarella

I understand that, right? Look, once balance is really important, the SoC is really the only answer. Here's the problem: if we don't build the SoC, we leave the balance problem to someone else, and leaving problems to others is not how Ambarella provides value, right? We want to balance all of these heterogeneous components, the ones I talked about: the image processing pipeline, CVflow, the DRAM controller. All of them have different requirements, and all of them have to be in balance. If we just did an AI accelerator, we'd be leaving that problem for somebody else to solve.

The interesting thing is that the different markets you identified (physical AI, AI-powered edge boxes, smart camera applications) are all going to have slightly different balance points. If you look at the number of chips we've actually designed, maybe 15 or more based on our CVflow generations, each represents an important and distinct balance point we've been able to achieve. We do that with an algorithm-first approach.

Basically, we study where the balance point needs to be, what critical applications we need to solve, and how to do them effectively and at low power, and then we build a chip around that. So the architecture comes second. But when you see an Ambarella chip working properly as a full SoC, it's also very beautiful.

Muneyb Minhazuddin
Customer Growth Officer, Ambarella

Last question for you; I have one more after this. Are we compromising between flexibility and scale here, or is it a choice? With all the flexibility, am I losing scale, or are we able to do both?

Bob Kunz
Chief Architect, Ambarella

Yeah. Flexibility comes at a cost, and I think that's important to keep in mind. A lot of the time, in these large cloud systems, the cost of that flexibility is fairly small; it doesn't even show up on your balance sheet. But once you build these smaller edge devices, general flexibility is not free, and everything has to be in balance and be cheap in order to make sense from a business perspective. So how do you build flexibility into your system so you can cover a larger range of balance points? Ambarella's approach is to build flexibility into the architecture family, meaning we build CVflow.

Once we have CVflow, we can scale it up or down to hit the specific balance point we need. There is also some local flexibility. Obviously, we look at algorithms first, as I mentioned; we look ahead at the roadmap of where things are going. We build in enough local flexibility, but at a much cheaper cost than building general flexibility into our chips. An example of this is when transformers, LLMs, and ChatGPT-style networks really kicked off. We said, "All right, we need to demonstrate this." We looked around at the different chips we had and picked N1, which was at a very good balance point.

Then, with the power of our software stack and the tools we have available, we were able to execute, demonstrating that you could run LLM inference on N1 very effectively, and do it in a short period of time. So that's an exciting prototyping capability, and we can use it effectively.

Muneyb Minhazuddin
Customer Growth Officer, Ambarella

Great. Thank you. So, to summarize what you're saying, Bob: our huge differentiation is not having to make the trade-offs a lot of other vendors do. Our huge differentiation is that we can do an amazing AI architecture within the power constraints and balance it right, and between SoCs and accelerators, we can get that blend right. And finally, flexibility and scale have taken us to almost 400 million chips shipped with that type of balance. For me, the standout slide is taking CV5 into seven different application markets, easily and programmably.

I think this is a big differentiation for Ambarella. A lot of people try to understand where it fits, but this is our key differentiation from an architecture perspective. Thanks, Bob. I'll move over to you, Malhar. We also talked about the full stack: going up from the silicon architecture, we've talked a lot about the Cooper Developer Platform. But before we get there, can you tell us what trends we're seeing in systems, software, and the agentic world?

Malhar Palkar
Software Principal, Ambarella

Right. As Fermi mentioned, the transformer explosion happened everywhere. You can see transformers even in our traditional tasks like object detection; that's already there. We have our CNNGen tool, which is quite proven; it has been around for 10 years. And we had the foresight to put matrix multiplication into our third generation, which is what Bob and our architecture team did, so we were able to quickly bring up transformer models in our CNNGen toolset. That enabled a lot of new applications, like all these multimodal applications, even for traditional ADAS software. And with transformers came a new kind of AI model; these are slightly different because they are autoregressive. So we came up with a new tool, VLMGen, which is also part of the Cooper platform.

So we were able to quickly bring up VLMGen, and we can bring up all kinds of models with it; the N1 example Bob gave is exactly that. The other thing I wanted to mention: Bob talked about scaling our architecture up and down. With our toolchain, we can do that automatically, so even across a lot of chips, CNNGen is one tool that scales up and down easily. That's the current state: we had the foresight to include matrix multiplication, which enables all kinds of transformer applications, right? Moving forward, inference-time scaling is becoming very important.

Inference-time scaling means all these reasoning models and chain-of-thought approaches are coming up, and in agentic workflows, the context size keeps increasing, along with the amount of data being generated. So inference-time scaling is a challenge we are working on; that's what Bob and the architecture team are looking at for our new architecture. People outside also know the bottleneck mentioned earlier around DRAM bandwidth. There are new techniques being explored at a lot of other companies, like 4-bit quantization and token pruning, and we keep a watch on all of them. Some of them we can apply today with VLMGen.

Others we are looking at doing more efficiently in a future architecture. You also mentioned earlier that we are embracing the open source community. AI as such is still in a nascent form; new things keep coming up, and every six months you see new frameworks. Agentic workflows are very popular right now: going from one LLM to another, defining the memory, defining the context. An LLM is an application by itself, right? All these things have to work together, so we are keeping a watch on that. And with our VLMGen tools, we are embracing Hugging Face, which, as most of you probably know, is where almost all models are released nowadays.

We can quickly convert models from Hugging Face to run on our chips. With our tools being common, we can go from CV72 all the way to the CV7 demo you see outside; we can already execute some of those things with our tools. It's a scalable architecture, right? And we can incorporate the new optimization techniques thanks to embracing the agentic ecosystem and Hugging Face. Last but not least, for the physical AI that Fermi and you were referring to earlier, we cannot ignore cybersecurity and safety. There are techniques coming up in the LLM world to address that, and from our automotive work, we also have experience with ASIL and related standards. That experience helps us with cybersecurity, and we can leverage it today.
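
Ambarella's CNNGen and VLMGen tools are proprietary, so as a stand-in, here is what the step Malhar describes (pull a model from Hugging Face and shrink it with 4-bit quantization) looks like with the public transformers and bitsandbytes APIs. The model name and settings are illustrative assumptions, not part of Ambarella's toolchain.

```python
# Minimal sketch: fetch an edge-scale language model from Hugging Face and
# load it with 4-bit weight quantization, one of the footprint-reduction
# techniques mentioned above. Requires a CUDA host with bitsandbytes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Qwen/Qwen2-1.5B-Instruct"   # ~1.5B params, edge-scale (assumption)

quant = BitsAndBytesConfig(
    load_in_4bit=True,                  # 4-bit weights, as discussed in the talk
    bnb_4bit_compute_dtype=torch.float16,
)

tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=quant, device_map="auto"
)

prompt = "Describe what a security camera might see at night:"
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```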

Muneyb Minhazuddin
Customer Growth Officer, Ambarella

That's our Cooper Developer Platform. I know you covered a lot of the toolchain; I wanted to put the slide up so you could talk to it, right? Yeah.

Malhar Palkar
Software Principal, Ambarella

So let's look at the holistic picture, the Cooper Developer Platform. The foundation is the metal, the chip, what Bob and the team have developed. On top of that, we have core libraries, which are geared toward vision, like the GStreamer libraries we release. And the critical part is the tools I mentioned, the CNNGen tool and the VLMGen tool. Today, we announced Model Garden: models converted and optimized with our tools, which we are already publishing on Hugging Face as well as on our website. Customers can use those optimized models directly, so they don't have to go through the optimization exercise themselves.

On top of that, we also have a runtime environment to run all these models, right? One thing I want to stress, which you also mentioned earlier: we have one Cooper platform that runs across different chips. So once you invest in writing an application for one chip, it can easily move from CV72 to, say, CV75 or CV7, depending on the balance point you choose for your application, and you can develop quickly. Then we have the other software stacks necessary for robotics, like ROS and related pieces, which we have also developed; those are part of the Cooper Developer Platform.

The Cooper Developer Platform is holistic: it includes all the libraries and drivers that interact with the silicon, the open source models in the Model Garden on Hugging Face and in the cloud, the tools that convert them, and the application examples we have for various markets. It is one platform that can address many different needs.
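
The Cooper runtime's GStreamer integration is proprietary, but a generic GStreamer pipeline in Python shows the shape of the core-library layer Malhar describes. Every element below is a stock upstream GStreamer element; in a real product, the sink would be replaced by a vendor inference element.

```python
# Generic GStreamer capture pipeline, sketching the runtime layer described
# above. Uses only stock GStreamer elements, nothing Ambarella-specific.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Test pattern -> colorspace conversion -> on-screen sink.
pipeline = Gst.parse_launch(
    "videotestsrc num-buffers=300 ! videoconvert ! autovideosink"
)
pipeline.set_state(Gst.State.PLAYING)

# Block until end-of-stream or error, then tear down cleanly.
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```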

Muneyb Minhazuddin
Customer Growth Officer, Ambarella

Cool. Thanks, Malhar. Now we'll go to Alberto. We've talked about horizontal silicon and a horizontal stack; now, with Alberto, who has been our General Manager in Italy for VisLab, our acquisition there, let's double-click and go deep into one vertical: the autonomous driving software stack. Go ahead, Alberto.

Alberto Broggi
General Manager of VisLab, Ambarella

Yeah. The modern trends in autonomous driving are mainly going toward true end-to-end, meaning the full pipeline (data acquisition, processing, perception, fusion) is all based on AI. That does not necessarily imply one big network that receives pixels and radar echoes and delivers set points for speed and steering, because training that big network would require very large datasets, and large datasets mean a lot of energy and a lot of time for training, plus the effort of acquiring those datasets in the first place. So moving toward end-to-end means big datasets, and our solution is to split that one network into multiple networks instead.

So instead of one network, you have multiple networks. You can train them more easily because you can use smaller, more focused datasets for each specific purpose. The networks should overlap and share some latent space; by doing that, you ensure you preserve and propagate the intrinsic, non-explicit features throughout the whole stack. That's the really interesting part of an end-to-end system, and that's what we do. In this slide, you can see the current version of our stack, which is based, again, on multiple networks connected together that deliver the actual autonomous driving. But the stack alone is not enough; you need more than that. You need the complete ecosystem around it.

Again, the stack and the architecture are important, critical even, but you need something more, because training one or multiple networks requires a lot of diversified datasets. You need many different scenarios covered, and you need ground truth attached to those datasets. The more diversified the datasets and the more scenes covered, the more you enlarge your ODD, the operational design domain. And if you enlarge the ODD, you increase the maturity of your stack. So if you really want to scale up, you need lots of datasets, ground truth for them, and a way to select among them. That's what we're doing right now.

Going to the next step, we have complete tooling, and the magic here is that all of it needs to be automatic: if you really want to scale up, you cannot have human intervention in the pipeline. Starting from data acquisition, we have vehicles that drive around and record datasets, along with the driving behavior of the person driving the car. You pour this data into our automatic annotation pipeline, which attaches precise ground truth to these datasets. Once you have the ground truth, you need to select the datasets, creating the right balanced mix of data in order to cover the whole ODD.

That selection is done by another tool we have in the pipeline, which is VLM-based. It is able to classify the datasets by scene (night, day, pedestrians, trucks, and so on), and based on those statistics, you can create the right datasets for training. Once the training is done, you put these multiple networks on your car or in your simulator, and you get statistics about the performance of the system.

Those statistics are also used to prioritize the next acquisition campaign, because you understand how much data you have for specific scenarios and where you're lacking. This information feeds back into the full stack. This is the full tooling we have been using in the L4 autonomous trucking project with Aumovio, formerly Continental, which is going to reach SOP next year.
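
The closed loop Alberto walks through (drive, auto-annotate, VLM-tag, select, train, evaluate, prioritize the next campaign) can be sketched as one iteration of a data engine. Every function name below is a hypothetical placeholder, not VisLab's actual tooling.

```python
# One iteration of the automatic data engine described above. Each stage is
# passed in as a callable so the sketch stays self-contained; the stage
# names are hypothetical, not VisLab's real pipeline.
def data_engine_iteration(collect, annotate, classify, select, train,
                          evaluate, prioritize, target_odd_mix):
    drives = collect()                        # vehicles record sensors + driver behavior
    labeled = annotate(drives)                # automatic ground-truth annotation, no humans
    tagged = classify(labeled)                # VLM tags scenes: night, day, pedestrians...
    dataset = select(tagged, target_odd_mix)  # balanced mix covering the target ODD
    networks = train(dataset)                 # train the stack of focused networks
    stats = evaluate(networks)                # per-scenario stats, on car or simulator
    prioritize(stats)                         # aim the next acquisition campaign at gaps
    return networks, stats
```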

Muneyb Minhazuddin
Customer Growth Officer, Ambarella

Cool. That's great news; it's good to see these things going into production pretty fast. Now, taking a tangent with you: there's a lot of noise about robotics, movement, and AMRs at CES this week. How can we take some of these learnings and apply them to robotics?

Alberto Broggi
General Manager of VisLab, Ambarella

Yeah. Actually, we started with robotics at the very beginning, with vehicle automation for big vehicles like earthmoving and agricultural vehicles. And there are strong commonalities between many fields of robotics and the autonomous driving we've been working on. The basic architectural principle is very similar, and the core functions of a robotic system are the same fundamental blocks: perception, data fusion, planning. I would also say that some differences may be in the kind of sensor data you're using.

In robotic systems, you have heavy use of cameras, plus other sensors like tactile or pressure sensors that you don't have in automotive, of course. Other differences are in the precision of the planning you have to do. But despite these differences, the underlying structure is largely the same, and one can benefit from the architecture we have been developing for autonomous driving, which is quite similar. I didn't mention it before, but our architecture is based on two stacks.

The first is what we call the fast stack, the one that receives the input data and outputs the steering and throttle positions. On top of that, we have a VLM-based slower stack that synthesizes higher-level information about the scene and the context and drives the lower level. So moving toward higher levels of automation can benefit from this as well.
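
A toy sketch of that two-rate split: a slow VLM loop periodically refreshes scene-level context while the fast loop consumes the latest context on every control tick. The model and I/O objects are hypothetical placeholders, not Ambarella or VisLab APIs.

```python
# Two-rate "fast stack / slow stack" skeleton. The slow thread runs a VLM
# at ~1 Hz to summarize the scene; the fast loop runs perception-to-control
# at ~30 Hz, conditioned on the latest scene summary. vlm, policy, sensors,
# and actuators are hypothetical objects standing in for real components.
import threading
import time

context = {"scene": "unknown"}   # latest high-level scene description
lock = threading.Lock()

def slow_stack(vlm, sensors):
    while True:                              # ~1 Hz reasoning loop
        scene = vlm.describe(sensors.snapshot())
        with lock:
            context["scene"] = scene
        time.sleep(1.0)

def fast_stack(policy, sensors, actuators, hz=30):
    period = 1.0 / hz
    while True:                              # ~30 Hz control loop
        obs = sensors.read()
        with lock:
            scene = context["scene"]
        steer, throttle = policy.act(obs, scene)
        actuators.apply(steer, throttle)
        time.sleep(period)
```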

Muneyb Minhazuddin
Customer Growth Officer, Ambarella

That's like the human brain. There was a popular book called Thinking, Fast and Slow about how the brain works: there are things you naturally do fast and things you learn until they become fast. That's pretty close to what we're thinking. Well, that was our panel section. Coming back to the stack, before I let you folks go, in a short 30 seconds: what does the future look like for architecture, Bob?

Bob Kunz
Chief Architect, Ambarella

That's kind of a dangerous question, because maybe I know a little more than I can really say. What I would say is that we now have a lot of experience deploying all of these transformer-based networks on our architecture. We have a VLSI team that works very, very fast; the important thing is to make sure they have interesting things to do. Now that we've understood and seen how these systems operate on our hardware, we're looking at the opportunities: what does the next generation look like? I think that's what we're focused on, and...

Muneyb Minhazuddin
Customer Growth Officer, Ambarella

Exciting.

Bob Kunz
Chief Architect, Ambarella

Sort of even more efficient, lower-power devices in the future.

Muneyb Minhazuddin
Customer Growth Officer, Ambarella

You're an environmentalist, yes.

Bob Kunz
Chief Architect, Ambarella

That's right. Yes, exactly.

Muneyb Minhazuddin
Customer Growth Officer, Ambarella

Alberto, what about you? Where's the trend going?

Alberto Broggi
General Manager of VisLab, Ambarella

I would echo what Bob was just saying: having the possibility of these two layers of architecture running together, mixing the two instead of having just one slow stack and one...

Muneyb Minhazuddin
Customer Growth Officer, Ambarella

Fast stack.

Alberto Broggi
General Manager of VisLab, Ambarella

Fast stack, and having them run together, thanks to the higher compute power we can get. That's the future.

Muneyb Minhazuddin
Customer Growth Officer, Ambarella

So, not two brains, one brain. Finally, Malhar, what about you? Where are agentic workflows and the software going to go?

Malhar Palkar
Software Principal, Ambarella

For software, it's about how quickly you can enable customers to bring up the SLMs, the small language models you're referring to, along with the VLMs and LLMs. Through agentic workflows, we can quickly bring those small models to the edge. That's going to be the key part moving forward.

Muneyb Minhazuddin
Customer Growth Officer, Ambarella

Thank you. Thanks, folks; you can head back now. Thanks for your time. As the panelists head back, I'm going to give you a quick update on our business and portfolio so you get a quick view. I know we're running on time, so I'll give you a fast look at two big parts of our business. IoT, one of our largest business portfolios, was almost 70% in 2025. As Fermi pointed out, more and more AI is happening there. We've got a huge presence in enterprise security, and from enterprise security, we're seeing it move into public safety markets and smart home markets, with a lot of announcements along those lines.

It's also going into portable video: you see the drones flying outside, and you're seeing robotics, as we've been mentioning. There's adoption of these edge IoT environments happening on one end, the edge endpoints. On the other end, we're starting to see the emergence of edge infrastructure applications saying: I need a small box of compute where I can aggregate all of this and compute at the edge without a massive server environment. So we're seeing a huge uptick in this business across the range of SoCs we just articulated (first gen, second gen, third gen) and the attributes Bob and Malhar mentioned: a unified SDK, a family with a common set of software.

One thing I know we take for granted: we have some of the best image quality, a leading AI ISP in the market. That's huge, but we take it for granted, and a lot of people don't realize how good and how advanced we are. Power efficiency: Bob's job continues to be being an environmentalist, because we have always been very power constrained. And the GenAI models: when DeepSeek came out last year, within two months we had it running, right? Because of the flexibility and balance we've been talking about. So, a whole range of chips. CV7 is something we announced yesterday. As an example, CV7 goes beyond CV5 with 2.5x more AI in CVflow 3, 2x more encode, and 2x more CPU, and of course, it's 4 nm.

So it has a lower power envelope as well, and it supports all the applications; Fermi had that slide with multiple applications on CV5, and we're already seeing CV7 designs in a lot of those environments. What's really interesting is that we announced CV7 and its performance yesterday, and competitors announced a similar new chip of their own, but we're still at 2x their performance at launch.

I think the powerful thing for us is being able to keep launching at the same time as the competition while already being 2x ahead of them, doing 8K vision at 60p that other folks can't do yet, right? So that's the differentiator. But enough of me talking about it. We also wanted to share a testimonial from a customer who couldn't be here: from IQSIDE, we have Sabrina, the CEO.

Sabrina Steinborn
CEO, IQSIDE

Hello. My name is Sabrina Steinborn, CEO of IQSIDE. IQSIDE was recently launched as the driving force behind Bosch branded intelligent video solutions. As a spinoff from Bosch, we have a global footprint and a solid foundation of proven reliability, quality, and a relentless drive for innovation. By using AI, we help build a future where safety and security incidents no longer disrupt our lives. Our cutting-edge video solutions combine world-class hardware with intuitive software to protect people and assets. Bosch cameras are installed across the globe in a broad range of applications, from buildings and public spaces to critical infrastructure. As IQSIDE, we will continue to lead the way in AI-enabled video. We are on a mission to advance predictive security, to help our customers know what's coming next, act faster, and unlock insights beyond traditional security.

This is an exciting period of transformation for physical security, driven by innovations in AI technology, and we at IQSIDE want to lead the way forward. Today, AI-based video systems can classify and tag captured image data to streamline monitoring. As we embark on the next generation of AI, we are providing customers with technology that will help them overcome the inefficiencies of human monitoring. Our new portfolio with Gen AI models allows us to automatically monitor and generate scene descriptions for the images recorded by our cameras. For example, our cameras understand the difference between a person on the phone walking close to a car and a person actually vandalizing a car. Based on the same natural language Gen AI technology that has taken the world by storm, these visual language models help ensure that no potential threats go unnoticed and, at the same time, avoid false alarms.

Over the last two decades, our partnership with Ambarella has been a key ingredient of our innovation in physical security. Ambarella's impressive video processing heritage and their systems on chip, with incredibly efficient AI performance per watt, enable us to run our AI software at the edge in low-power devices like our cameras. Our joint innovation continues: we are expanding our edge AI portfolio with more advanced neural capabilities that will allow us to generate reliable alerts, and thanks to Ambarella's AI ISP, we will now enhance video quality in real time. At the edge, it's important to have everything integrated into one chip: the perception, AI, and general processing blocks. That's what Ambarella's single system on chip does. It makes deployment of vision language models so much more efficient.

And since we run GenAI-based monitoring directly on our cameras, we reduce human error and cut down on cloud processing costs for our customers. But there is more to it. Everything happens locally, which means faster response times. Plus, sensitive data stays at the edge, which improves privacy, security, and reliability. Looking ahead, as we continue to improve the efficiency of our Edge AI technology, we see huge potential to unlock new revenue streams from these advancements. Our collaboration has been a key driver of our shared success, and together we've combined cutting-edge technology with deep market expertise to deliver meaningful impact for customers deploying Bosch video systems. Ambarella's world-class support has consistently stood out, no matter how complex or challenging our requirements have been. We at IQSIDE look forward to continuing our collaboration with Ambarella in the security and Edge AI markets.

Muneyb Minhazuddin
Customer Growth Officer, Ambarella

Thank you, Sabrina. It has been a pleasure working, and continuing to work, with some of our top customers. I just want to highlight, if you haven't had a chance to look at the robotics demos, that robotics is adopting our chipsets all the way from vision at level two to stereo vision at level three, as well as the brain autonomy we just discussed with Alberto. We have a whole bunch of demonstrations. One of the first you'll see flying around is the Antigravity drone; if you haven't seen it, you should go check it out with the vision goggles. We call these robotic drones, flying around on voice commands and following objects, the first type of robotics.

Let's quickly switch over to the auto side of the business. Auto is the second large part of our business, at 30%, and again we're seeing huge AI adoption. The auto safety telematics business is growing significantly, and we've had a lot of design wins and announcements with key customers in that area. We see AI utilization, inferencing, and broad adoption in fleet and aftermarket: driver cameras, e-mirrors, driver management systems, driver monitoring systems, as well as fleet and telematics. At the same time, we had a good discussion with Alberto on where we're going with our CV3-AD family and software stack from L2+ to L4, where we're starting to get design wins and build out the software. So we have a whole range of chips.

This is very important. People ship a lot of chips into different markets, but to Fermi's point, we have this whole range of established chips, all of them ASIL compliant and all of them AEC-Q100 compliant. They run all the way from vision, viewing, and sensing to the radar family from the Oculii acquisition, our CV3R family, which is available now; you should check out that demo. And beyond the vision and perception of CV2, CV75, and the CV5 S-Series, there's CV3-AD, which also acts as the central domain controller for all of it.

So you need that brain as well, along with the viewing and sensing aspects of our portfolio. Performance-wise, you can benchmark us, and in the demos you can see amazing performance against the competition in real time as you check it out. So again, enough said; we'll hear a customer testimonial from Kodiak.

Jamie Hoffacker
VP of Hardware, Kodiak Robotics

Hi. I'm Jamie Hoffacker, VP of Hardware at Kodiak Robotics. At Kodiak, I'm responsible for our hardware platforms, including our Class 8 big rig trucks. I've been fortunate enough to work with Ambarella for almost 20 years now, including developing an Ambarella ASIC-based encoder, as some of you may remember, that was used by NBC to broadcast the 2008 Summer Olympics. So what do we do at Kodiak? At Kodiak, we're a leading provider of Level 4 AI-powered autonomous vehicle technology, and today, we're focused on tackling some of the biggest jobs in trucking. Kodiak has built a single integrated software platform designed for deployment across three main verticals: long-haul trucking, industrial trucking, and defense. Take our industrial deployment with Atlas Energy in West Texas, where we've been contracted to deliver 100 AI-driven trucks. These trucks are operating today completely autonomously.

We designed the Kodiak AI driver to operate in challenging driving environments, and this is definitely one of them. We operate every day of the year, and truck uptime is everything. We can't stop delivering, even in heavy dust or rainstorms. So this is where we turned to Ambarella to customize the solution with Kodiak. We needed the best camera SoC available on the market, and we needed it to be robust across a wide range of conditions. Today, each of the AI-driven trucks in West Texas is running with four Ambarella CV2s, providing best-in-class vision performance, especially critical in low-light and high-dynamic range driving conditions. Beyond our West Texas deployment, we are particularly excited about our current work with Ambarella on the CV3 platform.

Not only does the CV3 provide best-in-class camera performance, but we're also able to utilize the additional processing power for radar and lidar processing, as well as running our time-critical neural networks at the edge. Using the CV3 for sensor processing also materially reduces complexity and the power needs of Kodiak's overall solution. Instead of running individual cabling to the central compute node, for example, we're able to do core processing next to the sensor, which also reduces central computing demands. Ambarella continues to be a great partner to Kodiak, delivering practical solutions and top support, allowing Kodiak to stay on the cutting edge.

Muneyb Minhazuddin
Customer Growth Officer, Ambarella

Thanks, Jamie, and the team at Kodiak. I'm going to wrap up the auto section. As we discussed with Alberto, there's the online software stack, from deep sensing across imaging and radar to the deep planning capability for motion forecasting, maneuver and trajectory planning, and vehicle control. And there's the offline data pipeline: data capture, the auto annotation Alberto touched on, data selection for the ODD, which is very important, and how we run future data acquisition campaigns. That's the auto part of our business. Now, Fermi mentioned we're also thinking about expanding our go-to-market, so let me touch on that. Here's our current go-to-market as it stands.

We have our silicon, which goes through ODMs and tier ones to OEMs and tier two partners, and scales from there. I call it a typical push motion: we take a design win, we get involved, and it scales. What you're seeing as we evolve our approach and our stack toward a full-stack solution, and if you remember one thing when you leave, remember this full stack, is that a full-stack solution also requires us to create what I would call a pull motion. A pull motion happens when, beyond your design wins, software and applications are built with an affinity to your silicon.

That pull is usually built around vertical use cases, with ISVs and software vendors who can solve a particular problem, and it's then scaled and deployed through system integrators across the global landscape. With that in mind, that's the expansion and growth we want to see through our channels in our next phase of growth, and it's why we launched our developer zone today. The developer zone allows folks to come in, build edge applications, and test out our models in the cloud before they go and do their evaluation. We've talked about our Edge AI software stack and the Cooper Developer Platform, where we have our SoCs, our operating system, our kits, and our SDK.

Today, as part of that developer zone, we're also launching our Model Garden, which Malhar touched on: dozens of models already tuned for our silicon. And we're launching agentic blueprints. Agentic gives us a level of automation. Before, to enroll developers, I had to publish SDKs and APIs, and it was a cumbersome process for people to learn to code against our APIs and debug. Agentic workflows let us provide that automation very quickly, so we're creating agentic blueprints, published on our website with several applications. At launch, we announced them with two of our ISV partners, Cogniac and MeldCX. We have the CTO of Cogniac, Sandip, here, give us a wave, and the COO of MeldCX, Thor, in the back there.

They're our launch partners. What's amazing is that within a matter of days and weeks, we were able to bring their applications in the retail, hospitality, and transportation markets, like railway network applications, onto our platform very quickly. It shows the programmability, the software maturity, and how fast we can get this done. So we're opening this early access program for developers to come and start building, so that applications have affinity to our silicon; we're enabling that because it will give us a level of scaling that's new to us. As Malhar was articulating, developers are able to bring any type of data, real or simulated, and take foundation models. We've published a few, and eventually we expect to support any model.

They can then bring it to our platform, use our software stack to port and train it, test those models in the cloud or eventually on the kits, attach any type of modality, and deploy to any edge, fine-tuning the models as they go. Our software stack really provides that capability: any model, any type of data, any modality, to any edge. And as Malhar said, AI is moving super fast. We don't know what new models and technologies will come in the next six months or a year, but as Ambarella, we want to be ready for any of them. Any type of model, any type of modality, we are ready.
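As a concrete illustration of the "any model, any data, any modality, to any edge" flow described above, here is a hypothetical agentic blueprint expressed as plain data, with a placeholder driver. All field names, model names, and paths are invented for illustration and are not Ambarella's published schema.

```python
# Hypothetical "agentic blueprint": a declarative spec an agentic workflow
# could consume to port, fine-tune, test, and deploy a model. All names here
# are illustrative placeholders, not Ambarella's actual schema.
blueprint = {
    "application": "retail-shelf-monitoring",          # vertical use case
    "model": {
        "source": "model-garden",                      # silicon-tuned pretrained model
        "name": "vlm-small",                           # placeholder model name
        "fine_tune": {"data": "s3://example/retail-frames", "epochs": 3},
    },
    "modalities": ["camera", "time-series"],           # any modality attaches here
    "targets": ["cloud-eval", "devkit", "edge-soc"],   # cloud first, then kits, then edge
}

def run(spec: dict) -> None:
    # Placeholder driver: walk the deployment targets in order.
    for target in spec["targets"]:
        print(f"deploying {spec['model']['name']} for {spec['application']} to {target}")

run(blueprint)
```

A declarative spec of this sort is what would let an ISV describe the application while the workflow handles porting and deployment, consistent with the days-and-weeks onboarding described below.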

Our flexible architecture is really built for that, and we'll prove it through our ecosystem and how we go to market. The timing is also critical, because we're seeing this explosion of edge AI, and agentic AI provides a level of sophistication and automation that hasn't existed in the past. The timing matters because onboarding ISVs now takes days and weeks, not months or years. That's the speed at which we can go. Our installed base and the model architectures we've already solved for mean our ecosystem is ready to expand at exactly the right point in time, because time to market with us is going to be super fast.

To close my section: our growth trajectory. We'll have a chunk of business still growing through IoT and auto customer expansion. We'll see additional revenue through this channel scale motion, because suddenly you have distribution, resellers, ISVs, and SIs picking up a whole range of customers we haven't tapped in the past, plus exciting new opportunities with our edge AI and infrastructure customers. So we're really excited about the growth ahead for Ambarella. With that, I'll pass it on to Chan, our COO, who has been here a long time and will talk about scaled operations and our VLSI sophistication. Over to you, Chan.

Chan Lee
COO, Ambarella

Thanks for running with me today. My son watches YouTube, and he's so impatient he plays everything at 2x or 4x, so the speakers sound like they're racing. Maybe I need to speak like that, huh? Thank you for coming. OK. I will introduce you to my team, what we have been doing, and maybe a glimpse of the future. From the beginning, we have taped out over 40 production SoCs. Bob just mentioned A1, and it hit me: that was our first silicon, at 130 nm. Bob looks the same, except he maybe had more hair back then. The latest SoC we taped out, just recently with Samsung, is 2 nm. It took a lot of work, but it was beautiful.

What's really remarkable about our silicon team is not just how many chips we tape out; it's the success rate. 97% of the SoCs we taped out went into production with a single spin or less, and 70% of them went into production with no spin at all, A0 production. I used to work at Intel, and there was this dream at Intel of going into production with A0. I was reading some news that Intel is taping out an E0 stepping to go into production, which made me quite sad to see. But at least we are doing a much better job here, and it is truly exceptional in my mind. OK, sorry, the microphone's not working.

Most recently, we have been working on more advanced node products: 10 nm, 5 nm, 4 nm, 2 nm. We have taped out 14 production SoCs on these nodes, as recently as a few days ago on 2 nm, and 12 of those are already in production, demonstrating our technology leadership in every sense. Ambarella has been in this business for quite some time, 22 years, and we have gathered domain expertise over that time: very mature, proven vision and multimodal AI IP and deep system-level expertise. The Ambarella team is known for very efficient execution, as Fermi was saying, and at the same time high-quality silicon, as you can see from the number of spins we do. We also do rapid process node migration, moving down nodes very quickly as a reliable semiconductor partner to our customers.

Lastly, we are very proud of the more than 400 million SoCs we have shipped to date. This showcases our product quality and our ability to scale operations, shipping into our customers' products everywhere, which leads me to my next topic. For a semiconductor company these days, supply chain and operations are ever more critical, given the current environment and backdrop you are all well aware of. To that end, we have maintained decades-long cooperation with our partners. From Samsung Foundry, I recognize some faces here, and I like to tease and call out some names just for fun: I see Peter, and Sean, and, oh, there's Kelvin. Yes, Samsung Foundry partners are here to support us. We have worked together for 17 years.

Fermi and I actually had a bet: are we the only company that has worked with Samsung exclusively for so long? I lost the bet. He was right; we are the only one. What this long, strong relationship gives us is early access to the most advanced nodes, like the 2 nm GAA process, and stable wafer supply through the many supply chain shocks we have experienced over and over recently. Our supply chain partners are global, geographically diverse, and resilient to those shocks. Samsung Foundry, for example, has mega fabs in South Korea and Texas, and our OSAT partners, like ASE, Sigurd, and others, give us locations across Asia: Taiwan, South Korea, and Southeast Asia.

Our operations team is very flexible, and we can rapidly relocate resources to remediate any surprise, shock, or stress in the supply chain. That has been our strength. Next is Samsung Foundry's video clip. Speaking is Margaret Han, head of Samsung's U.S. Foundry. If I can get it to click. Oh, it's not clicking. OK, there it is.

Margaret Han
EVP and Head of US Foundry, Samsung Semiconductor

Hello. I am Margaret Han, Executive Vice President and Head of U.S. Foundry at Samsung Semiconductor. In the AI era, Samsung Foundry delivers customized solutions across a full semiconductor value chain, from advanced process technology and design IP to full turnkey services and advanced packaging. Our state-of-the-art manufacturing fabs span from Korea to the onshore facilities here in the United States. These capabilities enable next-generation AI systems to process massive amounts of data faster, more efficiently, and with great precision. One of our most valued long-term partnerships is with Ambarella. We have worked together for more than 17 years, starting at a 45 nm node for their first video processor SoCs, where we helped them become leaders in HD broadcast video and sports cameras. Over the years, our strategic partnership has delivered more than 40 products.

This includes Ambarella's second-generation AI SoCs, built on our 10 nm process from 2019 to 2021, and today's third-generation AI SoCs, implemented on our advanced 5 nm, 4 nm, and 2 nm nodes. Most recently, Ambarella announced its new edge AI CV7 SoC family here at CES. These devices are built on Samsung Foundry's latest 4 nm process. Within days of receiving alpha samples, Ambarella is showcasing the CV7 at CES, highlighting its readiness and real-world impact. The CV7 delivers a major leap in AI capability, enabling intelligent processing of multiple live 8K video streams with improved energy efficiency. We truly value our long-standing partnership with Ambarella. Through close collaboration, we have helped ship more than 39 million edge AI SoCs and more than 385 million SoCs in total.

The newly announced CV7 family, built on this success, using our 4 nm process and advanced architecture, delivers higher AI performance per watt. This enables support for the latest VLM, VLA, and agentic AI models across both edge infrastructures and physical AI applications. Looking ahead, our joint innovation continues at full speed. We are excited to have Ambarella, as well as our lead customers, enter production on Samsung Foundry's most advanced 2 nm gate-all-around process technology. Samsung Foundry will continue to empower Ambarella to exceed customer expectations, together advancing the future of edge AI and semiconductor innovation.

Chan Lee
COO, Ambarella

OK, thank you. We love the 4 nm, actually; it's a beautiful process node. So, one last thing. Since its founding, Ambarella has done, in a very limited and selective way, a few semi-custom joint SoC products with key customers. More recently, we are seeing very strong interest; our customers' voices are getting louder and louder, asking for custom SoCs or semi-custom products to differentiate. It's so loud it's ringing in my ears now; I've been hearing it all last year, and recently as well. We think there are three main reasons driving this. First is simply cost. The economics of 4 nm and 2 nm SoC development are daunting: people estimate hundreds of millions of dollars to develop such SoCs, and at 2 nm, some estimate a billion-dollar investment.

A mistake or surprise when you're investing $1 billion means failure; it's just prohibitive. The second reason is the scarcity of design talent. The pool of engineers who can actually develop these complex, advanced-node SoCs is simply not big enough worldwide. Everybody wants to design such products, but it's not easy to find the people to do it. The third reason I call Apple envy. Apple invested for decades, billions of dollars, maybe tens of billions, to get to where they are. System OEMs need that kind of differentiation in their products, and the heart of that differentiation starts with silicon; silicon is what you really need. This goes back to Steve Jobs at Apple around 2000, when I believe he started investing heavily in silicon. So those are the three reasons behind this.

In this environment, meet Ambarella. We have unique system design expertise from years in this business, a stable and mature SDK, and vision and multimodal AI IP proven in the field. Fermi was referring to this 2 nm semi-custom SoC: we just taped out this product, working jointly with a key customer. The spec and all aspects of the design were jointly defined, and it will of course be used by that key customer. Because there is so much interest and demand, we are seriously exploring expanding these custom and semi-custom SoC product opportunities, and that is something to look forward to in the near future. Thank you. Louis, do you want to come up? We're going to Q&A now?

Louis Gerhardy
VP of Corporate Development, Ambarella

Yep. I'll take the mic too.

Chan Lee
COO, Ambarella

Thank you.

Louis Gerhardy
VP of Corporate Development, Ambarella

Thank you, Chan, for your presentation, and thank you to Samsung for 17 years of support; we look forward to the future together. We've used up our 90 minutes, and we fit a lot into them, I hope you agree. We'll jump right into Q&A now. Casey, if you could arm yourself with a microphone, and if anyone has a question, please raise your hand in the room. Sure, go ahead. I do have a few online questions as well. There's a question over there? Yeah, Kevin, let me bring you the microphone first. There you go. Thanks, Chan.

Chan Lee
COO, Ambarella

Thank you.

Yeah, thanks for the presentation and the very impressive history. The last topic caught my attention, the custom SoC. ASICs were very popular for a while in the '80s and '90s, and then the cost of doing a new design, even versus FPGAs, got to be too high. But now ASICs are back again, because you can't get it accomplished any other way and the cost doesn't matter to some of these companies. Where would your product fit in? Is it because customers can't do it otherwise, or is it a low-cost solution?

Fermi Wang
Co-Founder, President, and CEO, Ambarella

It's definitely not for low cost; it's really for differentiated IP. All the customers who come to us asking for this kind of service or cooperation do so because they value our IP: perception systems, particularly our AI accelerators, and also our 2 nm expertise. That is why they come to us. If they don't value any of our IP, we won't take that business anyway. And if they want to come to us at 2 nm because of cost, they've come to the wrong house. Ambarella is known for a 60% gross margin business, and we intend to maintain our corporate gross margin on this.

Thank you.

Thank you. I'll echo the congratulations and thanks for the presentation. Fermi, maybe a follow-up on Kevin's question about the semi-custom or custom ASIC business. Walk us through how that might impact financials. Is the NRE booked as revenue, or is it contra R&D? And once you get into production, do you sell the chip, and at what kind of margins? Some of the bigger ASIC vendors talk about margins below your target of 59% to 62%. Thank you.

Thank you for that question. In fact, that's one of the reasons we hesitated at the beginning, but now, with enough interest, we think we need to start talking about this. Let's use the first semi-custom chip as an example. The majority of the NRE payments have already been made, because we have taped out the chip, and the agreed ASP is close enough to our gross margin that I don't think there will be a significant impact. But for future projects, the companies that come to us might have different expectations.

We definitely want to figure out a business model that benefits both sides. So for the first semi-custom chip we designed for our customer, the impact to our gross margin is small. Moving forward, we need to figure out that business model, and we'll continue to report to you as we make more progress.

Louis Gerhardy
VP of Corporate Development, Ambarella

Let me do an online question, and then we'll go to you, Tore. There are several different ones; let me paraphrase. This one, I think, is for Muneyb, about the comment on scaling through the channels. Could you talk about the sequence of events or milestones the investment community should expect as we move in that direction?

Muneyb Minhazuddin
Customer Growth Officer, Ambarella

Yeah, we have some clear milestones this year to start. We began with the ISV community, and there's a sequence, because what ISVs like MeldCX and Cogniac bring is a catalog of applications, so people can start visualizing the types of use cases. As more and more ISVs come on, we'll have a catalog of different applications and use cases; our target is dozens of ISVs by the end of the year. Then you start onboarding channels, distribution, and resellers. You don't want too many; these should be value-added partners, adding to what we already do. And then a handful of system integrators also come in.

So by the end of the year, we should have made progress in building this out. The priority is a set of applications, the more the better, and the pace is accelerating. Second is system integrators who can bring this together, putting system hardware and software together and taking it to market. Then come distribution and channel resellers. By the end of the year, we should have some pretty significant opportunities signed up. The revenue will start flowing in future years, but onboarding, as you know, takes time.

Louis Gerhardy
VP of Corporate Development, Ambarella

I did miss part of that question, which offered congratulations on having your first two ISVs demo here at the show. Tore, I know you have a question; let me just get in one more online question first, and I think this one is for a combination of Fermi and Alberto. One of your competitors, a company experienced in autonomous driving, made an acquisition in the robotics market today. Can you talk more about your robotics strategy and how it will play out for you? And given that you have the SoC, and Alberto talked about the software being applicable to that market, why not just build the robot yourself?

Fermi Wang
Co-Founder, President, and CEO, Ambarella

Let me answer the last question first. There are thousands of different types of robots; there's no way we can address all of them ourselves. We can do autonomous driving software because that is one software stack that can serve all the OEMs, but for robotics there is no single software stack or application we could build that serves them all. However, I missed the acquisition news that you mentioned.

Louis Gerhardy
VP of Corporate Development, Ambarella

Yeah, Mobileye acquired a robot company.

Fermi Wang
Co-Founder, President, and CEO, Ambarella

Oh, I see. Sorry, I missed that; it's been a busy day for me. But I really think this is a new opportunity for all of us. We talk about aerial drones, and I expect huge consolidation among drone companies in the next year or so, and that will apply to all the robotics companies too, so I won't be surprised to keep seeing more of it. For Ambarella, our strategy is very simple. First of all, as a semiconductor company, we need to focus on volume, so we focus on autonomous driving cars and aerial drones first, because that volume can support our R&D investment. More importantly, to the second point Alberto made in his presentation, all our investment in autonomous driving can feed the robotics software stack.

That's definitely something we're going to leverage. We're going to continue to use that software stack, not just for demos: we're going to open it up, just as we opened our software stack for OEMs to license, for any robotics customer who wants to license it. Whether it's perception, decision-making, VLMs, or LLMs, whatever we have ported to our system will be opened up for customers to build on. Our robotics customers can focus on their own expertise, and we will provide the pieces of IP they want to get from us. From the technology point of view, maybe Alberto wants to add a few words.

Alberto Broggi
General Manager of VisLab, Ambarella

Just perfect for me.

Fermi Wang
Co-Founder, President, and CEO, Ambarella

Sorry, that's not intentional.

Louis Gerhardy
VP of Corporate Development, Ambarella

Yeah, Tore, please go ahead.

Tore Svanberg
Managing Director and Senior Analyst, Stifel

Yeah, thank you, Louis. Tore from Stifel. Fermi, could you double-click on the timing of this strategic change to go after more of a channel strategy? Is it because physical AI is at an inflection? Is it because of all the IP you've developed on the auto side that can now be leveraged into new markets? Is it the maturity of Cooper? Why now, I guess, is the big question for me?

Fermi Wang
Co-Founder, President, and CEO, Ambarella

This idea started more than a year ago. When we started engaging with robotics and physical AI applications, it became very clear there are many, many opportunities. Most of them are startup companies trying to come into this market, and none of them has a mature solution. For the past 20 years, we have focused on key customers in large markets, Motorola and other large companies; that has been our go-to-market strategy. But when we started engaging on these physical AI applications, it became very clear very quickly that our traditional go-to-market strategy doesn't work. Muneyb has helped tremendously since he joined to pull this whole thing together, but even before we hired him, it was clear this had to be the strategy for Ambarella.

Really, it's because we want to be the leader in Edge AI. The way Edge AI is developing means going to these thousands of possible customers, and we need to find a way to support them. Ambarella has the technology, the silicon, the software, everything, but the go-to-market strategy has to fundamentally change so that we can engage a customer quickly. You're going to continue to hear us, particularly Muneyb, Louis, and myself, give updates on this progress: how we engage our ISVs and how they help our go-to-market strategy. That is the fundamental change behind this conference today.

Tore Svanberg
Managing Director and Senior Analyst, Stifel

Thank you.

Louis Gerhardy
VP of Corporate Development, Ambarella

We're already about 12 minutes behind. So we'll take maybe another question or two in the room before we take the break for refreshments in the back, Casey. Yeah. As a reminder, we'll have refreshments and some light snacks right outside to help you pre-game for your night in Las Vegas.

Just, Fermi, as you go deeper into the SoC realm for edge AI, do you think you'll need connectivity assets? Or do you think you can do this just on the device and don't need that? And then I have a follow-up.

Fermi Wang
Co-Founder, President, and CEO, Ambarella

I take connectivity to mean how we scale our silicon to address trillion-parameter applications. I think eventually we will. But the connectivity required for edge inference is dramatically different from the connectivity for a training system in the data center. As we approach this market, we are focused on the majority of edge AI applications that can be addressed by a single chip, without going to multi-chip solutions.

That said, giving customers a multi-chip solution to scale up edge AI inference performance is, I think, critical, though that technology doesn't need to be as fancy as the data center's. Bob says he can't talk about the details, but you can imagine this has been a hot topic inside Ambarella: how we build a next-generation chip that can scale up for those potential inference solutions. We will give you more updates later.

Would that be licensed? Is that organic? Or would you have to license that?

It would definitely be licensed; that's an area where we don't think we have in-house capability. But we only need the physical connectivity. At the higher levels, there are many things we can do to integrate the basic PHY, essentially the link SerDes, into our system most efficiently. At the architecture level, we understand inference engines better than anyone outside.

Got it. And then maybe just sizing some of this. To do a custom or semi-custom chip, the customer has to believe the lifetime revenue for the chip, say over a three-year life, must be over $100 million to justify it.

Easily, yes.

Yeah. Is that the right way to think about it, that each of these is custom?

Without that, we won't engage. But all the customers engaging with us can easily produce more than $100 million of revenue per product.

And then just lastly, two quarters ago you announced some big edge AI wins, which was positive to see, but last quarter we didn't really hear much of an update. Does that indicate the momentum has changed in terms of design wins, or is it just lumpy, with the timing of announcements varying? Maybe you can talk to that a little. Thanks.

First of all, yes, it's lumpy, because people are still figuring things out. One quick update: we are continuing to engage multiple edge infrastructure customers; in fact, some of them are in our space here showing demo prototypes. That particular design win is going to production in Q2 this year, and it will definitely trigger a lot of conversation because it's a brand new application. The ease of use of that product will give people an idea of how to implement similar products in the space. So I'm eager to work with our first customer, and maybe the first few customers, to introduce the product; that will definitely trigger more discussion in this space.

Muneyb Minhazuddin
Customer Growth Officer, Ambarella

I just wanted to add to that. With edge infrastructure, you're going from design wins in more consumer-oriented markets to B2B enterprise. Those design cycles are a little longer and more mature, so it's going to take a bit more time. The cadence will look different: unlike consumer, where announcements come one after another because you want to signal to a consumer market, enterprise has a longer maturity cycle. Not as long as auto, but a little longer. That's it.

Louis Gerhardy
VP of Corporate Development, Ambarella

I had a related question today; it didn't come in online, but John and I saw maybe 20 or 30 investors today. This one is probably for you, Muneyb, and maybe Malhar. Ambarella has such a reputation for taking in data from real-time sensors, but some of these new applications, like edge infrastructure, involve less time-sensitive data. What has to change in the Cooper Developer Platform, or what has already changed, to accommodate that type of workload?

Muneyb Minhazuddin
Customer Growth Officer, Ambarella

Sure. Real-time computer vision data is one aspect of data at the edge, but 80% of the data today is still time-series data. Think about data that comes out of machines, PLCs, IPCs, et cetera; that's a whole aspect that's untapped. Multi-modality is about how you combine the two. And what are we ready for already? A lot of time-series data is just text, and with time series and parameters, our ability to handle LLMs, to handle text, is super critical for adding context, the same way perception provides context for autonomy.

Now you're taking the context of vision together with time series from robotic arms and PLCs and putting a scene together, recreating a scene in a factory. A lot of this time-series data is real time, but it's also text-based, and combining it with vision to build that context gives us a huge advantage. We're already ready and already there. This is where the use cases with ISVs matter, because the ISVs are building exactly such applications. Sorry, go ahead, Malhar.
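To illustrate the point that time-series data is effectively text once serialized, here is a minimal Python sketch of fusing PLC or robot-arm telemetry with a camera frame in a single VLM prompt. The record format and the `<image>` placeholder are hypothetical illustrations, not Ambarella's API.

```python
from datetime import datetime, timedelta

def serialize_telemetry(samples: list[dict]) -> str:
    # Render machine readings as compact text a language model can ingest.
    lines = [f"{s['t']} axis={s['axis']} torque={s['torque']:.1f}Nm" for s in samples]
    return "\n".join(lines)

t0 = datetime(2026, 1, 7, 9, 0, 0)
telemetry = [
    {"t": (t0 + timedelta(seconds=i)).isoformat(), "axis": 3, "torque": 12.0 + i}
    for i in range(5)
]

prompt = (
    "Camera frame: <image>\n"                  # vision tokens supplied by the SoC pipeline
    "Robot arm telemetry (last 5 s):\n"
    f"{serialize_telemetry(telemetry)}\n"
    "Question: does the torque trend match what the camera shows at the gripper?"
)
print(prompt)
```

Because the telemetry rides along as ordinary text tokens, a model that already handles vision plus language needs no new modality support to reason over it.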

Malhar Palkar
Software Principal, Ambarella

On the Cooper platform, we already announced VLM Gen as a tool. These workloads aren't constrained by time in the frames-per-second sense; instead, the prefill time, the time to first token, is what's critical, and we can use batching for it, which VLM Gen already supports, along with managing tokens per second. In edge infrastructure, the tokens per second, the decoding rate, is less important than the time to first token, where we can batch and take advantage of our architecture. So some of those tools have already been added to the Cooper Developer Platform.
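A back-of-envelope sketch of this trade-off, with illustrative numbers rather than Ambarella benchmarks: for edge-infrastructure queries that return short answers (an alert, a tag, a one-line description), time to first token dominates end-to-end latency, so batched prefill matters more than a high per-stream decode rate.

```python
def latency_s(prompt_tokens: int, output_tokens: int,
              prefill_tps: float, decode_tps: float) -> tuple[float, float]:
    # Return (time_to_first_token, decode_time) in seconds.
    return prompt_tokens / prefill_tps, output_tokens / decode_tps

# One camera frame -> a long vision-token prompt, but a short text answer.
ttft, decode = latency_s(prompt_tokens=4096, output_tokens=16,
                         prefill_tps=2000.0, decode_tps=20.0)
print(f"unbatched: TTFT={ttft:.2f}s, decode={decode:.2f}s")  # prefill dominates

# Prefill is compute-bound and batches well; assume four cameras prefilled
# together at ~3x aggregate prefill throughput (illustrative only).
ttft_b, decode_b = latency_s(prompt_tokens=4 * 4096, output_tokens=16,
                             prefill_tps=6000.0, decode_tps=20.0)
print(f"batch of 4: shared TTFT={ttft_b:.2f}s "
      f"(~{ttft_b / 4:.2f}s of prefill per camera), decode={decode_b:.2f}s")
```

With these numbers, prefill cost per camera drops from about 2.0 s to about 0.7 s in a batch of four, while the short decode phase barely changes, which is the motivation for batched prefill that Malhar describes.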

Fermi Wang
Co-Founder, President, and CEO, Ambarella

Thanks, Louis.

Louis Gerhardy
VP of Corporate Development, Ambarella

Any other questions in the room? No? All right, please join us outside for the reception. One other thing: I want to mention Jerome, who heads our IoT marketing, and Jason Huang. Are you here, Jason? If not, he'll be outside. He's responsible for automotive marketing and system solutions as well. They'll both be available at the reception. Thank you.
