Hi, I'm Alex Platt, analyst here at Davidson. I co-lead coverage of Blaize with Gil Luria, who, unfortunately, due to some unforeseen circumstances, was unable to make it today, so I'll be filling in for him. We've got some great guests here from Blaize: the CEO, Dinakar, and the CFO, Harminder. Gentlemen, thank you for joining us here in Nashville.
Thank you. Thanks for your time.
Of course. Just to kick things off, for a lot of the audience, could you briefly introduce Blaize for us and, at a high level, emphasize how you differentiate yourself in the current AI accelerator market?
Sure. Blaize is an AI Edge computing company. The core of our innovation is a novel processor that we have built from the ground up, suitable for Edge applications. The software layer on top helps applications run seamlessly on Blaize platforms. Together, the combination of hardware plus software is what we productize. Our initial chosen markets are defense and smart cities, and longer term, automobility. To answer your second question, how do we differentiate ourselves? What we have built here is something very special, suited to the Edge market. If you look at computing history, there were CPUs, very good at operating systems. Then came GPUs for gaming. Absent a purpose-built processor, a lot of AI today runs on GPUs. While that might be fine for the data center, the Edge has very different requirements.
Things like cost efficiency, power efficiency, lower latency, because often these are real-time applications like cars, robotics, and so on. What we have built, foundationally, is a processor architecture that excels at all of these to deliver customer value. That's the key differentiation. We have a patented secret sauce called the Graph Streaming Processor. That's the reason why we excel at these workloads.
That was great. As we look forward, let's call it three to five years, how do you expect this budding Edge AI market to evolve? And what key factors are going to separate the leaders in this market from the rest?
Initially, our chosen applications are around cameras. The reason we went after cameras is that, if you look at the world, there are about a billion cameras already installed out there. Right now, these cameras are essentially recording devices. There is no AI, nothing smart. With cameras installed across the globe, their owners, be it defense entities or city municipalities, are asking: how do I make my cameras smarter, whether for security, for safety, or for a better citizen experience, different use cases? AI helps there. It is not about ripping out your existing cameras; you leave them in place and add a box with a Blaize processor. It could be a server, it could be an industrial-PC kind of box, plus software to provide rich analytics right behind it, in real time.
Now, before three to five years, in the near term, what we are witnessing, and this is happening now, not three years out, is a massive uptake in making cameras smarter. Smarter could mean, just to give you a few examples in the case of cities, things like traffic management. These are the bread-and-butter use cases: traffic management, number-plate recognition of cars, automatic traffic fines, but also things like citizen safety, first-responder services dispatched to a location based on a visual cue. These kinds of use cases are already happening. Oh, sorry, before we get to the next generation, let me cover defense. Defense is all about massive country borders, thousands and thousands of miles, and they have cameras installed.
They want to detect, let's say, tiny drones in the sky entering the perimeter, or boats in the water trying to enter, accurately flag them, and alert the authorities. These are use cases happening as we speak, and Blaize is excited to be part of all of them. The next generation, and we're talking about the next year or two, is true multimodal AI coming into place. It is not just visual inspection. It is also about creating a summary: hey, what is happening here at 9:00 A.M.? The traffic on the street is X, Y, Z. Mr. City Planner, consider diverting the traffic, or send somebody there because there is a crash, et cetera. There is an actual element of intelligence and automation coming in.
These are some of the things we're seeing in the next couple of years. If you ask about five years, I think we'll see truly independent assessment of what's going on and action being taken, with maybe just a "hey, FYI, I've taken this action." I do see that happening. I guess the key is autonomously and seamlessly handling any use case, be it traffic management or security: giving more autonomy to individual cameras or servers, within a defined scope of what they're able to do, and then letting them interact. That's something that's happening. Would you like to add?
Yeah. I want to add one more thing: we're already seeing the trend of these LLMs becoming smaller and smaller, without the accuracy being compromised. The architecture you need to run those small language models at the Edge is also one of the areas where we think Blaize is going to be well placed as we look into the future.
That was great. I think we'll touch later on some of the small, more efficient models you just mentioned. There's an immense focus today on large GPU clusters powering these massive LLMs, Claude 3 Opus being a very recent example. On the flip side, though, what are some common misconceptions investors typically have about Edge or on-device AI?
A couple of misconceptions. One of the major ones is IoT versus Edge as we see it in this AI space. IoT was more about having a sensor, piping all the data back to the cloud, and doing the processing in the cloud. While that might be OK for non-real-time applications, the moment you want real-time, actionable insights, in the security space or in automotive or robotics, there is often a need to do all of the processing right there at the Edge. That is becoming more and more prevalent and required. That is one big misconception. The second misconception we witness is that GPUs are great for AI training, but not all real-world, physical-world AI problems can be solved by GPUs. Why? Because of total cost of ownership.
Often the projects out at the Edge, in the field, in the physical world, it could be factories, smart cities, defense, et cetera, have constraints in terms of power efficiency; these are battery-operated devices on a lamppost or in a car. The second thing is latency, real-timeness: a car driving at 80 miles an hour needs to make a response. The third is that even if it's a server sitting in some on-prem location, there's a total-cost-of-ownership advantage that a city is looking for. Let's say it's a $100 million project for a city, and the majority of the budget is going to GPUs. The approach that cities are taking is hybridization.
I'm going to do whatever I need to do on GPUs, but I'm going to complement it with Blaize servers, which are more inference-oriented and real-time and carry a total-cost-of-ownership advantage. By hybridizing, they're able to achieve their capital-expenditure as well as operational-cost goals. These are some of the things that we're witnessing.
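To put rough numbers on that hybridization argument, here is a minimal back-of-the-envelope sketch in Python. Every figure in it, the stream count, the per-server prices, the streams each server can handle, and the 90/10 split, is an invented assumption for illustration, not Blaize or vendor data; the only point is how the CapEx line moves when most streams are served by cheaper inference-oriented servers.

```python
# Back-of-the-envelope CapEx comparison for an all-GPU deployment
# versus a hybrid split. Every number is an invented assumption
# for illustration, not Blaize or vendor pricing.

STREAMS = 10_000                        # camera feeds in the project
GPU_COST, GPU_STREAMS = 250_000, 500    # assumed $/server, streams/server
EDGE_COST, EDGE_STREAMS = 40_000, 500   # assumed inference-server figures

# Option A: serve every stream from GPU servers.
all_gpu = (STREAMS // GPU_STREAMS) * GPU_COST

# Option B: 90% of streams on inference servers, 10% kept on GPUs
# for the heaviest workloads.
hybrid = (STREAMS * 9 // 10 // EDGE_STREAMS) * EDGE_COST \
       + (STREAMS // 10 // GPU_STREAMS) * GPU_COST

print(f"all-GPU CapEx: ${all_gpu:,}")   # $5,000,000
print(f"hybrid CapEx:  ${hybrid:,}")    # $1,220,000
```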
One other misconception is programmability versus fixed function. A lot of early adopters say, I just want to count people in and out, so I'll use a simple fixed-function chip for that. What we are finding is that as they start to deploy a solution using a GPU, it becomes expensive, as Dinakar has just mentioned. When we qualify customers, we are looking for customers that have multiple problems that can be solved with the same camera frame, for example, or with different sensors that can be processed. I think as more and more people start to see Blaize products deployed out there, some of these misconceptions will be debunked, if you like.
That was great. Maybe we can step back a little, because I think everyone in here understands what a data center GPU is; Gil and I talk all the time about NVIDIA GPUs. Could you explain where, for certain Edge use cases, you see these data center GPUs falling short? That might be an opportunity to go a little deeper into your Graph Streaming Processor and the advantages you're seeing for the GSP over traditional GPU architectures: power consumption, latency, those kinds of characteristics.
An anecdotal example: we were working with a South American company focused on smart-retail use cases. They had GPU servers where each GPU consumes close to 500-600 watts, and the CapEx was quite high. They were actually able to replace four such GPU servers with one Blaize server, which had about 25 cards. This is a real example. The reason they were able to do that is that the particular use case was focused on incoming video streams. It needed certain video analytics: draw a virtual line and see how many people are coming in and out of the store, et cetera. Use cases that are programmable, but with video, computer vision, and a real-time element to them.
In such use cases, you can have thousands of video streams coming into a Blaize rack. The ratio was almost a 4:1 replacement: four GPU servers replaced with one. That is a real example. Now, why this is possible comes down to the secret sauce I mentioned earlier, our patented processor architecture called the Graph Streaming Processor. GPUs were built for data-parallel use cases. You had many pixels on a screen, and you were running a video game; all the mathematical equations required were run across all the pixels on the screen in parallel. That is what GPUs excel at. When you come to AI, it is a new kind of computing. You have seen pictures on the internet of the layers of compute that comprise AI algorithms.
What we at Blaize have uniquely done is, just as GPUs parallelized across pixels, figure out how to parallelize across all of these layers of compute in an efficient manner. Often, AI is a memory-bottlenecked problem. There is a compute bottleneck, but memory is also a very precious quantity when it comes to AI. By being able to run these layers of AI algorithms in parallel, we have cracked the code on how to run AI with less memory and less compute. That is the secret sauce we call Graph Streaming, and that is why we are more efficient than GPUs.
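A rough way to picture that memory argument is the sketch below: a minimal Python illustration of the general streaming idea, not Blaize's actual GSP scheduler, which is proprietary hardware. The point is simply that pipelining small tiles through the whole layer graph keeps roughly one tile's worth of activations live, instead of materializing every full intermediate tensor.

```python
# Illustrative contrast only; not Blaize's implementation. 'layers'
# are plain callables and 'tiles' are small chunks of an input frame.

def layer_at_a_time(frame, layers):
    """Baseline: each layer finishes over the whole frame before the
    next starts, so every full intermediate tensor is held in memory."""
    activations = frame
    for layer in layers:
        activations = layer(activations)
    return activations

def streamed(tiles, layers):
    """Streaming style: each tile flows through the whole layer graph
    before the next tile starts, so peak memory is about one tile."""
    for tile in tiles:
        for layer in layers:
            tile = layer(tile)
        yield tile

# Toy usage: two 'layers' applied to a frame split into three tiles.
layers = [lambda x: x * 2, lambda x: x + 1]
print(list(streamed([1, 2, 3], layers)))  # [3, 5, 7]
```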
I think you've said it. The way I think about it is this: the Blaize chip is able to manage multiple workloads at the same time, almost like how our brains think, whereas a GPU will do one thing a million times very efficiently. Both can do the job for AI, but like for like, for the same set of workloads, a Blaize chip need only be a third of the size, and therefore probably consumes a fifth less power and is probably 40%-50% more efficient, just because we're able to fully utilize the chip. What workloads at the Edge are demonstrating is that you don't have time to process and save to memory. You need it all happening there and then.
Maybe sticking with the Graph Streaming Processor and your AI accelerator for Edge use cases: are you seeing anyone else in the market competing with you for these specific use cases? Is NVIDIA making a chip for automotive, or is someone else making a chip for cameras, for security, or for defense? Is anyone currently offering something very similar to what you guys are?
If you look at the use cases, there are, of course, companies building chips for this market. That is a good thing, I think; it means there is a market. How they are doing it is very different. At Blaize, we picked our foundations deliberately: first, make sure you are able to do AI in a significantly more efficient manner, which is what I described earlier, lower memory and lower power consumption, et cetera, and a total-cost-of-ownership benefit, while making sure the solution is very programmable. The reason programmability is so key in a processor is that you are building a chip now, and the time it takes to build a chip is typically three-plus years. If you look at the rate of change of AI algorithms, it is changing every six months.
Every six months there is a new flavor of AI, a new thing. There are language models, with newer versions coming out. If you've targeted a very specific AI algorithm, then by the time your chip comes out, it's outdated. That's the world we're in. A lot of efforts out there have chosen fixed function: hey, I can do a particular YOLO model or a ResNet model really well. We've taken the opposite approach, making sure the solution is flexible, because even once you deploy it in the field, AI algorithms keep improving, and you need to be able to upgrade the software and bring the customer value to life. These are some of the foundational pillars of why we're different: efficiency, full programmability, ease of use, the software tooling.
Software tooling is actually a very key aspect. Especially out at the Edge, you do not have experts like Google and Facebook. You have city planners. You have factory-floor workers. You have organizations with, let's say, a limited number of data scientists building defense applications, et cetera. Making sure the software tooling is so easy that with mouse clicks they are able to deploy an application and maintain it is critical. We have built a software platform called AI Studio, which makes the complete AI journey seamless, end to end, for the customers. These are some of the key differentiators we have compared to the competition.
I think the only thing I would add is that customers used to come to us and ask, what's the specification of your chip? Do you have MLPerf numbers? Because that's how I can compare you to the NVIDIA part. They pretty soon realized, two or three months later, that that's irrelevant. It's what your total solution is doing for you, on a thermal basis, on a power basis. Today, when we qualify deals, we mainly come across NVIDIA, because they are the only other programmable chip. That's fine with us, because we know that for the complex problems at the Edge, our programmable architecture and software tools are the ones customers will find easier to use.
Maybe just switching gears now. You mentioned several use cases: automotive, defense, smart cities, industrial automation. Are any of these markets or use cases still being underappreciated by the market for their AI disruption potential? And looking beyond, are these the largest markets that will still be there in three to five years? Or as AI progresses, do you think other markets will pop up that you can go after in the next few years as well?
I do think we're at an early stage of a massive exponential curve of adoption across all these markets. We're already witnessing a lot of uptake in camera-based use cases. For every camera, it's no longer in doubt; they all want to make them smarter, and how to do that is the thing happening right now. Coming to defense, the spending and the budgets are allocated, and they want to put them to use. It is a good market for us to be in. Coming to smart cities, traffic management and similar use cases are the initial pieces. There are already projects coming in, and within a certain CapEx and OpEx, they want to get operational. Now, to the second part of the question: where else would it go, and how do we see this growing?
It's pretty universal, I guess: every place where there's any kind of sensor generating data, there is a possibility for AI. It could be a range of things, from X-ray machines in hospitals to patient health care. By the way, we are part of some of these use cases as well: elder care, nursing homes, et cetera, where you're trying to prevent falls. Also industrial automation: on every factory floor there are multiple thermal sensors, et cetera. How do you use the data from different sensors for predictive maintenance on factory floors, for example? It's a very real use case. Being told two years ahead of time, hey, service the machine, is pointless. And if the machine is already broken, it's pointless.
But if, within a two-week window, you're able to predict that the machine is going to have downtime, that saves companies a lot of budget. Every place where data is being generated, AI is happening. Making it seamless and easy to adopt is the mantra, and I think that's what we're focused on.
One point: automotive for us is towards the end of the decade. OEMs and Tier 1s are working today on these ADAS L3-plus and L4 systems, but those are not going to get into production at scale until 2030-plus. The other point I would add to what Dinakar said is that in these verticals that already exist, it's going to be much easier to go deeper. If you imagine a large retail store deploying AI as one of the early adopters, then in two or three years' time a convenience store will be able to afford a little black box with some models. You pay $20 per camera per month, and you get a host of AI.
I think that's also the trend you'll start to see: just as flying became cheaper so that a lot more people could fly around the world, that's what we'll see with AI.
I guess let's switch gears again and talk about models. You touched on this briefly, but we're seeing a lot of rapid advancement in AI models, and they're significantly changing our expectations around inference at the Edge. How have developments in model efficiency and performance reshaped customer demands for your hardware specifically?
Models are evolving rapidly, and becoming more domain-specific is, I would say, the way to go. In the cloud, you have massive, I-can-do-it-all models, and that's great for the cloud. When you come back to the Edge, you want to solve very specific problems a lot more efficiently than traditional approaches. This is where a model refined to a particular use case, more accurate, comes in. At the same time, it has to be compute-efficient. What we are also witnessing, of course, is true multimodality, just like the human brain: it can see, it can perceive, it can hear. There is an element of speech.
It's about getting all of these aspects into an AI model and having hardware, and this comes back to the programmability of Blaize platforms, on which such multimodal AI can run efficiently and create a use case. I think that's what we're seeing. On accuracy, let's put it this way: for every model that OpenAI or any of these companies puts out, there is the 600-billion-parameter model, massive in scale, where you can do your ChatGPTs of the world. Then there's a 70-billion-parameter version, a 7-billion, and a 1.5-billion. We're now at a stage where the 7-to-8-billion-parameter model is probably the most downloaded model on the internet compared to any of these flavors, and its accuracy is within a few percentage points of the big, gigantic one.
That model has become practically more usable, more deployable, because it's smaller in size. These are some of the things we're seeing: smaller, efficient models, almost as accurate, solving particular problems for the world, able to run on a more nimble server, and creating customer value.
I was just going to say, you have a very good description of DeepSeek's mixture of experts and how our architecture might be very well suited to it. Talk about that.
Absolutely. Another thing: when you have these large, I-can-do-it-all models, that's very valuable for the cloud. But imagine you have a question like, hey, what is two plus three? You don't need to wake up an entire large AI model. What model builders have done is train their models as something known as a mixture of experts where, to simplify it, they have a math-tutor equivalent of a model, a lawyer model, a doctor model. You just wake up the math-tutor model and say, hey, what is two plus three? In maybe a thousandth of the power, you get a quick response saying it's five. When you have that, you need a different kind of computing, and Blaize's processor architecture is more suitable for this mixture-of-experts kind of compute.
That is why there is another shift: rather than running it all, can you run smaller pieces of the model more efficiently, especially for the Edge? These are trends we are seeing, and we are very well positioned to take advantage of them.
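For readers who want the mixture-of-experts idea in code, here is a deliberately toy sketch. The keyword router and canned experts stand in for a learned gating network and real expert sub-networks; the one behavior being illustrated is that a query wakes up a single small expert rather than the whole model.

```python
# Toy mixture-of-experts dispatch; illustration only, not a real model.

EXPERTS = {
    "math":   lambda q: "five" if "two plus three" in q else "(math answer)",
    "law":    lambda q: "(legal answer)",
    "doctor": lambda q: "(medical answer)",
}

def route(query: str) -> str:
    """Stand-in for a learned gating network: pick one expert."""
    if any(w in query for w in ("plus", "minus", "sum")):
        return "math"
    if any(w in query for w in ("contract", "statute")):
        return "law"
    return "doctor"

def answer(query: str) -> str:
    # Only the selected expert "wakes up"; the rest stay idle,
    # which is where the power saving in the anecdote comes from.
    return EXPERTS[route(query)](query)

print(answer("hey, what is two plus three?"))  # -> five
```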
No, that was great. You touched on this briefly when you mentioned domain-specific models. For a lot of these Edge use cases, do you expect that we will not be using OpenAI's general-purpose models? Will someone like OpenAI or Anthropic have to decrease the parameter count and create smaller general-purpose models? Or, as you mentioned, will we just be using highly specialized, domain-specific models that maybe come from other, smaller labs, and not OpenAI or Anthropic at all?
For all the wonderful work being done on truly generic models, I do think that when you go into use cases that are very particular, predictive maintenance, for example, or security for a defense problem, you do need to specialize, because the accuracy needs to be very high for that specific use case and that specific data. Then there's total cost of ownership: if you can do all of this in a fraction of the CapEx and OpEx budgets, those solutions will be picked. And a lot of these industries are very sensitive about their data. They will not want it all running in the cloud; they want air-gapped solutions running on-prem, and so on.
For a variety of reasons, I do see that the one-gigantic-model, send-it-all-to-the-cloud approach will not work: for security reasons, for data-sovereign AI, et cetera. Being able to do all of this in a smaller power and cost envelope, and to do it more accurately, targeted at specific use cases, these are the fundamental reasons why I feel domain-specific will happen. It is already underway. Of course, the same companies are participating, saying, with all the work I have done, I am going to create a smaller model for this. That creates collaboration opportunities for us: with our hardware layer, we can accelerate models from OpenAI or whoever else, if they are of that size. That is how I see the world progressing.
Yeah, I can't add any more to that.
Yeah, maybe playing devil's advocate on the other end of the small-models argument: a lot of the big improvement DeepSeek made with R1 was its very low memory consumption, and I think you had a really great explanation for why DeepSeek R1 was a great advancement for your hardware. Could it be that, call it a year or two from now, we'll be able to use larger-parameter models that simply consume less memory? Or do you think it'll be a combination of both, small parameter counts and low memory consumption? Could we have a 500-billion-parameter model running on Blaize hardware with the same memory consumption as a 1.5-billion-parameter model?
I think it will be a combination of all of what you described. We also see the world taking a hybrid approach. What I mean by that: today we're working with city governments where the project is such that the simpler, nimble AI, as you might call it, just runs on Blaize all day long. Imagine, let's say, a video-security application. You're looking at an image and asking, who are the people in this vicinity? What are they doing? OK, there's somebody with something in their hand. They picked it up. I don't know if it's a phone or a detonator. It's come to that. OK, freeze that frame. Now take that frame and blow it up.
Then, in a very accurate manner, with very high compute, you run what is known as a visual language model, a VLM. That is what you put on a higher-end GPU. 98% of the time you are running on Blaize; for the other 2%, you send it to the GPU: tell me whether this is an actual phone or a detonator. That is hybrid: you run what you can on Blaize and the rest on the GPU. That is how you make the project CapEx- and OpEx-efficient. We are seeing such trends, and we believe this will keep accelerating.
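A sketch of that escalation pattern is below. The function names, stubbed results, and 0.9 confidence threshold are hypothetical placeholders, not Blaize or vendor APIs; what it shows is the control flow in which the Edge box answers most frames and only low-confidence ones are escalated to a GPU-hosted VLM.

```python
# Hybrid Edge/GPU escalation, sketched with stubbed-out model calls.

def edge_detect(frame):
    """Runs on the Edge box; returns (label, confidence). Stubbed."""
    return ("person holding object", 0.62)   # dummy result

def vlm_inspect(frame):
    """Expensive visual-language-model call on a remote GPU. Stubbed."""
    return "phone, not a detonator"          # dummy result

def process(frame, threshold=0.9):
    label, confidence = edge_detect(frame)
    if confidence >= threshold:
        return label               # the ~98% fast path, stays on the Edge
    return vlm_inspect(frame)      # the ~2% slow path, escalates to GPU

print(process("frame-0042"))  # low confidence here, so the VLM decides
```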
Right, that's great. That was the last one on models, I promise. Touching on software: you mentioned your AI Studio. We hear a lot about NVIDIA and how their hardware is not really the moat there; it's the software, it's CUDA, it's all their compilers. For Blaize, is your advantage in the hardware, specifically in the GSP? Or do you see the same advantage, or maybe even more, in the software as well, whether it's compilers, tooling, et cetera?
We took this approach years ago; it is one of our foundational pillars: programmable hardware. When you make programmable hardware, it is only as useful as your software, because the software is the layer communicating with real-world applications. We have invested significantly in our compilers and our software layers, in making them seamless and easily adoptable. Efficient hardware plus easy-to-use software, like the Macintosh of AI, is what we have perfected; we have our secret sauce in the hardware as well as in the ease-of-use software. You are right, NVIDIA has a strong moat in the ecosystem they have created over the last 20-30 years. But customers are moving to PyTorch and all the newer AI efforts, and they do want independence from any specific hardware architecture.
They want to pick and choose, for the reasons I mentioned previously, the CapEx and OpEx reasons, et cetera. That gives a truly programmable processor with easy-to-use software an opportunity to actually get into these areas. We are securing our initial wins, as we are witnessing. That is one of the ways I see it.
I don't know whether you mentioned it, but ours is built on open standards, so there is no CUDA lock-in. Of course, there are plenty of CUDA developers around. There's a real story, not even an anecdote, of a retinal surgeon. There's a particular disease, something that will make you go blind, that even he could identify only about 60% of the time, and he wanted to use AI. This was about a year to 18 months ago. He taught himself CUDA and how to code, but he still struggled to get the AI to work. I don't know how we came across each other, but within six weeks he was using the AI Studio platform, which is drag and drop, taking a pre-trained model and customizing it on his own data. His accuracy went up to 82%-83%.
That's just one of those things we'll see a lot more of, because of the domain expert. Take me: I'm in finance. When Microsoft Excel pivot tables arrived, I didn't really care what happened behind the scenes; I could use the data and make sense of it. Before that, I had to have somebody write me a Visual Basic query on Access. That's what AI Studio is for AI.
No, that was a great example. Maybe, Harminder, sticking with you and switching gears: could you touch on your current go-to-market strategy? What does that look like? You've both talked about use cases, but what's the profile of customers you're currently seeing? What kind of customers are you going after right now?
Right. First of all, there are two broad paths for us. Number one is where we talk directly to the customers; the defense entities and so on fall into that category. What does that mean? It means that Blaize is responsible for more than just the AI: we work with partners to bring servers, and we bring other applications along so that the customer gets one seamless solution. The other path is working through what we call ISVs, or ecosystem partners. These are software companies that may specialize in smart cities, in retail, or in drones. They are the ones that may be using, or trying to use, NVIDIA today. We work with them, using a beachhead customer to port across their application so it runs efficiently on Blaize.
When we look at our pipeline today, which is just starting to convert, that conversion is going to accelerate. These beachhead customers might take us nine months, might take us twelve, to work with and get deployed. Once that's done, the 2nd, the 10th, the 100th customer is much, much faster to market, because all you're doing is a little more customization around the edges. We'll still see deals where the end user contracts with us directly, but a lot of it is going to come through these software partners.
Where does Blaize currently stand regarding revenue generation and the growth trajectory for the remainder of the year? Maybe you can touch on the high-profile Gulf-state engagement that you have. How should investors think about the growth potential in the medium term, given the market dynamics you've been talking about?
When we did our Q1 earnings, we provided a range for this year. It is a wider range, and part of that is just the pace at which some of these big deals we are working on get deployed. We talked earlier about being very deliberate in qualifying our pipeline. If somebody comes in with a single use case, say counting how many people come in, which you could serve with a fixed-function chip like a Hailo, generally we do not deal with those. The pipeline is growing, and it is very well qualified. We have more and more ISVs now that have their models running on Blaize. For 2025, particularly the back end of this year, we are confident that we will be in the range that we gave.
From 2026 onwards, with some of the things Dinakar was talking about in terms of adoption and ease of use, we certainly see an acceleration. Then in late 2027, early 2028, when our next-generation chip is in the market, we will see another hockey stick of growth, because it opens up another part of the market to us. And of course automotive, towards the end of the decade, is another hockey stick. Those are the steps we see in revenue growth.
You mentioned the hockey sticks and the next-generation chip. This is the last question, I guess, for both of you. Where do you envision Blaize's role in the AI ecosystem, call it 5 or 10 years from now, especially in the Edge market? Maybe you can then touch more on next-generation hardware and those other hockey-stick opportunities.
I can go first, and Harminder can add. If you look at ChatGPT, which really created AI in the cloud and all of these trillion-dollar companies, outside of the data center there's about a million times more data. It is under-addressed, for various reasons. We have foundational technology that is actually helping our customers productize AI, and I'm not talking about trials or proofs of concept, but real commercial deployments. This will only accelerate. We feel we're at the point where, from a hardware-plus-software and partnerships point of view, the company is making this all come to real value for customers, whether in operational efficiency or in use cases, like the doctor example, that they otherwise could not do.
The opportunity is multi-fold from where we are over the next few years. The processor-level innovation is there, and we have a rich roadmap; that will continue. In terms of software, we will partner with end-application companies to help their software run on our hardware. We'll also partner with the likes of OpenAI and others who are doing all this work for the cloud: how do you condense their models and run them in a seamless fashion at the Edge? That is another key area and growth driver we will be seeing. Those are the kinds of things happening, and how Blaize can actually create value for our customers. I think that's how we see it.
Yeah, no, I think you've said it. The key for us: today we have revenue from shipping cards to hardware manufacturers. We are moving towards shipping full servers powered by Blaize. Then there's the collaboration Dinakar is talking about with software companies, where we're already discussing a revenue share: if an ISV is charging X dollars per camera per month, we hope to get a share of that. Looking into the future, Blaize is going to be very much part of this ecosystem, with the recurring software part probably becoming a bigger and bigger portion. Because even today, if you recall, we said customers used to ask, what is your specification versus an NVIDIA Jetson? Nobody cares about that anymore; they want a system that works. With the software portion of the revenue and the partnerships Dinakar mentioned, Blaize will be very much part of this ecosystem of the future.
OK, no, that was great. That's all I have. Dinakar, Harminder, thank you for joining us. Thank you for your time.
Thanks.