Liliin Xu. I also have Didier Lasserre. He's the VP of Sales and Investor Relations. This will be conducted as a presentation followed by Q&A. If you would like to submit a question, you can do so in the Q&A function at the bottom of your screen. With that, I'll hand it over to you guys. Welcome.
Thank you, Anya. Good afternoon, and thank you for joining us. I'm going to quickly go through a little bit of where the company is today, but I'll be focusing on the future. Certainly, we'll have to put the Safe Harbor statement in there. The company was started 30 years ago by Liliin Xu, our President and CEO. We went public in 2007. Since day one, we've been partnered with TSMC; we get all of our wafers from TSMC. We started the company with, and continue to offer, the highest density, highest performance memories in the market. This product line is what's funding our AI effort, the APU family. We like to refer to ourselves as a self-funding AI company. We've invested over $150 million of our own money into this APU development. I'll talk about the roadmap as well.
We finished the fiscal 2025 year at $20.5 million. We have 125 employees worldwide. We try to outsource all the labor-intensive positions like wafer fab, assembly, sales, and functions like that. The majority of our employees are engineers. We have spent a tremendous amount of effort on patents. This is important because, as you'll see in the presentation, we have a very unique architecture for our AI chip, and we want to make sure it's protected. We have $13.4 million in cash and cash equivalents, and we've never carried debt in the company. Market cap is just under $100 million, and we have fairly large insider ownership of 27% right now. Quickly looking at our legacy product line, the SRAM area: the standalone SRAM division here is very profitable.
We've had the majority of our growth coming from the SigmaQuad family. What's nice about the SigmaQuad family is that, between the density and the performance, we're sole sourced. So we were able to get a lot of design wins without any competition, which allows us to keep ASPs and margins high. I'll also talk about how we've taken this family and migrated it to radiation hardened and radiation tolerant versions for the space industry. The two areas we're expanding into are aerospace and AI, specifically edge inference with our AI chip. What's important to understand is that both of these are highly leveraged from our expertise in SRAM. As I mentioned, the rad tolerant devices are a derivative of our current product line, while the AI chip, our APU, is built in an SRAM cell.
Certainly, we have a lot of synergy between what we've done and where we're going. Both of these are growing markets. If you look at the space TAM, it's growing at just under 10% CAGR. The AI industry is just going crazy; it's certainly looking at tremendous growth at over 20% CAGR. This is the last slide where we'll talk about SRAMs. This is the radiation hardened and tolerant family I mentioned. What's nice about this market is that it's really difficult to make a robust part for space. Therefore, you get rewarded with high ASPs and high gross margins. To give you a feel for what the ASPs look like, our highest density commercial part today sells for roughly between $250 and $300.
You harden that same part for a rad-hard space environment, and you get upwards of $30,000 of ASP per device. For the rad tolerant version, which is a little less robust, you still get $3,000-$4,000 per device. Obviously, every time we ship a dollar of revenue in this market, a high percentage of it goes to the bottom line. The market opportunity we're looking at is about $100 million, and it's our intent to get a minimum of 10%-20% of this market over time. The rest of the talk will be about our APU, our AI chip. APU stands for Associative Processing Unit. What's really unique about this technology is that it's true compute in memory.
A lot of folks talk about CIM, and they toss the acronym around very loosely. What most folks are describing is, at best, near-memory processing. I'll talk about the differences in a bit. We also have a tremendous number of memory bits, which allows us to have a hugely parallel processing system. We have two families available now, Gemini one and Gemini two, and our roadmap is Plato; I'll get into details on all of those. If you look at how we compare to a GPU and a CPU, there are some real fundamental differences. First of all, I mentioned the number of bit processors. If you look at a CPU, it has dozens of ALUs, which are arithmetic logic units. GPUs have thousands.
GPUs were really adopted for AI because they are "massively parallel." You go to our APU architecture, and on our first generation device we have 2 million bit processors. That is extreme parallelism compared to a GPU or a CPU. That is one of the differences. The major difference is really in the way we are architected. GPUs and CPUs are built on what's called the Von Neumann model. If they have a task to do, a computation or something, they have to go off chip to fetch the data from memory, bring it back, do the processing, and when they are through, write the data back to memory. There is a constant back and forth flow of data being transferred.
With our architecture, the memory and the processing bits are in the same place. We do not go fetch data and bring it back; the data is there. We use the data for whatever function, whether it is a search or a computation, and when we are through using it, it remains there. That saves you all of the power of transferring data back and forth. The other interesting part of our solution is that we are a bit processing part, which means, as I mentioned, we have two million bits. You can organize those any way you want. You can have a one-bit machine, a 10-bit machine, any number between one and two million. That is important because a GPU is hard-coded.
You're buying a 16-bit GPU, or an eight-bit one, or whatever. When you get one of those, you're limited to that. With ours, the customer determines what the resolution or bit width is. On top of that, they can change it from cycle to cycle. In other words, if they need four bits now but six bits on the next cycle, they can change to six on the next cycle, not a problem. That really allows us to future-proof our solution. What I mean by that is that researchers are putting a tremendous amount of effort into optimizing models and use cases, and they're finding that different resolutions or bit widths are the most efficient mechanism.
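To make the cycle-to-cycle bit-width flexibility concrete, here is a minimal toy sketch. This is not the APU's actual programming interface, just an illustration of what runtime-configurable precision means versus an engine whose width is fixed at design time:

```python
# Illustrative sketch only: not GSI's API, just a toy model of per-cycle
# configurable precision. A hard-coded 8- or 16-bit engine fixes `bits`
# at design time; a bit-level engine can pick it on every cycle.

def quantize(value, bits):
    """Clamp a value into the signed range representable with `bits` bits."""
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    return max(lo, min(hi, int(round(value))))

# Cycle 1: the workload calls for 4-bit precision...
assert quantize(100, 4) == 7    # saturates at the 4-bit signed max of +7
# ...and the very next cycle can switch to 6-bit, no new hardware needed.
assert quantize(100, 6) == 31   # saturates at the 6-bit signed max of +31
assert quantize(12, 6) == 12    # in-range values pass through unchanged
```

The point of the sketch is only that `bits` is an ordinary runtime argument rather than a property baked into the silicon.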
For us, we do not really care what that resolution is. It can be anything: five-bit, 12-bit, pick a number. We are able to address that with our device immediately. What is also interesting about our solution is that, the way we are architected, it almost resembles a memory. As you know, with memories, you can add them together, cascade them together. We can scale our solution the same way. We can put multiple APU boards in a server and have them act as one. That scalability is certainly important when you see the sizes of some of these models and databases. Diving a little into the families: Gemini one was our first. It was really our way to illustrate our unique technology, but we do want to monetize it.
We're looking at a couple of niche markets, specifically fast vector search, on-Earth SAR image creation, and database index builds. In case some of you don't know, SAR stands for Synthetic Aperture Radar. It's similar to, say, LiDAR, which you might be familiar with, but instead of using light, it uses microwaves. It works well at night and through weather. In fact, if you turn on CNN right now and see images of something like the war in Ukraine, the images are most likely SAR-generated. Gemini two actually saw first silicon last year, and first silicon looked fantastic. It had a few bugs, but we were able to do software workarounds on all of those.
We will see our finalized second silicon chip, which we think will be production-worthy, within the next couple of weeks. This device will address similar markets, in that we'll be able to do SAR with Gemini two as well. The difference, though, is that we're looking at onboard SAR. As I mentioned, Gemini one is for processing done in a data center or somewhere on Earth, while with Gemini two, we're able to put that technology on the satellite or the drone itself. That is important because currently you're taking images and data from the satellite, sending it down to Earth to create the images, and only at that point making a decision.
Now you can do the image creation on the satellite itself and take some kind of action immediately without having to come down to Earth. Lastly is Plato, our next generation chip. Plato is actually going to address a different market. We are looking at multimodal Gen AI along with large language models, and we are really looking at them on the edge. What I mean by that is that when most people think of LLMs, they think of ChatGPT and these huge models sitting in data centers. The GPUs being used to address those today draw hundreds of watts, and some of the newest ones kilowatts. With Plato, we are looking to do these LLMs at the edge.
A high power budget is not an option there. We're targeting Plato to be sub 10 W. This is something that could actually be powered by a battery if necessary. Some of the early, interesting discussions we've had with customers revolve around areas like drones, automotive, and surveillance. Drones are actually especially interesting to us because there are use cases where GPS navigation is being jammed, and now you have an issue. With Plato onboard, you can do some SAR imaging, and with that SAR imaging, you can see where you are on Earth by recognizing certain landmarks and navigate that way. That's what's called a no-GPS environment.
That's very important for our customers. The software frameworks for Gemini two and Plato are both going to be Python and TensorFlow. We should have the first release of the beginnings of our compiler stack in the next month or so. I'll go through this quickly because I've covered it already: SAR on Earth for Gemini one, along with fast vector search. Just to give you a feel for the technology and how it works in real life, here is something that was done with our first generation Gemini one part.
Some engineers at AWS a couple of years ago asked, "What would it take hardware-wise to do a 1 billion data set search?" They said it would take 12 nodes of an Intel Xeon Platinum. At 200 W a node, that would be 2,400 W. We can do that same search with one Gemini chip that runs at 40 W. Candidly, we need a host, so you can throw in one Intel Xeon at 200 W, which puts us at 240 W, 10% of their power. Obviously, that would tremendously lower the operating costs of the system. That gives you a feel for the power of the technology. Gemini two, as we discussed, will be more on the edge, along with onboard satellite and drone applications.
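The power comparison works out as follows, using only the figures quoted in the talk:

```python
# Back-of-the-envelope check of the comparison above, using the figures
# quoted in the talk (the talk's numbers, not new measurements).
xeon_nodes, watts_per_node = 12, 200
xeon_total = xeon_nodes * watts_per_node   # 12-node Xeon Platinum cluster: 2,400 W
apu_total = 40 + 200                       # one Gemini chip (40 W) + one Xeon host (200 W)

assert xeon_total == 2400
assert apu_total == 240
assert apu_total / xeon_total == 0.10      # the APU setup draws 10% of the power
```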
We are in the process of going after some government funding to do the radiation testing for Gemini two, which will allow us to have all the test requirements done to put it in space. Turning to Plato: as I mentioned, it targets LLMs. What's interesting about our offering is that, as I mentioned, we have this bit processor, so we can get down to one bit or two bits or whatever your fancy is. There's a real push to do quantization in LLMs. These models are so large that they're becoming too cumbersome performance-wise. So people are taking new approaches: they're quantizing these LLMs and putting them in low precision formats, and they're able to do that to speed up the processing without compromising accuracy.
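As a rough illustration of the quantization push described here, symmetric integer quantization is one standard way LLM weights get mapped to few-bit formats. Nothing below is GSI-specific; it just shows what "low precision format" means in practice:

```python
# Hedged sketch, not GSI's method: symmetric per-tensor integer quantization,
# a common scheme for shrinking LLM weights to low-bit formats.
import numpy as np

def quantize_symmetric(weights, bits):
    """Map float weights onto signed `bits`-bit integer codes plus one float scale."""
    qmax = (1 << (bits - 1)) - 1                 # e.g. +7 for 4-bit
    scale = np.max(np.abs(weights)) / qmax       # one scale for the whole tensor
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

w = np.array([0.9, -0.4, 0.1, -0.88])
q4, s = quantize_symmetric(w, 4)                 # 4-bit codes (stored in int8 here)
recovered = q4 * s                               # dequantize back to approximate floats
assert np.max(np.abs(recovered - w)) < s         # error stays under one quantization step
```

The recovered weights are close enough for inference while each weight now occupies 4 bits instead of 32, which is why a bit-level engine that handles arbitrary widths maps naturally onto these formats.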
As we talked about, that's low precision, and we're a bit engine, so we fit in absolutely perfectly with that. As I mentioned, with our 10-W power, we're really going to be enabling LLMs at the edge. So what are the challenges, and where do we fit in with our APU? We talked about the high power consumption, and we talked about CIM eliminating the transferring of data so we can reduce power. As these models and databases get bigger, people are throwing more and more hardware at them. Scalability, as I mentioned, is a challenge with the GPU; with our SRAM memory-like architecture, we're able to scale very easily. With Gemini two and Plato, we're going to be bringing essentially data center performance to the edge.
Lastly, as I just discussed, with the density growth in LLMs, they're going to this quantized model, and with our single-bit structure, we're able to support that right off the bat. Where does the APU fit in our growth? We're looking to showcase and monetize SAR with Gemini one, which has also enabled us to make Gemini two successful for the actual onboard applications. We've also been very successful with SBIRs, which stands for Small Business Innovation Research. These are grants from the government, and I'll get into more details in some following slides. For Gemini two, as I mentioned, we have a low-power version, which will be for extreme mobile edge applications and satellites.
A quick snapshot of our financials. The revenues have been growing the last few quarters. A year ago, I want to say we were at about $4.5 million, so we've had some nice growth in the last year. The majority of this is attributed to the build-out of AI. What I mean by that is that these SRAMs don't go into data centers themselves; they support the ramp of GPUs for the AI build-out. Our largest customer now is KYEC, which we've just announced. They do burn-in, which is part of the manufacturing process for GPUs. Some of these new GPUs are starting to ramp, and they are building out burn-in systems to support those GPU launches. These systems require some high-end memories from us.
Another area where we're seeing significant growth is the emulation and simulation of some of these GPU hardware designs, and any hardware designs for that matter. These designs are very complex, and to save money, companies don't want to waste mask sets and so on, so they're doing a lot of emulation in software to emulate the hardware. One of these customers, who I'm sure will be a 10% customer for us that we'll announce shortly, uses our highest density, highest performing part for that emulation. We've taken some strategic measures to lower our costs; our operating costs came down to $5.6 million this past quarter. I mentioned earlier that we have $13.4 million in cash. We burned just over $1.5 million last quarter.
Certainly, if nothing improves at this point, we have about two years' worth of cash, but we're anticipating revenues growing, and we're looking to slow that burn and eliminate it in the future. I mentioned the SBIRs. In the last year-plus, we've actually won three of them: one with the Space Development Agency, one with the US Air Force Labs, and most recently one with the US Army. The first two were direct-to-Phase II awards; one is $1.25 million, and the other is $1.1 million. We do not treat this as revenue. We treat it as an offset to R&D costs. Think of it essentially as grants from the government to help pay for our design efforts.
Besides bringing in some grant dollars or R&D dollars into the company, what this has also given us is exposure to these DOD units that will allow us in the future to get some revenue out of these areas. Now we're continuously submitting new SBIRs. Right now, we have enough in the pipeline for about $6 million. Lastly, besides these SBIRs, we are also pursuing other funding sources within the government to help pay for some of the deployment and development of some of these future products. Just quick near-term milestones. Gemini two, I mentioned that we'll have the second silicon within the next couple of weeks. We'll also have the respun LiDAR board that these Gemini two parts go on. We'll have that in June. At that point, we have what we feel will be the production-worthy hardware.
This will also allow us to ship some of these boards to a few customers, including the Air Force, which is one of our SBIR milestones. One of the SBIRs was an algorithm development effort, specifically for YOLO, which stands for You Only Look Once. It is a way to identify certain objects very quickly: real-time object detection. We will be delivering those algorithms this quarter. Lastly, if you look at where we are, certainly $13.4 million is not what we are looking for. There are certain areas where we are looking to raise money. Plato is one of them; we are looking for funding for Plato. Plato, however, we are looking to fund via customer partners.
For Gemini two, we are obviously looking to launch and promote it, and we need extra resources for areas like software and compiler stacks. We are right now working with Needham and Company to raise some equity funds. With that said, we're also a publicly traded company, and we have a duty to our shareholders. We are looking at other avenues as well, such as a possible spinoff of assets or IP licensing, and at this point, we would be open to mergers and acquisitions. Our first choice, obviously, is to raise some funds and be successful on our own, but we certainly are open. At this point, I'd like to open it up to questions and answers.
Okay. Thank you so much. That was a good overview.
We have a couple of questions here from the audience. First, can you talk about the deal pipeline? How are you developing sales into hyperscalers and defense integrators?
Yeah. Candidly, we've had the discussions; we've talked about this in the past with the hyperscalers. They're a longer-term play for us. We want to generate revenues as quickly as possible, and we've seen a lot of interest from more of the edge folks, especially the government and the military. So while we're still talking with the hyperscalers, the majority of our efforts right now, candidly, are around getting short-term sales successes with the government and the military.
Okay.
I just want to remind the audience, if you want to participate, you can submit your question in the Q&A function at the bottom of your screen. We have another question here. How do you see revenues growing in the future? Do you see this flipping like a switch, and then we will see revenue grow rapidly to $10 million-$20 million a quarter? We've kind of been running in place or going backwards the past few years.
Right. With the SRAM, we see that slowly coming back; when I say SRAM, I mean specifically the commercial SRAM. We also have the radiation hardened and tolerant parts. We did mention on our last earnings call that we shipped a few parts this past quarter to a customer that's looking to launch their system.
It looks like it'll be the beginning of next year, which means they will probably take product from us at the end of this year, although the visibility isn't perfect as far as the timing. That would be our first radiation hardened production order. Certainly that would add revenue, but the exponential growth the questioner asked about would have to come from the APU. As I mentioned, we'll have the production-worthy hardware in the next month or so. We still need to get all the software tools out there, and that's going to take us a quarter or two.
We are looking to do some of the seeding in the second half of this year to get the pipeline going, and looking to next year for any kind of revenues.
Okay. What's been the feedback from early edge AI partners around latency and determinism?
I'm sorry. The feedback from the AI, the edge AI folks?
Yeah.
Yeah. The feedback has been really good. As I mentioned, one of the use cases was the drone in a no-GPS environment. That's just one example; there are a lot of use cases coming out where they need to do not just LLMs but computer vision as well at the edge. Most of the larger folks in the industry are really focused on data center LLMs.
We feel like we have a nice niche on the edge with this 10-W part that we think is going to be very successful. Certainly, the early indication from the discussions we are having is very positive.
Okay. Another question here. What is the timeline to raise money for the initiatives that you have mentioned?
Okay. As I mentioned, there are two areas where we are looking to raise funds. One is for the development costs of Plato; those discussions have been active. Certainly, there are a few benchmarks we need to run on Gemini two for at least two of those folks to demonstrate that it's real, and that will start the development. We are looking for that funding within the next two quarters, so by the end of the year.
For the other half, which is the equity raise I mentioned for really launching and making Gemini two successful, we need that within the next three quarters or so; that's what we're focused on.
Okay. I have one last question here. Given trends in the stock price over the last couple of years, what do you think the street may be underappreciating?
I think they're underappreciating the uniqueness of our IP and our technology. Certainly, I understand that the stock doesn't reflect what we feel the technology is worth; clearly, we need to get a couple of wins to show that it's real. I think the underappreciation is of the value of the technology, the IP, and all the patents that protect it. Certainly, that's worth a lot more than our market cap is today.
Okay.
Thank you. Time is up. I just want to thank everyone who participated. Actually, we have one more question here. Do you believe GSIT is the only true CIM company? Can you discuss what hurdles there are to product recognition in the marketplace?
We see nothing else that is doing true compute in memory. We've seen a few papers out there. In fact, there was one paper by a very large company sometime last year where the headline was CIM. As you start reading the paper, it's clear it's not CIM but near memory. Their last paragraph, which was a recap of the paper, says, "And in our near memory solution." Like I said, a lot of people talk about it, but we have not seen anybody who really is doing what we're doing.
Certainly, we feel that we are well protected with our patents on that technology.
Okay. Actually, time is up. There are no more questions in the queue. I just want to thank everyone who participated and to Didier and Liliin who joined us today. I am going to hand it over to you, Didier, for some closing remarks before we close it off.
Again, I think we are well positioned with our technology. You know, we just have to get over this last little hurdle. We do think that some of the milestones will be kicking in in the second half of this year. Thank you for joining us.
Thank you. Thank you, everyone.