Good afternoon, and welcome, everybody, to the final day of Needham's 26th Annual Growth Conference. My name is Quinn Bolton. I'm the semiconductor analyst for Needham & Company. It's my pleasure to host this presentation from GSI Technology. Founded in 1995, GSI is a leading provider of SRAM semiconductor memory solutions. The company recently launched radiation-hardened memory products for extreme environments in space, as well as the Gemini Associative Processing Unit, or APU, a Compute-in-Memory solution designed to deliver significant performance advantages for diverse AI applications. Joining me from the company today are Didier Lasserre, VP of Sales, and Doug Schirle, CFO.
Before I hand the call over to management to walk through the slides: if anyone listening to today's webcast would like to ask a question, please do so by submitting it in the dialogue box at the bottom of your screen. With that, let me hand it over to Didier.
Thank you, Quinn. Again, my name is Didier Lasserre, and I'll be going through the presentation today. Quickly, I know we won't spend time here: obviously, we'll be going through some forward-looking statements, but let's jump into the presentation. As Quinn said, we were founded in 1995 and went public in 2007. We are a fabless semiconductor company. We have used TSMC since day one and have a very strong relationship with them. We were the first company to have working 0.15-micron technology parts, and that led us to be their 0.13-micron copper process technology partner. So we've had, over the years, a very strong technology and business relationship.
As Quinn mentioned, we started the company as an SRAM company, with very high-end memories. These are memories that go into networking, telecom, and military kinds of applications, not commodity applications. So very stable ASPs, very long life cycles. Some time back, about seven years ago, we made an acquisition of some very early AI technology, and that's what we're focusing the company on, and that's what this presentation will mostly focus on as well. Jokingly, internally, we refer to ourselves as a self-funding AI company: we're using our legacy SRAM product lines to fund the R&D for our AI APU chip. Last year, we finished with revenues just under $30 million. The majority of that came from our legacy product line.
We outsource the really labor-intensive functions at the company, like fab and assembly, and so we have 156 employees worldwide. The technology in our AI chip is very unique, and we want to protect it, so we've been very aggressive about filing patents. We have 126 patents that have been granted, we have several in the pipe, and generally two or three are granted each quarter. We have just south of $25 million in cash and cash equivalents, and we've never carried debt in the company. Our market cap, I think this morning, was somewhere around $53 million. We do have strong insider ownership: management holds 32% of the company.
So there are two areas, as Quinn mentioned, that we're gonna be focusing on, two new market segments. The first is space semiconductors, which is projected to grow about 50% over the next six years. And then, of course, what we'll spend most of the time talking about this afternoon, the AI semiconductor TAM, which is poised to grow almost 3x over the next six years. What's important to know is that we have a lot of synergy between what we've been doing over the last 20+ years and what we're gonna be offering going forward. Specifically for aerospace and defense, it's really very closely tied to our commercially available products.
We had to change the recipe and the packaging in order to give parts the robustness to survive in space, but it wasn't really a full design change, so it was a nice, easy entry for us in that respect. For the AI chip, as I'll describe later, we're really using an SRAM memory cell to build our AI chip, which is something we've been doing for over 25 years, so it's really our sweet spot technology-wise. So there is a tremendous amount of synergy between what we've done and where we're going. It's not a whole new learning curve. Very quickly, to talk about the radiation-hardened and radiation-tolerant products: these are for applications, again, in space, mostly satellite types of applications.
What's very nice about these markets is that the ASPs and the gross margins are extreme. To give you a feel for it, our highest-density, highest-performing part at a commercial level sells for about $300 a part. That same part, taken full Rad-Hard and put in a ceramic package, could run upwards of $30,000 a part. Obviously, there's a tremendous amount of effort that goes into putting parts in space, and so there are, again, high ASPs that accompany that. We quote 85% gross margins; it's really over 90% for this market. And we're addressing markets that are about $100 million a year, with very, very long life cycles.
It's our intent to grab about 10%-20% of that market. Why it's key for our growth and profitability is that essentially every dollar we sell in this market almost falls straight to the bottom line, so it's critical for us. I'll spend the rest of the presentation talking about our AI chip, but before I talk about our chip, we really need to understand what the challenges are in this market. First of all, there's ever-increasing demand in this market. Besides databases growing daily, there are use cases that are really creating capacity issues. ChatGPT is a great example.
A year ago, no one had heard of it, and now it's being used extensively, a tremendous number of users are jumping on there, and it takes a tremendous amount of bandwidth. Another challenge for this market is the constant retraining. We talked about GPT-3; the rumor is it cost $300 million to train that model. And what happens with most models is that you get trained up, and then if any new parameter gets introduced, you have to do a complete retrain. A great example: let's say you have a model that identifies animals, and you have trained it on what a dog is, what a cat is, what a horse is.
But then if a new animal comes in, let's say a rhinoceros, the model won't recognize it, and so it'll ignore it. You show it a picture of a rhinoceros, and it says, "I have no idea." So if you want to add that rhinoceros, you have to go back and retrain the full model. That, again, is very costly and very time-consuming. And then lastly, one of the ways people address these capacity and retraining issues is to consistently throw more GPUs at the problem. All that's doing is obviously increasing costs, but it's also really becoming an issue with power consumption.
Some of these new data centers that are being put in are using almost the power budget of some small countries. So obviously, taking power out of the AI solution is critical. So how does GSI address some of these challenges? Firstly, we have a scalable technology, so as the use cases increase, as the capacity or the users increase, we're there. We can take one of our APUs (I'll use the word Gemini synonymously): we can put a Gemini chip on a board, we can put multiple boards in a server, we can put multiple servers in a rack, and it'll act as one large system. So we have a very interesting, scalable technology.
Our architecture also allows us to do something called zero-shot learning. Going back to that animal model: now a picture of a rhinoceros comes in. The current model ignores that picture. With zero-shot learning, we can say, "Okay, I see you're something unique, and you haven't been identified, but I'm not gonna ignore you. I'm gonna put you into a bucket." Then later on, another picture of a rhinoceros comes in, and we recognize it as that same unique thing and put it into that bucket, which is unlabeled, but we know it's something similar to that other picture. And then later on, somebody can come in, look at that bucket, and say, "Okay, it's a rhinoceros," and now it's labeled, and it's part of the model. You don't have to go back and completely retrain that model.
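(To make the zero-shot bucketing idea concrete, here is a minimal sketch in Python. This is not GSI's implementation; the embedding vectors, cosine-similarity matching, threshold, and class names are all illustrative assumptions. The point is just the flow: unknowns get grouped into unlabeled buckets, and labeling a bucket later adds a class without a full retrain.)

```python
# Minimal sketch of zero-shot "bucketing" for unrecognized classes.
# Assumptions (illustrative only): items arrive as embedding vectors,
# and cosine similarity to class prototypes decides recognition.
import numpy as np

THRESHOLD = 0.80          # below this, an item is "something unique"

prototypes = {            # labeled classes the model was trained on
    "dog": np.array([0.9, 0.1, 0.0]),
    "cat": np.array([0.1, 0.9, 0.0]),
}
buckets: list[list[np.ndarray]] = []   # unlabeled groups of similar unknowns

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify(x):
    # 1. Try the known classes first.
    label, score = max(((k, cosine(x, p)) for k, p in prototypes.items()),
                       key=lambda kv: kv[1])
    if score >= THRESHOLD:
        return label
    # 2. Unknown: see if it matches an existing unlabeled bucket.
    for i, bucket in enumerate(buckets):
        if cosine(x, np.mean(bucket, axis=0)) >= THRESHOLD:
            bucket.append(x)
            return f"bucket-{i}"       # same unknown thing seen before
    # 3. Brand-new thing (the "rhinoceros"): start a new bucket.
    buckets.append([x])
    return f"bucket-{len(buckets) - 1}"

def label_bucket(i, name):
    # Later, a human names the bucket; no full retrain needed.
    prototypes[name] = np.mean(buckets.pop(i), axis=0)
```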
And then lastly, as I mentioned, we have a unique architecture, something that essentially does away with von Neumann, and we'll talk about that on the next slide. That allows us to tremendously lower the power consumption of some of these solutions. If you look on the left, this is how the market is being addressed now: by CPUs and GPUs, and those are tied to the von Neumann model. What does that mean? It means that when that chip gets a query, it needs to go off chip to fetch the data in memory, bring it back to the chip, do whatever search or computation it was asked to do, and then write the data back to memory. So there's this constant back and forth between CPU and memory, and that takes a tremendous amount of power and time. If you look to the right, that's our solution. Notice something missing? The memory banks. Why is that? Because the memory is in our chip, and we're actually doing the search or the computation directly in the memory array itself, so we're not going off chip. This is what's called Compute-in-Memory.
There's been a lot of research recently discussing this approach, and some people have been waving their flags saying, "Oh yeah, we do Compute-in-Memory." They actually are not, and in fact, we'll be putting out a white paper shortly to discuss what CIM really is. What most solutions are doing is something called near-memory: they're bringing the processing elements and the memory closer together, but they still have the model on the left, which is fetch and rewrite data, while we have a true Compute-in-Memory architecture, where it's actually happening in place.
Why that's important is that we're not fetching the data, because it resides where we're doing the computation or the search, and we're not having to rewrite that data back to memory, because it stays in place. So it's a very efficient model. I want to take this opportunity to make sure we understand that we're not focusing on the training market. In training, the GPUs have obviously done a good job; they have a lot of the functionality required for that, like MAC functionality. We are focused more on inference, search, and HPC kinds of applications, and our solution lends itself well to those markets. First, we have data type flexibility. And this is critical.
This is something that, in some of the conversations we've had with hyperscalers, they really focus on. What data type flexibility means is that our solution is a bit processor. Our Gemini-I, our first-generation chip, has 2 million bit processors on it. Now, there's a lot of research happening now showing that certain models are more efficient at different precisions. It might be a 4-bit precision, a 5-bit precision, make up a number. Things like GPUs are hard-coded for certain precisions. A GPU might be a 16-bit GPU or a 32-bit GPU, and it's hard-coded for that. So if your traffic patterns come in and they're 16-bit and 32-bit, that's great, a very efficient model.
But if they come in at 4-bit or 5-bit, it's not an efficient model. If you had that kind of traffic pattern, you would most likely take an 8-bit GPU, and then if it's 5-bit traffic, you're throwing away 3 bits every cycle. That's not efficient. With GSI, as I mentioned, our solution is a bit processor. We don't care what the traffic looks like. It can be 1-bit, 10-bit, make up a number between 1 and 2 million bits. And since we're not hard-coded, we can change cycle to cycle. If your traffic pattern this cycle is 4-bit, we process it at 4-bit. If the next one's 16-bit, we process it at 16-bit. We can change on the fly.
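(As a rough illustration of the two ideas above, the in-place search and the per-cycle precision flexibility, here is a toy bit-plane model in Python/numpy. It is only a sketch: data is stored as one bit-plane per bit position, every stored word is matched in parallel, and a query's precision is simply the number of planes touched. None of this is GSI's actual hardware or microcode.)

```python
# Toy bit-plane model of an associative (in-place) search.
# Each column is one stored word; each row (plane) is one bit position.
# A k-bit query only touches k planes, so precision can vary per query.
import numpy as np

N_WORDS = 16                     # stand-in for millions of bit processors

def store(values, bits):
    """Pack integers into a [bits, N] array of bit-planes."""
    v = np.asarray(values, dtype=np.uint64)
    return np.stack([(v >> b) & 1 for b in range(bits)])

def match(planes, key, bits):
    """Exact-match search against all words at once, at k-bit precision."""
    hits = np.ones(planes.shape[1], dtype=bool)
    for b in range(bits):                # one pass per bit of precision
        hits &= planes[b] == ((key >> b) & 1)
    return np.flatnonzero(hits)          # indices of matching words

data = store(np.random.randint(0, 256, N_WORDS), bits=8)
print(match(data, key=7, bits=5))        # a 5-bit query this cycle...
print(match(data, key=7, bits=8))        # ...an 8-bit one the next cycle
```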
We also have a very large L1 memory on the device, and the bandwidth to that memory is extreme; that's really our advantage here, along with the lower power consumption. So I wanna give you two real use cases, and they're of different types. The first is a 1-billion-item dataset search. The numbers on the left were taken from a paper published by some AWS engineers. What they did is they asked, "Okay, what does it take, hardware-wise, to do a 1-billion-item dataset search?" And they came up with a model that would take 12 nodes of an Intel Xeon Platinum CPU.
Each one of those nodes consumes about 200 watts of power, so for them to do that search, it would be about 2,400 watts. And the person using those AWS servers, the end user, would pay about $54,000 a month in instance charges. So we did that same exercise using our first-generation APU chip, Gemini-I, and we could do that same 1-billion-item search in one node, and our chip runs at about 40 watts. Candidly, we need a host, so including a CPU takes our power up to 240 watts. That is one-tenth the power of the CPU solution. So obviously, that's saving the data center a huge power bill at the end of the month.
Also, since we're so much faster and higher performing, the instance charges for the users would be just under $11,000, so it reduces their cost by about 80%. So we have the dual advantage of saving the data centers money on power bills and saving the end users money on usage fees. The second example is SAR. This is a customer of ours that wanted us to do a comparison for SAR. If you're not familiar with SAR, it stands for Synthetic Aperture Radar. Think of it as LiDAR on steroids. LiDAR bounces laser beams off a surface to create a topography, while SAR uses microwaves.
SAR is a superior technology because it works through weather like clouds, and it works at night, so there's certainly a huge use case for it. Now, this particular customer came to us and said, "Hey, I need to create an image that's 5 km by 5 km. It has to have half a meter of resolution, and I need to create that image in 1 second. What hardware is required to do that?" If you did it with Intel Xeon CPUs, it would require 23 cabinets. This customer was looking at the NVIDIA V100, and doing it that way would take five cabinets of NVIDIA hardware.
For us, it's a couple of servers only, so less than a third of a cabinet. Obviously, we're gonna use much, much less power than those other solutions. That's the first advantage. Number two, the capital expenditure will be less. But there was also another advantage that came out of this, which is that our solution became portable. What I mean is, the NVIDIA and Intel solutions would have to reside in a data center, while ours can be in a data center, it can be in the back of a van, it could be mounted on a drone. It expanded the use case possibilities for our customer.
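(To make the arithmetic behind these two comparisons explicit, here is a quick back-of-the-envelope check in Python. The wattages, instance prices, and cabinet counts are the figures quoted above; everything else is arithmetic.)

```python
# Back-of-the-envelope check of the figures in the two examples above.
# All inputs are the numbers quoted in the talk; the rest is arithmetic.

# Example 1: the 1-billion-item dataset search.
cpu_power = 12 * 200                 # 12 Xeon nodes at ~200 W each = 2,400 W
apu_power = 40 + 200                 # Gemini-I (~40 W) plus a host CPU = 240 W
print(apu_power / cpu_power)         # 0.1 -> one-tenth the power

cpu_cost, apu_cost = 54_000, 11_000  # $/month instance charges
print(1 - apu_cost / cpu_cost)       # ~0.80 -> roughly 80% cheaper

# Example 2: the SAR image (5 km x 5 km, 0.5 m resolution, 1 second).
xeon_cabinets, v100_cabinets, apu_cabinets = 23, 5, 1 / 3
print(xeon_cabinets / apu_cabinets)  # ~69x smaller footprint than the CPUs
print(v100_cabinets / apu_cabinets)  # ~15x smaller than the V100 build
```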
So I wanna talk a little bit about our current chip and the future chips. Our current chip is Gemini-I. We are really focusing it on only a couple of markets. The first one is the SAR I just mentioned, and anything around that kind of processing: image processing, facial recognition, and also drug discovery by searching for new molecules would fall under that as well. I also mentioned the fast vector search for the search functions; we're gonna be using Gemini-I for that. And it can also be used for the retrieval portion of ChatGPT, which is going off and getting the data the model needs to be able to give you an answer.
On the software side, we have developed many libraries to support all these different functions, and we've also written some algorithms for specific use cases like SAR and drug discovery. But we can't write all the algorithms for all the potential use cases; that's just not possible. So what we're doing is offering a compiler stack, so folks can write algorithms at a high level that then get translated into machine code that our part recognizes. We introduced a C-based compiler stack last year, and shortly we'll be introducing a Python-based compiler stack for Gemini-I, and this will be available to the market. We also announced this past quarter that we have our second-generation chip coming out, Gemini-II. We finished the design last quarter.
We're in fab now, and we'll have first silicon in hand by the end of next month, the end of February. We're excited about Gemini-II for a couple of reasons. Number one, it has 8x the memory capacity on chip. Number two, it runs at 2x the clock frequency, so we're looking at about 10x the performance of Gemini-I. Also, we sell the Gemini-I solution as a board because there's an FPGA on there that handles certain functionality. All of that functionality that's in the FPGA on the first-generation solution is now on die with Gemini-II, so we don't need the FPGA anymore. What that's gonna allow us to do is go closer to the edge for applications.
Gemini-I will remain focused on the niche-y markets we've discussed, and Gemini-II will go after the higher-end markets and more of the edge. Gemini-III, which I'll talk about on the next slide a little more, is the next generation. So what's Gemini's role in the growth of the company? With Gemini-I, we're looking to start working with some of the SAR customers, either as a hardware/software on-prem solution that they buy from us, or as a SaaS model. Some of the SAR folks actually use data centers like AWS to do their processing.
And so we'll be using our data centers to take that offload from AWS and do the processing for them. As I mentioned, the design of Gemini-II is finished and taped out, and we'll have first silicon next month. We'll go through a debug process, so it'll most likely be summertime before we have second silicon, which is most likely the device we can do benchmarking with. So we're anticipating doing some kind of sampling by the end of calendar 2024. Also, something unique we've started looking at is IP sales and licensing. We do have an interesting technology, and there are lots of use cases that we're not gonna go after.
Things like mobile, things like IoT, where if you can take just a small portion of our circuitry and embed it into an ASIC that goes into those markets, it would certainly be an advantage for them. So we've started discussions with folks about licensing our IP. And then lastly, I mentioned Gemini-III. Gemini-III for us is gonna be focused on generative AI and all these large language models. If you look at ChatGPT, you have the retrieval portion and you have the generative portion. The retrieval can be done with Gemini-I or Gemini-II, but the generative portion has to be Gemini-III.
We're actively looking for, A, a technology partner and, B, an end customer to fund this, 'cause this will be an extensive project, and we'd like to have somebody else fund it. Again, if you're familiar with ChatGPT, you know the use cases are extreme. Anybody who's used it knows it's an unbelievable tool. It can write papers for you, press releases, it can recommend where you should go on vacation in June, what have you. So this market is going to explode, and that's why we're very motivated to make sure we have the current generations for the retrieval portion and the next generation for the generative portion, to make sure we're there for this market.
So lastly, the radiation-hardened memory and the APU are really the future of the company. They both have growing markets, and both have high margins, certainly north of anything we have today. The APU really is a unique solution: the fact that it's a bit processor, and the fact that it's not stuck with the von Neumann architecture, really gives us an advantage, both in performance and in power consumption. At this point, I'd like to open it up for questions and answers.
Great. Thank you, Didier. I've got a couple of questions, and I'll monitor the web chat for incoming questions from the audience. But I wanted to start off first with, you know, certainly not as sexy as generative AI or the APU... but radiation-hardened technology getting you, you know, kind of 85%-90% gross margin, certainly not a bad market. What did you guys do to radiation-harden your SRAMs? Is it just that you build in lots of redundancy? Is it a process technology change? Can you sort of walk through what changes from a normal SRAM to a Rad-Hard SRAM?
Sure. Yeah, it was really more of a process change than a design change. We really had to experiment with the process. What I mean is there are a few key elements there. You have something called TID, Total Ionizing Dose, so the part has to be able to absorb a certain total dose of ionizing radiation. Then you also have ESD, obviously, Electrostatic Discharge, and then you have all the Single Event Latch-ups and upsets and everything else. The part can't latch up, and what I mean by that is basically freeze. And they all behave differently, so we had to find the right balance. It actually took us about a year to find the right recipe, one that gave us the maximum of each of those three without hurting any one of them. That, along with some package changes, gave us the robustness that's required.
Yeah. So these are things like maybe different film thicknesses on the process technology side. Did you have to go to things like error-correcting codes that would keep track of bits to make sure one didn't flip, or, as you were saying, latch up? I mean, is that something you can detect in the silicon if that latch-up occurs?
Sure. Our highest-end parts have ECC already in them, even on a commercial level, and so that was already built in.
Okay.
But that does help with single-bit kinds of issues. This is more than just single bits, right? You can't have latch-ups, and that's something that doesn't happen at a commercial level. But with the radiation that comes from space, those are certainly very real possibilities, and so we had to make sure that we didn't have those issues.
And then as you sort of target those radiation-hardened units, are these mostly military applications? Does it also include the commercial side, like satellite communications, LEO satellites, things like that?
Yes. So if you look at LEO, those are, as you say, low-Earth-orbit satellites. The ones that stay close to Earth are less of an issue, so generally, Rad-Tolerant is good for those; the requirement is a little lower. As you get into GEO orbits, which get a little wider and closer to the sun, generally, especially for communication satellites that want to be up there for quite some time, you need Rad-Hard. In fact, the closer you get to the sun, the worse the issues you have. So GEOs tend to be Rad-Hard.
Anything that's LEO can be Rad-Tolerant, and we're also working with some customers on things like Mars probes and so on. That's going away from the sun, so generally, Rad-Tolerant is sufficient for those as well.
Got it. Okay, we've got a couple of questions that have come in from the audience. The first one: Is Gemini-I an SRAM derivative or something more differentiated that can do matrix manipulations and arithmetic? Is Gemini-I Rad-Hard, or are just some of your SRAMs Rad-Hard?
Great questions. Okay, there's a lot of questions there, so let me hit some of the...
Yeah, there's one more, so I'll come back, but I didn't realize there were three or four of them baked in.
Sure.
So.
So let me hit the easy, quick ones first. The SRAMs are, yes, Rad-Hard and Rad-Tolerant. Gemini-I, we have gone under the beam this past... Well, I was gonna say this past month. It was actually two months ago, in November. So we put Gemini-I under the beam, and the report will be available shortly. The good news is there were no single-event latch-ups. That's key. There was some bit flipping, which is to be expected, and we'll have the report out. So that will not be Rad-Hard; that will be Rad-Tolerant, and that's what we're focusing on for the APU, Rad-Tolerant. What were the other questions in that? There were a couple more.
Is Gemini-I an SRAM derivative or something more differentiated that can do matrix manipulation or arithmetic?
It's a bit processor, but we do use SRAM cells. So the design is SRAM cells, and there is a lot of memory there, but it's a processor. And it does do Boolean functions, so it's not just a memory that has some processing elements near it. It's a processor that is built within the memory array.
So, I think you said 2 million bit processors. If you had inputs that were, say, 6 bits, could you link 6 of these bit processors together to do multiply-accumulate functions? Or, if the application requires multiply-accumulates, is that a case where you'd probably choose a different architecture, like a GPU or some other kind of AI processor, rather than the APU?
Sure. So the answer is we can do MAC functionality. It's done at the software level. The GPUs' MACs are hard-coded, right? It's done in the circuitry, and that's why they're really, really good at training, because that's important for training. In our first-generation chip, the MAC functionality is done mostly in software. So we can do training, but that's not what we're focused on.
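(As a sketch of what "MAC in software" can look like on a bit-oriented processor: a multiply decomposes into AND operations plus shift-adds, one pass per bit of the multiplier, applied across all words in parallel. This is an illustrative toy in Python/numpy, not GSI's library code.)

```python
# Toy shift-add multiply-accumulate built from bitwise primitives,
# applied element-wise across a whole vector at once (illustrative only).
import numpy as np

def mac(acc, x, w, bits=8):
    """acc += x * w, with the multiply done as AND + shift-add per bit."""
    for b in range(bits):                        # one pass per bit of w
        partial = np.where((w >> b) & 1, x, 0)   # AND with bit b of w
        acc += partial.astype(np.int64) << b     # shift-add the partial
    return acc

x = np.array([3, 5, 7], dtype=np.int64)
w = np.array([2, 4, 6], dtype=np.int64)
acc = mac(np.zeros(3, dtype=np.int64), x, w)
assert (acc == x * w).all()                      # 6, 20, 42
```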
Okay. Last question from this first investor: What is your current Rad-Hard revenue and forward visibility?
So this past year, I don't have it in front of me, but we did just south of $1 million. And again, this is all prototyping; none of it's production yet. This year will be at least that much, probably $1 million or more, and again, that's just the prototyping. And then with production, we're looking over time to get into the $10 million to $15+ million range, hopefully up to $20 million a year.
Would it be sort of production in 2025, or could it be even further out?
So that's a really good question. The timing of these markets is really unpredictable, so let me give you a great example, something we've talked about on earnings calls in the past. We had one of the prime contractors buy some parts from us that they were gonna put on a satellite going into space, and they bought the parts over two years ago, and their intent was to have the satellite complete and launched within six to nine months of when we delivered parts. That satellite is still not in space. So the timelines in these markets are really unpredictable, and it's hard for me to tell you exactly when it's gonna happen.
The good news is that we have seeded several different applications with several different contract manufacturers and defense contractors. So it's just a matter of getting some of those to finally get into space and finally go into production.
Got it. Okay, question from a second investor: Won't you have to go off chip to do inferencing of any decent-sized LLM? The parameters presumably won't fit in the SRAM on a single chip.
That is correct. For LLMs, that is correct. Those are enormous. I mean, if you look at GPT-3, I believe it's 175 billion parameters, and I want to say that GPT-4 is gonna be, you know, 3 or 4 trillion parameters. So you're right, that's not gonna fit on a chip. That's why, as I mentioned, Gemini-III is what we're going to be targeting for that. And, as I mentioned, we need a technology partner, I'm not sure if you caught that, because what we're really, really good at is memory bandwidth, and an LLM needs both memory bandwidth and very, very high-capacity memory, which are generally mutually exclusive.
So we're bringing those together: we need a technology partner who can bring the capacity, we have the memory bandwidth, and we want to marry the two. So your questioner is correct: on our chip today, we couldn't do an LLM. Some of these Llama models we can do, but when you get into the larger models, we need a different technology, and that's what we're working on.
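(A quick sizing check of why LLM weights can't live in on-chip SRAM. The parameter counts are the ones quoted above; the bytes-per-parameter and the on-chip capacity figure are illustrative assumptions, since on-chip SRAM is measured in megabytes while these models need hundreds of gigabytes.)

```python
# Why LLM weights don't fit on chip: a rough sizing check.
# Parameter counts as quoted above; other figures are assumptions.
GB = 1e9

def weight_footprint_gb(params, bytes_per_param=2):   # assume FP16/BF16
    return params * bytes_per_param / GB

print(weight_footprint_gb(175e9))   # GPT-3: ~350 GB of weights
print(weight_footprint_gb(3e12))    # a multi-trillion-param model: ~6,000 GB

on_chip_sram_gb = 0.1               # illustrative: on-chip SRAM is MBs, not GBs
print(weight_footprint_gb(175e9) / on_chip_sram_gb)   # thousands of chips' worth
```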
Yeah, maybe I was gonna ask more about that technology partner. It sounds like it's some kind of memory technology you need to pair up with Gemini. Can you elaborate more on what kind of memory or technology partner you'd be looking for? It sounds like it's not just a financial partner to provide NRE; it sounds like they're really bringing part of the solution to you.
Correct. I can't get into too much detail, but the answer is you pretty much hit it. It's somebody who's gonna have very high-capacity memory that is very high bandwidth. And you're right, they are more of a partner. Whether they bring some of the NRE is to be discussed, but the NRE will come more from an end user. That's why I mentioned it really needs to be kind of a triangle relationship, where we have an end user who will do the majority of the funding.
Got it. So it would be, I mean, maybe a hyperscaler, not necessarily, 'cause that's probably more training, but somebody who's gonna deploy this in an inferencing application would be that third partner, or that third leg. Yeah. Okay.
Exactly.
Let me see. We've had a couple more come in. What is the revenue visibility for Gemini-I? Is the $1 million+ for prototyping and the $10 million-$15 million strictly for Gemini-I, or is that for the Rad-Hard SRAM?
That was the Rad-Hard and Rad-Tolerant, both. I mean, I combined them, so that's...
Okay, Rad-Hard, Rad-Tolerant.
That's for the Rad-Hard and Rad-Tolerant. For Gemini-I, we've basically been seeding the market with that solution. What I mean is, we've sent either boards or servers out to different entities. We've been focusing on seeding government, defense, and military customers, and also researchers. We've sent boards and servers to researchers for different applications: some are for encryption, some are for genomics and DNA sequencing, just different types of applications. We wanna take the model of having researchers help us. What I mean is, we know we can't create the ecosystem all by ourselves, we need help, and so we're really focused on those areas.
And they have all agreed to publish papers after their findings. Two of them have come out. There's been one from NAU, Northern Arizona University, which focused on an encryption solution, and they have released a paper on that, and then recently Cornell did one as well; I wanna say that one was on DNA sequencing. So we certainly wanna continue to get the word out about our technology and expand some of these markets with help from folks like that.
Great. It looks like we've got about a minute left, and there's one more question in the queue from investors. Any comparison you can make to SK Hynix's Compute-in-Memory solution?
I'm not familiar with theirs. There has been an announcement about SK Hynix and a potential relationship with NVIDIA in the future, but as far as Compute-in-Memory, most everything we've seen has really been PIM, which is Processing-in-Memory. People try to use Compute-in-Memory and Processing-in-Memory synonymously, and they're not the same. Really, PIM just has the processing elements near the memory elements. At the end of the day, I can't speak to the specifics of the SK Hynix solution, but my guess is it's a near-memory solution and not true Compute-in-Memory.
Okay, we are at the end of our time. Didier, Doug, thank you very much for joining us at the Needham Growth Conference. We really appreciate your participation.
Thank you, Quinn. Thank you.
Thanks, everybody.