Great. I'm Tristan Gerra, Senior Semiconductor Analyst at Baird. I would like to introduce Everspin Technologies, a supplier of next-generation memory technologies, and we're pleased to have with us today Sanjeev Aggarwal, President and Chief Executive Officer, and Anuj Aggarwal, Chief Financial Officer. This is a formal presentation, so there's going to be slides. So, Sanjeev, the floor is yours.
Thank you, Tristan. Thanks for the opportunity to present over here.
We can see the slides perfectly.
Great. So to kick off, I just wanted to make a safe harbor statement about forward-looking statements. Any forward-looking statements that we make are not a guarantee; they reflect what we're planning to do. In terms of the company overview, Everspin is the leading provider of MRAM technology and products for mission-critical applications. We've been in production for 15 years now. We were formed in 2008, and we have fab partnerships with NXP and GlobalFoundries. The NXP fab is over here in Chandler, Arizona, close to our headquarters; that is an 8-inch fab. And then we outsource our 12-inch work to GlobalFoundries, which is where we do our STT, or spin-transfer-torque, MRAM production. Our parts go into data center, industrial, IoT, automotive, and radiation-hardened applications.
We have shipped over 150 million units since our inception to more than 2,000 customers, which include companies like Siemens, IBM, and Schneider, for example. I will go through a more detailed list as I go through the presentation. We maintain our leadership in this technology; we have filed more than 650 patents and applications worldwide and have successfully licensed other companies. For example, GlobalFoundries is licensed for our STT-MRAM technology. Similarly, the U.S. government, through Honeywell, Frontgrade, and QuickLogic, is licensed for our STT-MRAM technology for specific fields of use. We do have satellite sites that are shown on the map over here: we are in Asia, we are in Europe, and obviously we are in North America as well. To give you some highlights, we are the sole domestic provider of MRAM for mission-critical applications.
That makes us of strong interest to the U.S. government for national security applications. Our technology is naturally radiation-immune, which is why it's critical for outer space applications. It's very robust at extended temperatures, up to 150 degrees Celsius, which is another reason it's of interest for national security applications. We have a diversified blue-chip customer base that I'll go through in a couple of slides, where I'll talk about the end markets and applications; it's mostly focused on industrial automation and automotive applications. We have a large market opportunity of approximately $7.4 billion by 2027. And we are able to do this with a proven management team that has extensive experience delivering market-leading technology solutions.
I would like to mention that we are the pioneers in commercializing this MRAM technology, first in 2006 with what we called Toggle MRAM, and then again in 2014 when we commercialized spin-transfer-torque MRAM, being the leader in both cases. We do have a strong financial position: we are now debt-free, we keep expanding our operating margins, and we have positive free cash flow. So why MRAM? What is the value proposition that we bring to the table? The first one is persistence, which basically means we do not require power: you can turn off the power, and our memory retains the information that you've written to it for extended periods of time and over extreme temperatures. Performance: we have very low latency, with SRAM- and DRAM-like performance. Endurance: superior read-write cycles, so it supports different memory workloads.
And you don't need any sophisticated management like you need with NOR Flash or SRAM, which degrade over time. In terms of reliability, we have best-in-class robustness: like I was mentioning, -40 to 125 degrees Celsius, even 150 degrees Celsius, and tested for extreme conditions, which include space as well as temperature. So you can use this memory to behave like an SRAM or DRAM, or even with non-volatile capability like flash. Based on these properties, we have divided our business into three different categories. One is PERSYST, and that is a play on the words "persistent data memory." Included in this category is Toggle MRAM, the one that we brought to production in 2006.
STT DDR: these are the commercial parts that we've been shipping into the data center since 2014. And the last one, STT X5, is what we commercialized in 2022. Common to all of these is fast data logging and extreme-temperature reliability. The TAM for all of this is about $2.5 billion by 2027. The second category is unified code and data memory. This is where you want to store your code for bootup and then be able to execute it, and where you can also store some data. This is where MRAM is unique: we can execute in place as well as provide code and data storage in the same memory. The play on words here is that we are calling it UNISYS. And this includes the xSPI family; xSPI is the expanded Serial Peripheral Interface.
And by xSPI, we mean it can be quad-SPI, dual-SPI, or single-SPI. We have products in this family in the market today. Then there is the SiP solution: you can take known good die of this product and package it in system-in-package solutions, for example with an FPGA or microcontroller. The next one is an LPDDR interface, which enables fast reads and writes using our STT-MRAM. The challenge with the current technology is that flash is very slow to write, so this brings an improvement to the memory with the LPDDR interface, and it's directly relevant to automotive applications; I'll talk about that in a couple of slides. And last but not least is a chiplet.
Obviously, this is in its early stages, but we are working with customers to develop the right interface that is of interest to them. The last category is AgILYST, and this is where we are promoting our innovation for transformation: MRAM for the AI inference engine. AgILYST basically stands for agile. We have a distributed MRAM across the FPGA, for example, that allows us to be agile as well as relevant for AI inference engines. Our path to market there is selling those memories as well as licensing the technology to the foundry so that we can deploy it based on the customer's use. The TAM for that is on the order of $17 billion, and the TAM for UNISYS is on the order of $4.9 billion by 2027.
So how does this tie in with current memory? What I'm doing over here is plotting the read-write cycles on the Y-axis and the latency on the X-axis. On the extreme right is the storage market; that is where, for example, NAND has the biggest play. It's a really slow memory, and it's really cheap, so that's not where MRAM plays. But MRAM can play in replacing NOR, which is pretty slow for writes. Our memory can write pretty fast, on the order of 100 nanoseconds, almost three orders of magnitude better than NOR Flash, with an order of magnitude and a half better read-write cycle endurance as well. So that's the STT-MRAM for enhanced NOR. And then, like I was saying, there is Toggle MRAM.
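The "three orders of magnitude" comparison can be sanity-checked with quick back-of-the-envelope arithmetic; the ~100 microsecond NOR Flash program time used below is an illustrative assumption on our part, not a figure given in the presentation.

```python
import math

mram_write_s = 100e-9   # MRAM write latency cited above: ~100 ns
nor_write_s = 100e-6    # assumed effective NOR Flash program time (~100 us)

# Ratio of the two latencies, expressed in orders of magnitude (powers of 10)
orders = math.log10(nor_write_s / mram_write_s)
print(round(orders, 1))  # 3.0 -> "almost three orders of magnitude"
```

Real NOR Flash program times vary widely by device and access pattern, which is why the claim is hedged as "almost" three orders of magnitude.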
It's actually very similar in latency to SRAM and DRAM, with almost unlimited read-write cycles, all the way up to 10^14. So comparing it to the previous slide, this is where persistent memory is, and this is where our UNISYS memory is. Okay. And if you look at the market, this is data from a memory industry report in 2022. On the left over here is the memory market space in 2021, growing at a CAGR of 8% through 2027. The three markets that MRAM is relevant to are NVSRAM, FRAM, and emerging non-volatile memory; they fall under the category of persistent memory. And you can see that the market for NVSRAM and FRAM is flat from 2021 through 2027 at $0.7 billion.
But the emerging memory actually grows from $0.6 billion to $1 billion to $1.8 billion. If you look at UNISYS, the TAM actually grows from $3.5 billion to $4.9 billion. The reason is that flash stopped scaling at 40 nm CMOS, so if you're moving to advanced technologies and trying to use flash, you have to make a choice, and MRAM seems to be the ideal choice. We believe that this is a very large market that MRAM can play a role in. Next, I'm going to go through some examples. These are the PERSYST application examples. The common theme over here is low latency, or fast data logging, and reliability at extreme temperatures. We'll go through a couple of examples.
Over here, if you're looking at aerospace and transport, for data recorders or black boxes, you want to log the data pretty quickly, so you want really, really fast writes. Flash doesn't work for that, and DRAM, for example, is not non-volatile; you'll actually lose the data. That's the reason why MRAM plays a role over here, and it's being designed in by various automotive customers and definitely for aerospace. Same thing for medical patient records: you want fast data logging. In automotive, for EVs, for example, you want real-time monitoring of the inverters and sensors that are used with the batteries, so it plays a strong role in the battery management system. Gaming is another example over here: in casinos, you actually have to log every action in three different locations at very fast speeds.
So in fact, we have a very large portion of the gaming market. And by gaming, I mean casino gaming, not computer gaming, where in Europe as well as in the U.S. we have been able to capture almost 80%-90% of the casino gaming market. And then, because of our radiation immunity, we have applications for code and data storage for aerospace. Another large market is the PLC, the Programmable Logic Controller module. Companies like Siemens, Keyence, and Schneider use our memory. The idea over there is that if you're using standard memory and you lose power, everything that you have in the manufacturing line is scrap; you lose all of that volume of material. With our technology, if you lose power, our memory is persistent, so it remembers where every robot arm was.
On power-on, you can continue wherever the robot stopped, and therefore you don't have any scrap. And similarly, like I mentioned earlier, anywhere that you're using batteries, for battery health management, we play a role. So once again, our value proposition is low latency and reliability at extreme temperatures. Here is an example of UNISYS for industrial IoT. On the left, you're looking at industrial IoT, 5G applications where you have sensor inputs and LPDDR, low-power DRAM. You're actually using three different memories for different purposes: one for code storage, one for data logging, and then NAND for data storage. What you can do with UNISYS MRAM is replace all these memories with one single memory that will do all three functions. It'll log data quickly, which is our strength, and it'll also store data.
In some applications, you might want to keep the NAND because it's much, much cheaper. But if you're looking to consolidate all the memory, the UNISYS MRAM is a good choice. Similarly, in the FPGA market, if you're looking for fast over-the-air updates, like I said earlier, with our UNISYS MRAM you can write on the order of 100 nanoseconds as opposed to the microseconds you have with current NOR Flash. So much faster writes, and also a multipurpose memory function in one chip. That's the value proposition of MRAM in this industrial IoT space. Here is an example: for a 128 Mb image, program time reduces to about 2 seconds, versus several minutes if you're using traditional NOR Flash memory. Here's an example application for UNISYS in automotive cases.
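The seconds-versus-minutes comparison above is consistent with simple throughput arithmetic; both effective write rates below are assumed, illustrative figures on our part, not Everspin or vendor specifications.

```python
# Hypothetical effective program throughputs, for illustration only
image_bytes = 128 * 1024 * 1024 // 8   # a 128 Mb firmware image = 16 MiB
mram_Bps = 8 * 1024 * 1024             # assumed MRAM write rate: 8 MiB/s
nor_Bps = 100 * 1024                   # assumed NOR Flash program rate: 100 KiB/s

print(image_bytes / mram_Bps)          # 2.0 (seconds)
print(image_bytes / nor_Bps / 60)      # ~2.7 (minutes)
```

With these assumed rates, a few-hundred-times throughput gap is what turns a multi-minute firmware update into a roughly two-second one.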
This is where the industry is headed: everything is going from zonal to central architectures, and that requires much faster code execution and a lot more memory. NOR Flash, because it stops scaling at 40 nm, doesn't quite do the job. So we have three different options over here. One is that you can use MRAM chiplets, and those will serve as the boot memory, where you can boot off of the code that you stored there. The other example is an open architecture, where you have the MRAM outside the chip but can use it for flexibility: you can rewrite or reprogram the code because it's easier to upgrade, it's very fast, and you can get it in a more advanced process. Flash stops at 40 nm.
UNISYS MRAM can be available at 22 nm and even 1x nodes, 16 or 12 nm. Then the other one is the hybrid architecture, where you use a chiplet for booting and then a discrete UNISYS MRAM outside for program execution and flexibility. So here, you basically see performance, reliability, and power and speed as the value proposition that MRAM brings to the table. Next, I'll go through some of the customers that we've been working with since 2008. On the left, you have the enterprise customers, for example IBM, Dell, and Microchip. The idea over here is that you're replacing the RAID memory with MRAM; the RAID memory is actually very slow to write.
Because of our fast data logging, all these companies are replacing their general RAID memory with Everspin MRAM. The next one is industrial automation: the Siemens, Schneiders, Keyences, and Mitsubishis of the world. The idea, again, is the persistence benefit I was talking about: if you lose power, you don't have to scrap all the material in the line; you can restart wherever the manufacturing line lost power or got interrupted for whatever reason. Medical, again, is fast data logging; network and infrastructure, same reason. Casino gaming, right? That's what I was talking about; for example, in Europe and the U.S., IGT.
Now also in Japan, they have realized that this is the way to go if you want to do fast data logging and store all the actions that are happening on the casino machine. One other thing I didn't mention earlier is that our memory actually has tamper detection. If anybody tries to turn off the power or use a huge magnet to disrupt the memory, it'll detect it and shut the machine down, so you don't lose any data, and the consumer is unable to use that casino gaming machine anymore. Last but not least is military, aero, and transport. This is a high-margin, low-volume business for us, for the U.S. government.
There, as you see, are the various DIB customers: Frontgrade, Honeywell, and also Airbus, which is looking at data recording for the black boxes, so we are designed in over there as well. e2v upgrades our parts and supplies them to the space industry as well. So it's a large breadth of customers that we've been serving, and these categories form the 2,000+ customers that we've been shipping to since 2008. Here, I'm highlighting our mission-critical applications for outer space. You can see on the left that our memory has been on the Mars rover, Perseverance, since 2020, and we are actively collecting data from Mars on a regular basis. The other one is the Lucy mission, which is going to Jupiter; our Toggle MRAM is also designed into that one.
And last but not least, over here I'm showing EVs, where we've been designed into the drivetrain and battery management systems. We have been designed in because of our extreme-temperature reliability and fast data logging. And STT-MRAM provides the promise of scaling from low densities like 16 Mb all the way up to 1 gigabit for these applications. In terms of capabilities, we have a design team that has been designing our parts across various designs: a serial interface, the xSPI; a random access interface; and also custom interfaces for our customers, which include the U.S. government, where they come with requirements to modify the interface, and we've been able to enable that design.
This also includes GlobalFoundries, for example: their embedded design was supported by Everspin and is based on an initial embedded design that we transferred to them. So we have experience with multiple successful engagements, whether it's the U.S. government, Honeywell, Frontgrade, or GlobalFoundries in the commercial space. In terms of advanced nodes, we have 12-inch STT-MRAM with GlobalFoundries on 40 nm, 28 nm, and 22 nm, and we do have a JDA signed on 12 nm as well. We have been in production at GlobalFoundries on 40 nm since 2017. Currently, we are on 28 nm with a 1-gigabit part for the data center, and we are planning to bring out a 22 nm part in 2024, going into high-volume production in 2025. We also have an 8-inch line over here in Chandler.
And I save that for last because it actually provides us an edge over our competitors. Having our own line has been a blessing because it has given us the ability to support trusted U.S. government programs, and we can do all our R&D over here. We do all our innovation and R&D in our 8-inch line, so all the IP is secure, and we are also not dependent on somebody else to do the innovation. That leads into the next topic that I wanted to talk about: this is where we have done our most recent innovation, which we're planning to bring to product in the 2024, 2025, 2026 timeframe. The first one is MRAM for FPGA, where we have built a distributed MRAM solution that we're calling configuration MRAM.
Over here, we are talking about replacing Flash with chiplets or a system-in-package solution, because Flash stops scaling at 40 nm. So if you want to use a 22 nm or a 16 nm or a 12 nm MRAM to replace NOR, we offer this capability; that's the UNISYS product we talked about earlier. Here in Chandler, we've developed novel Everspin IP, which is distributed MRAM, and what that gives us is instant-on, fast reads, and low power. In an FPGA chip today, on power-up, the device downloads its configuration from an external chip and then executes through the SRAM lookup tables distributed across the chip. Our memory replaces both that external configuration memory and the distributed SRAM. So now it's reconfigurable, and it's instant-on.
And it has fast reads. This is something that we are planning to demonstrate on our 90 nm CMOS in 2024, with the goal of demonstrating it on advanced nodes in 2024 and 2025 as well. Last but not least is supervised and unsupervised learning for neuromorphic networks. Again, our memory is ideally suited because it's non-volatile, it's instant-on, and it can sit at the edge for supervised and unsupervised learning. This is more in the exploratory phase, but the FPGA and distributed MRAM work is closer to bringing a solution to the market. This slide shows our roadmap today: PERSYST, UNISYS, as well as the distributed MRAM, or AgILYST, technology. For PERSYST, we have the Toggle MRAM that we've been selling since 2008, varying from 1 Mb all the way up to 16 Mb, and, if you stack it, 32 Mb.
The new products that we brought into production in 2022 and 2023 go from 8 Mb all the way up to 128 Mb with an xSPI interface, the fastest memory in this category; 400 MB/s is supported with the octal SPI interface, and this is on 28 nm CMOS through GlobalFoundries. The next product that we plan to bring to market is the 1 Gb enhanced NOR, or UNISYS. Again, it's going to be an xSPI interface at 200 MHz, and it'll be the fastest memory in its class, much faster than NOR Flash. This is targeted toward the automotive and industrial markets that I talked about earlier. AgILYST, or distributed MRAM, is the product I was talking about: we plan to demonstrate it on 90 nm in 2024 and then bring it to the commercial market in 2025 and later.
So, relevance to the FPGA market: it's a large and growing market, at about an 8.4% CAGR. You can see over here that it grows from $4 billion to greater than $6 billion in five years. Since Flash does not scale below 40 nm, we believe that we have a strong play. This is something we are working with FPGA vendors to design in, through either a chiplet or a system-on-chip type solution, using our STT-MRAM on 28 nm and our forward-looking 22 nm solution as well. How do we make this happen? Through a very experienced executive team. Anuj is our CFO, with experience from Intel as well as Thermo Fisher Scientific.
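The quoted FPGA market figures hang together arithmetically: $4 billion compounding at 8.4% annually for five years lands right around $6 billion (the five-year horizon comes from the slide; the rounding below is ours).

```python
# Compound annual growth: value_n = value_0 * (1 + CAGR)^n
start_usd_billion = 4.0
cagr = 0.084
years = 5

end_usd_billion = start_usd_billion * (1 + cagr) ** years
print(round(end_usd_billion, 2))  # 5.99 -> roughly the ">$6 billion" on the slide
```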
David Schrenk is our VP of Sales and Business Development, with strong experience from Intel. Amit is our VP of Back-End Operations, with experience from Marvell and Semtech. Khaldoun Barakat is our VP of Fab Operations and Quality, with extensive experience from Intel as well as Samsung. Kerry Nagel leads our technology R&D division and has been with Everspin from the very beginning; he transitioned with us from Motorola to Freescale and now to Everspin, so he's our technology powerhouse and the knowledge base behind all the technology that's been developed at Everspin Technologies. Yong Kim is our VP of Product Development and brings experience from Cypress and Infineon. So it's a proven team with strong experience in developing market-leading technology.
The last slide I just want to leave you with is that our revenue has been steadily growing from 2021 to 2022 to 2023. We started at $55 million when I took over, and now it's at $64 million in 2023, with pretty strong gross margins growing from the mid-50s all the way up to 60%; we ended with 58.4% for fiscal year 2023. And we keep generating cash over here: you can see that in 2023 we had record cash flow, with $11.7 million at the end of the year. So thank you for your attention, and I will take any questions.
Great. Thank you for the presentation. I think we only have one minute left, but for the one question I can fit into that minute: how should we look at the inflection point for your revenue in the medium term? Is this going to be AI-type applications driving increased content? Is there also some type of shrinking gap from a pricing standpoint? If you could help us understand what could be the next catalyst at the top line. And you've shown already some nice top-line growth over the past few years.
Sure, Tristan. In the very near term, we see the persistent memory solution that we brought to the market in 2022 and 2023, the low-density STT-MRAM; I think that's going to bring us revenue in the second half of 2024, going into 2025 and 2026. Looking forward, the UNISYS that we have in design, which we plan to tape out in 2024 and bring to volume production in 2025, you'll see that growing. That's a huge market, and NOR Flash does not scale, so I think it gives us a large opportunity over there. And finally, we are really excited about this distributed MRAM for AI inference.
I think that's going to be an inflection point in the 2026, 2027 time frame, where our customers will start valuing the fact that it's low power, instant-on, and zero standby current on the edge, where currently SRAM is a very leaky solution and is not non-volatile. That's where we have the edge, and I think that's where MRAM will play a strong role.
Great. Sanjeev, thank you very much for being with us today. Anuj as well. This concludes our presentation. Have a great day.
Yeah. Thank you, Tristan. Appreciate the opportunity.
Thank you, everyone.