Nautilus Biotechnology, Inc. (NAUT)

Morgan Stanley 22nd Annual Global Healthcare Conference

Sep 4, 2024

Yuko Oku
Morgan Stanley

Hi, my name is Yuko Oku, and I'm on the Life Science Tools and Diagnostics team here at Morgan Stanley. Before we begin, for important disclosures, please see the Morgan Stanley Research Disclosure website at www.morganstanley.com/researchdisclosures. If you have any questions, please reach out to your Morgan Stanley sales representative. It's my pleasure to host Nautilus today, and speaking on behalf of the company, we have CEO Sujal Patel. Thank you for joining us today.

Sujal Patel
CEO, Nautilus Biotechnology

Thank you, Yuko, and Morgan Stanley, for having us here.

Maybe to start, for those not as familiar with the story, where do you see Nautilus fitting within the evolving proteomics landscape?

Thanks for that, to kick us off here. Maybe I'll just start briefly with what proteomics is and why it's important. Proteomics is the study of proteins, and proteins are the functional units that do all the work in your body. Genomics, the study of DNA, is a field that has largely been conquered over the last couple of decades. But the DNA in your body, while it's important, really doesn't change much from the day you're born to the day you die; it really doesn't change at all. Proteins make up your cells. They change every day in response to what's going on in the environment around you, what you ate, your disease state, and because of that, they're the critically important molecule. Some 95% of our FDA-approved drugs target proteins.

Most molecular diagnostics target proteins, and so you really have to be able to measure proteins comprehensively and with high definition, good fidelity, if you really wanna understand biology. And today, the state-of-the-art is using a patchwork of technologies, which I'm sure you're gonna ask me about, and the end result is analyses are very incomplete. They're not highly sensitive, and because of that, there's a real gap in the biological insight that you have, and it really affects medical research in a wide variety of areas and fields. Eight years ago, Nautilus was founded with a very new approach to measure comprehensively virtually the entire proteome from any sample, from any organism.

It is a brand-new technique that was developed by my co-founder, Parag Mallick, who's Stanford faculty, and it is a product that requires a large amount of development. We've been at it for just under eight years at this point, and we still have about a year's worth of development remaining in our process as we get this product out into the marketplace. We've been public for about three years and change at this point, and that's a good place to start here.

Okay. Could you also elaborate on how Nautilus's proteomics platform is differentiated from peptide sequencing companies like QSI and Encodia, as well as proteomics platforms like Olink and SomaLogic, now part of Standard BioTools? And then how about Seer, which is a more complementary solution to mass spec?

So the proteomics landscape is a more complicated landscape than you have with genomics. You have what I would say are two ends of the spectrum. You have companies and products that are focused on measuring comprehensively as much as they can from a sample, and that's often equated to the discovery side of the spectrum. And then you have companies that are focused on more targeted analyses. They're focused on looking at, you know, a small number of proteins or even one or two proteins and understanding in a great amount of detail what's going on. And it's not just two polar ends; it's a spectrum, and there are companies that live across that spectrum. And so you kind of mentioned, you know, a few different public companies in there.

Let's talk about the existing players first. At the discovery end of the spectrum, the gold standard is using mass spectrometry-based proteomics. So that means using the mass spectrometer, which is an analytical instrument that's used in a wide variety of applications, but using it with a particular set of technologies and workflows that enable you to use it for protein analysis. That is Thermo Fisher, Danaher, Bruker, Agilent, and others. On the other end of the spectrum, on the targeted side, you have traditional assay companies. So you mentioned a few of them: SomaLogic, which is now part of Standard BioTools, Olink, Luminex, and others.

And on this end of the spectrum, you are generally using an antibody or an affinity reagent for every protein that you wanna look at, and you're using an assay that's not a lot different than an ELISA or a Western blot. Some of these assays have been around for the last couple of decades. And it is a spectrum, so there are various companies in different parts of it. You know, Quanterix, for example, is a name that sits on the clinical side of the targeted world, really focused on very specific molecules that have relevance largely in areas like neurology. Nautilus is a company that is really focused on two ends of the spectrum.

The primary use case for our platform, and the primary use case that we've been building for many, many years, is to be able to analyze comprehensively the entire proteome from any sample, from any organism. That means take the analyses that you can do today on a mass spec and make them much, much easier and more accessible, because it is extremely complex to use that workflow on a mass spec today, and deliver 10-30 times higher sensitivity, dynamic range, and coverage of the proteome than you can get with a traditional mass spec-based system. Now, that being said, you know, you've been on our earnings calls.

One of the things our platform also has the capability to do is analyze proteoforms in a targeted way, with a very, very deep analysis that isn't possible with any other assay. There are other assays that can look at a protein molecule and tell you, "Hey, it's modified with a phosphorylation at this site," or, "It has this methylation." But being able to map a single molecule and understand where all the modifications are is a use case that's not possible with existing technologies, and one that is enabled on our platform. And so we've been working with collaborators like Genentech, and MD Anderson, and Amgen, and others, on use cases where we can analyze proteoforms.

That's a use case that's on the targeted end of the spectrum, and I think that, you know, over time, as our business unfolds, we'll have business in both areas. But the predominant form where we're spending most of our money, most of our time and energy, is really on this discovery end of the spectrum.

Got it. So I wanted to start with your underlying technology, PrISM. Protein Identification by Short-epitope Mapping, or PrISM, is the underlying technology for the platform. Given that multiple affinity reagents are required to identify the molecule in PrISM, what are the ways that you can ensure that you're accurately identifying the protein? Particularly as you increase the number of cycles, what are the risks of inaccurate identification due to cross-reactivity or non-specific binding to the target? And what are the ways you're validating each of these affinity reagents and ultimately the identified targets?

Yeah. This is a really important differentiator for our platform relative to what else is out there, right? So, you mentioned in your last question a few different approaches. Let's compare and contrast them with what we're doing, and I'll answer your question as part of that. The mass spec is a technology that's not a lot different than the peptide sequencing approaches that are out there, and the only real commercial company in peptide sequencing is Quantum-Si, the public name. And these companies all take a protein molecule, they break it into tiny pieces, and they identify what the piece is. It's done in QSI by sequencing the peptide, that short fragment of the protein. The mass spec essentially weighs the peptide and infers what the sequence is.

Both of those measurements are highly incomplete, and so really what you're doing is you're looking at what all proteins might exist in the human body and trying to pattern match and say, "Well, what actually is this fragment that I'm looking at?" That sort of approach is very insensitive, so you have to see hundreds of those fragments at a minimum before you can make a call that a protein is in the sample. And if a protein is rare, being able to say, "Hey, there's more of this than this," accurately, is incredibly difficult to impossible to do. And that is a critical question, for example, in drug discovery, where you're looking at rare targets that might be sitting on the cell surface. Being able to quantify what the differences are is really critical.

Those approaches have a significant weakness in sensitivity and a significant weakness in being able to dig deeply into a sample. One of the things that you have to recognize with proteins is that a sample is very complex, so contrast this to the genome. An average human has about 37 trillion cells. Every one of those cells, for the most part, carries the same genome. For the protein content, each one of those cells has a million protein molecules on average, and every one of them is different. If you want to be able to understand what's in a standard cell lysate, which is a standard sample size you'd see in pharma, that sample has about ten billion protein molecules.

So if you want to be able to analyze that comprehensively, you need to operate at that scale. That's a scale that's three to four orders of magnitude more than what the mass spec can do, and that mass spec is even more than the peptide sequencing approaches that exist today. So that's one end of the spectrum that you can use to analyze proteins. On the other end, there are targeted assays. I have a molecule, EGFR. I make an antibody that knows what EGFR looks like. There's a binding event. Somehow I observe it. Those approaches are okay, but they also suffer from sensitivity challenges, and they have great challenges in terms of coverage because you have to have one or two antibodies for every protein you want to identify.

And being able to build 10,000 of those is something that's never going to happen. And these antibodies are also very cross-reactive. They say they're supposed to pick up EGFR, but they also pick up 17 other molecules that you didn't expect at varying propensities, and so it's very complicated to get accurate results. So that takes us to PrISM, which is our approach. Eight years ago, my co-founder, Parag, realized that the biggest weakness of these antibodies is this cross-reactivity, and that cross-reactivity leads to very fuzzy results that really hurt science. And Parag realized something, because he's a very unique animal: he's half computational scientist, half biochemist. I know that that's really trendy today, but Parag's been doing that for 25 years of his career.

He's got academic degrees in both, and his lab at Stanford sits at that intersection. And thinking at that intersection, he realized that in computing, we'll often identify something, like your location when you pull out your iPhone, by taking lots of different data points from satellites and from Wi-Fi access points and combining them together to get a shockingly precise measurement, which is what you have on your iPhone in most places (except maybe New York City). And he applied that same approach with PrISM to looking at a single molecule and identifying it. We use antibodies, like you mentioned, and we'll use roughly 300 antibodies to identify almost all of the proteome when we get to market.

But we don't take one data point and say, "Oh, that's an EGFR molecule." We introduce hundreds of antibodies, which are not there to identify the molecule but to tell us what the characteristics of the molecule are. And we combine those data points together to get to a shockingly precise, accurate identification of every molecule, or virtually every molecule, in the proteome. And we do that by having, you know, 10-20 different touches on that molecule from different antibodies that it binds. So we don't have one measurement that says this is EGFR; we have 10-20 different measurements that all prove that that molecule is what it is, and we do that in parallel in our instrument for 10 billion molecules at a time.

So ultimately, what that yields is the most precise, most sensitive analysis of any of the approaches that are in the market today.
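To make that concrete, here is a minimal sketch in Python of the decoding idea: an invented three-protein "proteome," made-up short-epitope probes, and a combined binding pattern that pins down a molecule's identity even though each probe alone is cross-reactive. It illustrates the principle only; it is not Nautilus's actual algorithm, which also has to handle binding noise probabilistically.

```python
# Minimal sketch (not Nautilus's algorithm): how many individually cross-reactive
# probe measurements combine into a confident protein identification.
# The tiny "proteome" and the probe epitopes below are invented for illustration.

PROTEOME = {
    "PROT_A": "MKTAGALQRQVNQWGP",
    "PROT_B": "MKLGALCLAWGPRRRS",
    "PROT_C": "MALGALVNQCLAKTAG",
}

# Each probe recognizes one short (three amino acid) epitope and may bind many
# different proteins; that is the cross-reactivity problem.
PROBES = ["GAL", "CLA", "VNQ", "KTA", "WGP", "RRR", "QRQ"]

def fingerprint(sequence, probes):
    """Binary landing pattern: which probe epitopes occur in this sequence."""
    return tuple(epitope in sequence for epitope in probes)

# Expected pattern for every candidate protein in the reference set.
EXPECTED = {name: fingerprint(seq, PROBES) for name, seq in PROTEOME.items()}

def identify(observed_pattern):
    """Candidates whose expected pattern matches the observed one."""
    return [name for name, pattern in EXPECTED.items() if pattern == observed_pattern]

# A single probe alone is ambiguous: "GAL" occurs in all three proteins here.
print("Proteins containing GAL:", [n for n, s in PROTEOME.items() if "GAL" in s])

# But the combined pattern across all probes pins down a single molecule's identity.
observed = fingerprint(PROTEOME["PROT_B"], PROBES)   # simulate an unknown molecule
print("Combined-pattern identification:", identify(observed))   # ['PROT_B']
```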

Okay. And given each affinity reagent recognizes short epitopes on the protein, how do you arrive at 200-300 cycles needed to identify greater than 90% of the proteome? Is there any way to condense that number of cycles required and shorten the runtime by running more than one affinity reagent per cycle?

That's a great question. So you mentioned our affinity reagents target short epitopes. What does that mean? If you unpack that and convert it to English, a protein is a sequence of amino acids. There are 20 different amino acids, and if you want to identify the protein, one approach is to look at short segments of it and figure out which short segments might or might not be in a particular protein. So if I unfurl my EGFR molecule and I have an antibody that recognizes, call it, three-amino-acid sequences, which is what we generally raise our antibodies to do, you can go and say, "Hey, EGFR has this sequence of YLS in it," which are three of the amino acids, and it doesn't have SLL in it.

And by looking at what it has and what it doesn't, we can, using computational techniques, determine what the molecule is. And each of those sequences is not unique to EGFR; it'll be present in thousands of different proteins. So one data point isn't gonna tell us what a molecule is, but we combine 10, 20 of those data points, and you're able to tell with shocking precision what that molecule is. So you asked, how do we know that it's 200-300?

We have done computational simulations using the algorithms that we have developed over the course of the last 7 or 8 years now, and we can tell you, based on the type of probes that we have and what we're developing in our antibody development pipeline, what we think the outcome will be in terms of the probe set. There are two parts to the question you asked. You asked, can you put two probes in, or more than one probe, to shorten the number of cycles, because the cycle time is related to how long it takes the instrument to run one sample? And the answer is yes on that front. In fact, the commercial instrument that we will bring to market in 2025 runs two antibodies per cycle, with two different measurements occurring at the same time.

When we talk about 300 different affinity reagents, we're really talking about 150 cycles. The second part of the question you asked was, can you shrink the number of antibodies? And the answer to that question is, in the long run, yes, we can, because we can develop a very optimal probe set that allows us to have fewer probes. That being said, there's no work going into optimizing the probe set for V1 here. I would expect that when we launch a V1, it's gonna be somewhere around 300 or even north of 300, maybe 320, 340, 360 probes, somewhere in that range. But, you know, that puts the cycle time in a pretty tight range, and so from a customer standpoint, they won't see any of that complexity.
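As a quick check on those numbers, the arithmetic is simple. The probe counts below are the figures just mentioned in the conversation, with two antibodies run per cycle; nothing beyond that is assumed.

```python
# Quick arithmetic on the figures mentioned above: roughly 300-360 affinity
# reagents, with the commercial instrument running two antibodies per cycle.
PROBES_PER_CYCLE = 2

for total_probes in (300, 320, 340, 360):
    cycles = total_probes // PROBES_PER_CYCLE
    print(f"{total_probes} probes -> {cycles} cycles")
# 300 probes -> 150 cycles ... 360 probes -> 180 cycles
```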

Got it. And then given PrISM requires a number of stripping and washing steps between each cycle, are you effectively able to wash away the binding reagent from the prior cycle without washing away the proteins bound to the scaffold? How many cycles can you successfully run today?

Yeah, that's a great question. Just for some context, in any of these types of instruments, whether it be a genomic sequencer or the type of assay that we're trying to run, if you're going to have multiple cycles, meaning you're going to introduce more than one reagent into the system, you always have to deal with the fact that every cycle has some loss of signal or some degradation in signal that's occurring.

You know, it has been five, six years of long, hard work for us to get the conditions right: to remove all of the antibodies from our flow cell, to make sure that our protein molecules are staying stuck on our chip and are still in our flow cell, to make sure that we don't end up with background fluorophores that are creating a background signal that washes out the actual binding events. You know, if you look at where the assay is today, after a huge amount of work, the assay is very stable through 180 cycles. What does very stable mean? It means that the background noise doesn't increase appreciably over the course of 180 cycles.

So 180 of those cycles, times two reagents per cycle, would be 360 affinity reagents. We also have great stability where our SNAPs stay stuck, which is the scaffold that we use to attach our proteins. Our proteins stay stuck, and they stay accessible for analyses. The chips, the flow cells, all of that system lasts for one to two days, which means that it can survive for the term it needs to for an analysis. And all of the pieces of the system are functioning well, meaning that the signal loss that you see is pretty minor and very much within our spec.

Great. And then, you know, you have this computational aspect to the platform. So what are the storage and processing requirements for the platform, and what are your plans to make them more manageable? Will the product utilize cloud-based or local processing? And if you're using cloud-based processing, have you started working with vendors?

Yes. So, our system is meant to be incredibly simple for the biologists and the scientists to use. Incredibly simple. The mass spec is an incredibly complicated instrument. There's the mass spec, the time-of-flight tube, there's the UPLC, the HPLC, and so on. If you go and pull one up, it's got gases coming in, it requires crazy power. If you have an Astral, you probably need to reinforce the floor 'cause it's so heavy. There's a computer hanging off of it. Our instrument is a benchtop instrument. It's simple. There's a compute unit that's hidden inside of the instrument. It's run by a touch panel and a very nice user interface on the front. But you're right, there's a lot of compute power and storage requirements and so forth that are necessary. But we've made that incredibly easy for the customer to manage.

So when the instrument does a run, you go onto the touch panel, you start a run, and it does its thing. All of the large image processing is done by that onboard compute unit that's in the system. All of the resulting data is uploaded to the cloud, where we have a service that essentially does the analysis, does some basic visualization, and gets it into a customer portal, where literally they can just open up a web browser and look at their analysis a few hours after the run has been completed. And so we make that process really easy for the customer, and we provide it as what really is Software as a Service. The customer pays for that service, and we take care of all the analysis on the back end in the cloud.

Okay. And how about storage requirements? Is there incremental cost to the customer for data storage, or would that be included in the...

Yep, that's included as well. Now, you know, there's one tiny asterisk there that we have to work through, which is that many of the types of analyses that we can do require significant computing power, and the storage does equate to cost for us over some period of time. And so, at some point, there may need to be a tier based on how much the customer uses it. But I think that that model will be pretty simple, and I think the most important thing is not requiring the customer to have significant on-premise infrastructure, which, you know, has been a requirement through the genomics era for a long time.

You've begun to run samples on various model systems of various complexity. Could you provide examples of the model systems that you run through the platform, and what are the key learnings from the process so far?

Yeah, that's a great question. So, for us, what we analyze has been evolving over a long period of time. When we started with our very first analyses on our platform, with our assay, as we were building this up many years ago, we were just looking at our SNAP particle with a peptide attached to it, which could just be a three- or six-amino-acid peptide. From there, we moved to longer peptides, where we would be looking for our affinity reagents to find a three-amino-acid sequence in the middle of the peptide. From there, we moved to proteins as the end target of what we're trying to analyze, and then those samples have been getting more and more complex.

At previous HUPO conferences, the Human Proteome Organization conferences, we've demonstrated deconvolution of simple mixes of proteins within a sample, and we're pushing closer and closer to what is a very big milestone for us that is still upcoming, which is being able to analyze a complex sample, like cell lysate, where there are thousands of different protein molecules in it, and being able to identify some number of the proteins that are within it. That's on the discovery end of the spectrum, the broad-scale end, where we're trying to identify all the proteins in a sample. On the targeted side, we've been doing interesting studies looking at different protein molecules.

For example, the tau protein, which is a key biomarker implicated in a large number of neurological disorders, Alzheimer's disease, for example. You know, in that mode of our platform, we've already been analyzing complex tau samples. We've been pulling down the tau, and we've been looking at its proteoform landscape. I would say, you know, stay tuned for our next conferences as we continue to move up the complexity there and move towards real biological samples.

One of the advantages of using a single molecule approach is the ability to simply count the proteins to determine the concentration. Could you provide context around the significance of this feature and how other proteomics platforms quantify proteins in a sample?

Yeah, so your question is getting at what the sensitivity of a platform is, and why sensitivity is important. If you have 37 trillion cells in your body and something's malfunctioning, the things that are wrong in the cell are often very rare. You might only have one, five, ten copies of something that's going wrong on a cell surface at an early stage of disease progression. And so you want to be able to look at the sample in a great deal of detail so you can see what those differences are. If you only have five, ten copies, even 50 or 100 copies of something, there are really no traditional assays that can accurately identify what those differences are in a full, comprehensive analysis of a sample.

And so you said, "Hey, your platform is single molecule." What single molecule means is that I can take 100 to 1,000 cells, crush them up into a lysate, put them on our instrument, and see the modifications on those single molecules. Meaning that if one molecule has a particular change, I'll be able to pick it out and say, "Hey, there's a particular change here." But there are two things that are important to realize when you're a system that operates on single molecules. One is, do you have the sensitivity to see a single-molecule difference, a single molecule of one particular protein? And the other question is, do you look at a large enough population of molecules that it's likely that that rare thing happened to be in your sample?

That's called the dynamic range of the analysis. And so if you're a competing solution that does peptide sequencing and you're only looking at 2 million peptides per run, that's the equivalent of some 200,000 molecules. 200,000 molecules versus 10 billion molecules in a typical cell lysate, that's a huge impedance mismatch. Even if you could see a single molecule, you'll never see it, because you're looking at a very small subset of the sample. And so there are two parts of our system that yield real biological relevance for the customer. One is that we can look at single molecules and identify them. Two is that we can look at 10 billion molecules per run of our instrument, and that ten billion is no mistake.

We were told by pharma eight years ago that the ideal would be for an instrument to analyze 10 billion molecules per run, so it matches the number of molecules in a typical cell lysate. So the customer has full control over how many molecules they're gonna run per sample, and that could be up to all the molecules that are in the sample. And so with those two things, what that means for the customer is that you can dig way deeper into a sample, and you can find the rare things that differentiate healthy and sick cells, the biomarkers that are potentially the next great drug or diagnostic target. So for our end customers, the scientists, that is a really, really critical differentiator for us and one that sets us apart significantly from all the other approaches in discovery proteomics today.
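To make the sampling argument concrete, here is a rough back-of-the-envelope sketch. The 10-billion-molecule lysate and the two observation depths (200,000 versus 10 billion molecules) are the figures from the conversation; the 100-copy abundance of the rare protein is a hypothetical illustration, not a measured number.

```python
# Back-of-the-envelope sketch of the dynamic-range argument above. The sample
# size and the two observation depths come from the conversation; the 100-copy
# rare-protein abundance is a hypothetical illustration.

SAMPLE_MOLECULES = 10_000_000_000   # ~10 billion protein molecules in a cell lysate
RARE_COPIES = 100                   # hypothetical rare protein: 100 copies in the sample

def expected_observations(molecules_examined):
    """Expected copies of the rare protein seen when sampling this many
    molecules uniformly at random from the lysate."""
    return RARE_COPIES * molecules_examined / SAMPLE_MOLECULES

def chance_of_seeing_at_least_one(molecules_examined):
    """Probability of seeing at least one copy, treating each copy as an
    independent draw (a fine approximation when copies << sample size)."""
    p_miss_one_copy = 1 - molecules_examined / SAMPLE_MOLECULES
    return 1 - p_miss_one_copy ** RARE_COPIES

for depth in (200_000, 10_000_000_000):
    print(f"examining {depth:>14,} molecules: "
          f"expected copies seen = {expected_observations(depth):.6f}, "
          f"P(see at least one) = {chance_of_seeing_at_least_one(depth):.6f}")
```

At a depth of 200,000 molecules the expected number of observed copies is about 0.002, so the rare protein is essentially invisible; at the full 10 billion molecules every copy is in view, which is the point being made about dynamic range.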

Can you also walk us through the sample prep workflow required for your platform? How clean or pure does the sample need to be in order to minimize that background?

Yeah. So, how pure does the sample need to be? Let's answer the question in two ways. The sample prep protocol is meant to be pretty simple. It's meant to be kind of like the sample prep that you would use for an Illumina sequencer, and that means simple pipetting steps, simple incubations and mixing and that sort of thing. It's nothing that a pair of research hands can't do in a lab in a couple of hours. The primary steps, which deal with the fact that whatever sample has proteins in it is gonna have lots of other junk around them, are the traditional steps that exist today that extract all the protein molecules and get you a clean sample.

Up to there, that piece of the sample prep doesn't look a lot different than what you do for the mass spec. But from there, it diverges quite a bit, right? The mass spec will break the proteins into peptides, ionize them, and shoot them into the mass spec. For us, you would incubate those proteins with one of our reagents that attaches them to our SNAPs, which enables them to be deposited onto our chip so the assay can be run. But all those steps are meant to be very simple, right? Remember, that goes back to the original premise. The entire system, from sample prep through instrument, all the way to analysis in the cloud, is meant to be a turnkey system that can be used by any biologist, any biochemist in the entire world in an easy way.

We think that that's necessary to be able to really bring proteomics to the masses, which it is not today. It is really a niche technology that's used by those that are in the know, and that's what we're trying to change.

And we've touched on proteoforms a few times in this conversation. Following the enthusiastic reception at US HUPO, you're heightening your focus on proteoform development activities. Could you provide an overview of how the same proteomics platform can also be leveraged to perform a broad but also a targeted proteoform analysis? And do you anticipate you would be releasing kits to complement the broad-scale proteomic approach and also the proteoform approach? And do you also envision potential for customers to use their own antibodies over time to look at various proteoforms?

Yeah. Let me back up and talk about what a proteoform is, because if I polled my investor base, I don't think any of them know what a proteoform is. Let's back up and just kind of do this in story form here. When the Human Genome Project undertook the mission of figuring out what all the genes are in a human, it was pretty well accepted that there were gonna be at least 100,000 genes there, because the human body is incredibly complex, and we're gonna need lots of different protein molecules to deal with that complexity. And in the end, we found 20,000 genes, which is not a lot different than a banana.

And so you might be asking, "Well, where's the complexity that makes a human a human?" That complexity lives at the protein level, and it lives in the fact that each protein molecule can have hundreds of different modifications on it that happen after the DNA is transcribed to RNA and translated into protein. There are different enzymes and kinases that will go and modify that protein, and those modifications have a profound impact on how the protein functions in a cell and where it's distributed. Is it in the nucleus? Is it not? They have a profound impact on the degradation of that protein. And so if you don't understand the modifications, you have a very incomplete picture of what's really going on. And that's especially true for customers that have high-value targets, like tau, for example, in neurology applications.

You really wanna understand, in specific use cases, exactly what the modification landscape of a protein is. And today's technologies lose a ton of the information about the molecule, because what you end up doing is breaking this tau molecule into different peptides and then looking at which peptides might have modifications. That is an approach that loses the information about where multiple modifications sat on a single molecule. And we have an approach that can give you rapid access to analyzing that proteoform landscape for any molecule that has existing affinity reagents from other companies. We're an open platform. We can use affinity reagents from Abcam and other companies like them in order to do an analysis in a way that is different and value-added relative to anything else that's out there.

Those are the types of analyses that we've been doing, and we think there's a ton of value to add in helping largely our pharma and DX customers look at these different forms of proteins and understand what the modification landscape is, and whether those modifications are potentially indicative of therapeutic response or disease state, or whether they're potentially a drug target in and of themselves. We think there's a lot of gold there, and we intend to dig in more there because customers are asking us to do more and more.
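To illustrate the distinction being drawn, here is a minimal sketch using a hypothetical data model (not Nautilus's): each intact molecule is recorded with the set of modifications observed on it, so modifications that co-occur on the same molecule are preserved, whereas a peptide-level tally only keeps per-site frequencies. The tau-like sites and counts below are invented for illustration.

```python
# Minimal sketch (hypothetical data model, not Nautilus's) of why single-molecule
# proteoform mapping preserves information that peptide-level analysis loses.
from collections import Counter

# Hypothetical single-molecule observations of a tau-like protein:
# each molecule is a frozenset of (site, modification) pairs seen on it.
molecules = [
    frozenset({("S202", "phospho"), ("T205", "phospho")}),
    frozenset({("S202", "phospho")}),
    frozenset({("T205", "phospho")}),
    frozenset({("S202", "phospho"), ("T205", "phospho")}),
    frozenset(),  # unmodified molecule
]

# Single-molecule view: count distinct proteoforms (co-occurrence preserved).
proteoform_counts = Counter(molecules)
for form, count in proteoform_counts.items():
    label = " + ".join(f"{mod}@{site}" for site, mod in sorted(form)) or "unmodified"
    print(f"{count} molecule(s): {label}")

# Peptide-level view: only per-site totals survive; you can no longer tell
# whether the two phospho marks occurred on the same molecule or on different ones.
site_counts = Counter(site for form in molecules for site, _ in form)
print("Per-site totals (co-occurrence lost):", dict(site_counts))
```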

Given that confidence in the protein identification increases with the number of affinity reagents flowed through the platform, it's very likely that development progress accelerates rapidly once you hit a stride. On the other hand, does it also decrease the visibility you have into the launch timeline? How confident are you in that 2025 launch timeline that you laid out?

Yeah. So I think what you're referring to is that I have often told the analyst community and our investors that by the time we get to a couple thousand proteins, we're most of the way through building the affinity reagents we need to get to the finish line and ship a platform. And that's because, unlike other platforms where I have one antibody, I see one protein; I have ten, I see ten, we use a computational approach that probes a molecule and gathers information. Until I've got roughly half of the antibodies done, I won't see any meaningful number of proteins being identified out of a lysate. But after that halfway mark, there's an exponential curve in terms of how many I can see.

And so by the time I, you know, come to the Morgan Stanley conference and say, "Hey, we've got 500, 1,000, 2,000 proteins," by that point, we have gotten through the vast majority of the development necessary on the affinity reagent side. And it also means that we've figured out the recipe exactly, right? In the past, you know, we have had to extend our timeline to first commercial launch because building these affinity reagents is complex.
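The nonlinearity described here can be reproduced with a toy simulation: random protein sequences, random trimer probes, and a count of how many proteins end up with a unique binding fingerprint as the probe set grows. The proteome, probe design, and numbers below are invented and noise-free, so they are not Nautilus's models, but the shape of the curve, near zero until roughly half the probes exist and then climbing steeply, is the point.

```python
# Illustrative simulation (toy assumptions, not Nautilus data): why few proteins
# are uniquely identifiable until a large fraction of the probe set exists, and
# why coverage then climbs steeply. Proteome = random sequences; probes = trimers.
import random
from itertools import product

random.seed(0)
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

N_PROTEINS, PROTEIN_LEN, N_PROBES = 1000, 300, 300
proteome = ["".join(random.choices(AMINO_ACIDS, k=PROTEIN_LEN)) for _ in range(N_PROTEINS)]
all_trimers = ["".join(t) for t in product(AMINO_ACIDS, repeat=3)]
probes = random.sample(all_trimers, N_PROBES)

def unique_ids(n_probes_built):
    """How many proteins have a binding fingerprint shared by no other protein,
    using only the first n_probes_built probes."""
    subset = probes[:n_probes_built]
    fingerprints = [tuple(p in seq for p in subset) for seq in proteome]
    counts = {}
    for fp in fingerprints:
        counts[fp] = counts.get(fp, 0) + 1
    return sum(1 for fp in fingerprints if counts[fp] == 1)

for k in range(0, N_PROBES + 1, 50):
    print(f"{k:3d} probes built -> {unique_ids(k):4d} / {N_PROTEINS} proteins uniquely identified")
```

Running it shows essentially no unique identifications with a quarter of the probes, a steep ramp around the halfway mark, and near-complete coverage well before the full set, which mirrors the qualitative argument above.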

We've got to develop an antibody that recognizes a short epitope, that has the right types of specifications and kinetics so that it can operate on our platform, and we have to be able to analyze what the molecule binds to with pretty good coverage of all the different things that it does, so that we have a good idea, when it gets on the platform, what to expect out of it. And so we are still in the process of building those. By the time I come with that milestone, I'm halfway there, and I'll have very good visibility into the remaining timeline. We've told investors that that's an opportunity for us, likely, to do an analyst day, perhaps, and talk to the investment community in much finer detail about what the finish line is, what the final specifications of the product are, and so forth.

Okay. And then you've also been prudent in your spending and recently extended your cash runway into the second half of 2026.

Yeah.

Including the cost of development and the initial build-out of the commercial organization ahead of launch. Please remind us of your cash position and provide examples of how you're managing costs and improving efficiencies.

Great, thanks for that. That's a good setup to the last question here. We ended the last reporting period with $233 million of cash. That's 51% of the cash that we've raised since the inception of the company nearly eight years ago. For me, I take cash and running an efficient business very seriously. You know that I'm a CEO from the tech world, which is where I spent the first part of my career, and as a public company CEO, we got that company to a positive 20% non-GAAP operating margin while we were growing at nearly triple digits, high double digits, year over year on the top line, and doing it profitably.

That can only be done by running a really lean shop, acting like a startup, and really going and scrutinizing every dollar that you spend and figuring out what that investment is. We've done a very serious job of that. We spent $49 million of cash last year off of our balance sheet. The year before that, we spent $48 million, and you know, as we've talked about, we're going to spend more this year, but still in a very disciplined way. You know, Anna Mowry, who's sitting in our audience here, is our CFO at Nautilus. She was with me on the last journey as well. She's super familiar with how we run a tight ship, and we're gonna continue to do that so that we take every investor dollar and make sure it counts.

Great. Well, thank you so much.

Yeah.

For joining us today.

Thank you, Yuko. All right, thank you.
