I'm Dan Brennan, Life Science Tools and Diagnostics Analyst here at TD Cowen's 44th Annual Health Care Conference. Really pleased to be joined here on the stage by management of Nautilus. We have Sujal Patel, who's co-founder and CEO, and we have Anna Mowry in the audience, Chief Financial Officer. So first off, Sujal, welcome.
Thank you, Dan, and appreciate the invite to the conference.
Excellent. So before we dig in, like, to the details and kind of rip through the story here, maybe just high level: you know, you guys just reported. Maybe just point out some of the progress that you think you made in 2023, and kind of some high-level thoughts as we look ahead to 2024.
Yeah, sounds good. I think that, maybe before I take that question, I'll just back up to the 30,000-foot level.
Sure.
I mean, Nautilus is a 7-year-old company, and we are focused on bringing a brand new approach to the market that will enable you to measure the proteome comprehensively from any sample, from any organism.
Yeah.
There's a huge, completely unmet need out there. Traditional techniques are largely based on the gold-standard mass spectrometer. The workflow's complicated. It leads to very poor coverage of what proteins you can see. Dynamic range and sensitivity are really poor. And the approach that we are taking is one that would enable you to really democratize access to the proteome, much like Illumina did for the genome. And that's a really important space. Most of our drug targets, 95% of them, are proteins. Most of today's diagnostics still target proteins. And so it's a really impactful space for our future customers: drug development companies, DX companies, academic and nonprofit research organizations engaged in basic science research and translational research. So big market. We are, as a, you know, public name, one of the few pre-revenue companies. We are still building that first transformational product.
We are, as I mentioned, more than 7 years into the development. What we really provided on our earnings call, which was 2 weeks ago, was an update on our product development timeline, and we talked a little bit about our cash position and expectations. And I think the big news there is that we do anticipate launching our platform, which means launching instruments, consumables, and software, in 2025. We've made a lot of progress over the course of the last couple of years, and even in the first couple of months here we've continued to make good progress toward that goal. And that was really the headline message from a product perspective. The other thing as well, which is on top of most investors' minds in today's capital markets, is the balance sheet.
You know, we've got a strong balance sheet. We have $264 million of cash on the balance sheet. That's north of 60% of the dollars that we've raised since the inception of the company. And so it's a significant amount on the balance sheet, especially relative to our burn, which for each of the last 2 years has been just shy of $50 million. And so even as we continued investing last year, growing our headcount by roughly 20%, we still largely kept the cash burn flat.
Could you speak to the management team, and yourself, and kind of, you know, the backgrounds, and maybe how those backgrounds really lend themselves to the opportunity you're going after here?
Yeah, that's a, it's a really interesting question. If you think about what we're building, this isn't just a company that has biochemists. It's not a company that has just software engineers or AI engineers, which is the topical thing this week, right? We're a company where there's biologists and chemists. There's electrical engineers, mechanical engineers, single-molecule biophysicists, semiconductor engineers, software machine learning, all these disciplines inside of the company. Because what we're building is an integrated end-to-end instrument. It's got a lot of different pieces, a lot of parts. Our management team reflects a lot of those types of characteristics. We have a number of team members that are on my staff that came out of the tech world. For those that don't know my background, I was founder and CEO of the company in a tech space called Isilon, which was founded in January of 2001.
That was a great journey. In 2006, we took it public. We grew it to just shy of a $100 million bookings quarter, the last quarter before we sold it for $2.6 billion at the end of 2010. You know, a number of management team members were with me on that journey: Anna Mowry, our CFO, our head of operations, our head of people. They are joined by a bunch of folks who really came out of the biggest winners in the DX and tools space. So, for example, our Chief Business Officer was at Illumina for about a decade and a half across a wide range of commercial roles.
Our head of product, this guy named Subra Sankar, Subra was the person who was in charge of shipping the first NGS machine at Solexa, which is the company Illumina purchased in 2007 to get into the genomic space. He shipped the next 6, 7 platforms for Illumina after. So it's really a unique combination of folks that either have been with me for a long time or folks who have really done substantial things in the market.
Mm-hmm. Right. So I think you started off by saying, like, what you're doing is fundamentally different. And, you know, we get to the earnings calls, and we get an update on kind of what's happening. But maybe even kind of zooming out further, 'cause I think, you know, you've got a handful of proteomics publicly traded tools companies that are out there. You know, the affinity-based tools. You know, you've got the mass spec players, like you said, a lot of genomics technology. But what you're doing is fundamentally different, right? And sometimes it's hard for us to wrap our head around just, like, you know, what, what, what, you know, how it's going to work, and ultimately, if it does work, what the impact will be on the market.
Maybe just staying with the technology, if you don't mind, maybe just kind of give us a flavor for the key basics to the technology and kind of, you know, how de-risked do you think it is today, at this point in the development process?
Yeah, I mean, I think the first bit of context I would add for investors is this: if you look at, you know, your wider coverage universe, for example, there are some incredible franchises in there, Illumina, Thermo Fisher, kind of in the world that we're in, and lots of really significant newer companies that were formed, 10x, Natera, Exact, others. But there are also a lot of companies which are working on sort of incremental innovation to what's there. And what you'll find in the marketplace in general is that those companies that are focused on incremental innovation don't have the opportunity to sell in a cost-effective sales cycle and to really build substantial businesses. And we, as a company, have been very focused on building something that we think is absolutely transformational in proteomics.
You don't have to just hear it from us. We have, at this point, briefed hundreds of potential customers. We have, you know, at least 50 that are very close to us in various ways. Our product decisions, our specifications, our strategy in how we bring the scientific community along on our journey of developing this product: all of that is informed by those conversations. What we hear from those potential customers is that the significant potential of what we're building is really exciting and that the specifications for that product are really dead on for what they need.
And so from our perspective, we, as a company, have just been really heads down, laser-focused on development, trying really hard to not get distracted, not spend extra money, get to a place where we can get the product out in the marketplace where we expect to be able to ramp revenue pretty rapidly.
Kind of, what makes your approach? I mean, there are other single-molecule protein sequencers that are out there, in development. What makes your approach different?
Yep.
Yeah, if you speak to that.
Yeah. I'll first correct you: there may be other companies that claim in their marketing that they have single-molecule protein sequencers, but there are no single-molecule protein sequencers on the market. There are companies that can deal with peptides, which are small fragments of proteins. Some sequence them, like some of the new entrants in the market. Some will weigh them, which is what mass spec essentially does, and infer the sequence based on the weight.
But all of those approaches are fundamentally very limited because they're peptide-based, which means you lose at least two orders of magnitude of sensitivity right off the top, meaning that if you have a plasma sample and you're looking for a particular protein of interest that differentiates healthy and sick, you have to see at least 100 copies before you can say, "Hey, definitively, there's a difference here, and it's quantifiable." The second thing is that all of these other approaches analyze very, very few molecules within a sample. The human body is an exceptionally complex animal, right? I mean, the average human being is 37 trillion cells. A plasma sample, of which there are tens of thousands inside of every drug development program: the typical sample is 100-1,000 cells. That's 10 billion protein molecules on average in one sample.
So if you really want to be able to look at those samples in depth to understand the cellular mechanics that are underlying disease mechanism, you have to be able to dig all the way down. We set out with a very bold proposition 7 years ago to be able to analyze 10 billion molecules to match the pharma sample of 100-1,000 cells. That is something along the lines of 4-5 orders of magnitude more than anything else out there, whether it's a new entrant or Thermo Fisher with the Astral. Ultimately, what that means is that if we achieve our goal of getting to comprehensive proteomic coverage, we will do so digging far, far deeper into these samples, which for customers means that you're going to have better toxicity models for a new drug.
You're going to understand the cellular mechanics of the compound that you think is a new therapeutic. You're going to be able to dig and find the biomarkers that are going to be the differences. Those are the big use cases that we think we enable that are highly differentiated from the others that are out there.
So, so you've had, you know, on the last call, timelines got, you know, extended. Similarly, like last year, there was one small extension. You guys felt it was by far and away the right strategy, for a variety of reasons. Maybe just speak to again, like, what are the key kind of plans, if you will, about the technology that, you know, that investors can kind of look at in terms of, "Here are the 3 or 4 things that we're really tracking to see where you are," and the decision to kind of push things out, like, you know, why did that occur, as we think about those different factors?
Yeah. I think to kind of get to answering that question, I'll back up and just kind of talk about the different pillars of our technology. In order to get this instrument to work, there are a number of key significant development areas that we've been focused on. And they're quite complex, right? The instrument's going to be easy to use, sample in, answer out. But the guts of it are, are quite complex. The first piece is that underlying our approach is a requirement that we immobilize single molecules. And we do that basically on a giant chessboard, on a semiconductor chip, and then package that into a flow cell. And that's part of our consumables. We immobilize 10 billion molecules onto those chips for one run of the instrument.
We lay the proteins out in a manner that enables you to have very high occupancy, meaning that you can be virtually guaranteed that there is one protein on each of those spots so that you can analyze it. 'Cause if there's 0 or there's more than one, it's a dead spot. You can't use it. So that's one core piece of technology. The second core piece is that we have to build something along the lines of 300 proprietary antibodies that are a different class of antibodies that allow us to recognize just very small characteristics of proteins, so that we can combine those data points together to come up with a very high-confidence identification of every molecule on the chip.
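For context on why high single-molecule occupancy is the hard part: if molecules were deposited purely at random, the count landing on each spot would be roughly Poisson-distributed, which caps the fraction of usable single-occupancy spots at about 37%. That ceiling, not any Nautilus specification, is what the minimal sketch below computes; it is a standard textbook model, offered only to illustrate the motivation for an engineered deposition scheme.

```python
import math

def single_occupancy(lam: float) -> float:
    """P(a spot holds exactly one molecule) if per-spot counts are Poisson(lam)."""
    return lam * math.exp(-lam)

# Sweep the mean loading density: the single-occupancy fraction peaks at
# lam = 1, where it equals 1/e, roughly 36.8% of spots.
best_lam = max((lam / 100 for lam in range(1, 401)), key=single_occupancy)
```

So even a perfectly tuned random process would waste nearly two-thirds of the array, which is why a deposition method that virtually guarantees one protein per spot is such a differentiator.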
Mm-hmm.
We have to take each of those antibodies and cycle them through our chip and flow cell one at a time, or two at a time, actually, in our case, and essentially take a readout. That's the process that occurs during a day. From there, there has to be an instrument that's capable of doing that at extremely high speed. Think about it: the raw images coming off of this instrument are tens of terabytes per day. That's 100 times more than you would have seen in the genomics era in terms of data volumes. It's a huge amount that has to go through that instrument. The last piece is the software and bioinformatics, the data science, the machine learning that enables you to turn all that information into accurate protein quantification. Now, you mentioned, Dan, that, you know, we are a little bit delayed.
The nice thing about being a little bit delayed is that 3 of those 4 pillars have had a lot of time to mature. At HUPO, for example, which is the big human proteome show next week, we will show a number of pieces of data on different posters that clearly demonstrate that the hyperdense single-molecule array is working great. We have an instrument, and it has more airtime on it than we would have expected because its development's more mature at this point. We're using it with some of our collaborators, like Genentech and Amgen, running their experiments on that instrument, testing it out. And the software's had more time to mature. So, the answer to "Well, where's the area that's slowing us down?":
It's the building of those affinity reagents, that specific class of affinity reagents: getting them onto our platform, qualified, and understanding exactly what the mix is that allows us to do all that multi-cycling. That's the piece that's taking a little bit longer. And one of the things I mentioned to a few investors today is, you know, it was as recent as last year that we were making process changes in how we build those antibodies to improve their efficacy on the platform. We were dealing with some of the issues around the various differences in how these antibodies bind and how they aggregate, just working through the hard issues that you have to iterate through one after the other. And, you know, we feel good about the timeline that we just laid out of being able to launch next year.
You know, a lot of folks have asked us, "Hey, can you give us more specificity?" And I think we'll have more specificity as we knock down more of these challenges and build more of those antibodies through the year. But today, we feel good about it but don't have a lot more specificity.
In terms of building the antibodies yourself versus relying on one of the leading antibody vendors, I guess, obviously, you'd have a lot more control. But what's kind of unique about the antibodies that you're building? Was there ever a decision to, you know, leverage a third party to do it?
Yeah. So, everyone else looks for antibodies that are specific to a molecule. I have an EGFR molecule; I build an antibody that recognizes EGFR, and we're off to the races. That's ELISA technology that's been around for decades at this point. The problem is antibodies are never specific. They're never just going to bind EGFR. They always have off-target effects; they bind to these other three proteins. And so if you look at the last generation of technologies, like Olink, they use pairs of antibodies to different parts of the molecule to improve specificity. That makes it twice as hard to build a menu, but it makes it more specific. And these are more targeted types of applications. Our antibodies are very different in that they don't go after one type of protein molecule. In fact, they're meant to be intentionally cross-reactive.
Leveraging the fact that antibodies are all cross-reactive, we make them more cross-reactive. So any one antibody will bind thousands of proteins out of the human proteome. And so, you know, if you have just one of those antibodies, it's useless, because I can only say it's one of these 3,000. But if I then have a second data point, a third, a fourth, and a fifth, I get to the point where I can very, very quickly say, "Hey, I know with extremely high certainty that this is a molecule of BACE1 or EGFR or Tau or whatever protein molecule it is." And that's the general approach. Now, that class of antibodies is really unique. No one else has ever tried to build antibodies like that. And so it's taken us a lot of development to get there.
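In its idealized, noise-free form, the decoding idea described here is just set intersection over the candidate lists of each probe that binds a molecule. The probe names and target sets below are invented for illustration, and the real platform's decoding is statistical machine learning over noisy binding data rather than exact intersection; this is only a sketch of the combinatorial principle.

```python
# Hypothetical reagents: each cross-reactive probe binds a known set of
# proteins. These names and sets are made up purely for illustration.
PROBE_TARGETS = {
    "probe_A": {"EGFR", "BACE1", "TAU", "ALB"},
    "probe_B": {"EGFR", "TAU", "INS"},
    "probe_C": {"EGFR", "ALB", "INS"},
}

def identify(observed_binders: set[str], probe_targets: dict[str, set[str]]) -> set[str]:
    """Intersect the target sets of every probe that bound the molecule.

    Each additional binding event can only shrink the candidate set, which
    is why a handful of intentionally cross-reactive probes can still pin
    down a single identity with high confidence.
    """
    candidates = None
    for probe in observed_binders:
        targets = probe_targets[probe]
        candidates = targets if candidates is None else candidates & targets
    return candidates or set()
```

With those made-up sets, observing probes A, B, and C together narrows the candidates down to EGFR alone, mirroring the "second, third, fourth data point" logic described above.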
Then, after going public and adding about $350 million to our balance sheet, we took the strategy of bringing on external partners who were experts in antibody development as well as aptamer development, an aptamer being essentially a synthetic, DNA-based alternative to an antibody. And what we found over the course of the next 4-6 quarters is that while these companies were good at running one particular thing, they really weren't very good at figuring out exactly how we were going to do the very unique task that we had in front of us. And so shortly after figuring that out, we retrenched, and we pulled back from almost all of those partners other than Abcam, who we continue to work with in some fashion on the development side. We pulled back, and we started scaling our own internal initiatives.
You know, today, because antibodies are such an important part of our business, we have spent a lot of time scaling up the antibody development funnel to get to the point where we'll be able to get all these reagents built. You know, there's a lot of capacity there because we expect that, you know, the set that we have today is not our final set. Many of them will be swapped out between now and when we launch in 2025.
Then, you know, in terms of from now to that launch in 2025, I know you talked about it on the call.
Yep.
Just, what are the kind of key, you know, ongoing de-risking events that both customers and investors can look to, to say, "Okay, here's step one, step two, step three," both, I guess, from a publication standpoint and/or just development updates that you guys are going to provide?
Yeah. So, working backward from launch. Launch is in 2025, and that means instruments, consumables, and software. And, you know, call me old-fashioned, but in my world, when you launch something, it's ready to go. It's ready to scale. It's going to work. It's going to be in a customer's hands. We have a 9- to 12-month period before that that we call our early access program. The early access program is a service offering where the customer can send us a sample, we'll analyze it on our prototype equipment, and we'll send them back the result. And the goal of those engagements is, one, to demonstrate to the scientific community, through conference presentations and publications, that the technology is really valuable and it works.
And two, it's to drive pre-order activity for our instrument, and to drive grant submissions and those types of things. So that period will precede the launch of the instrument. During that time, we'll also place a few beta units out in the field to get feedback from some customers and iron out any last-minute issues; you know, today, our instruments are already in multiple geographies. It's likely that there's not going to be a lot of that, given how carefully we've designed it. So that's kind of what that launch rollout looks like.
If you back up and ask, "Okay, what does the start of that EAP, that early access program, look like?": we would launch the EAP right around the time we have the first data where we take a complex sample, like cell lysate, and can detect some reasonable number of proteins. Call it 500, 1,000, 2,000. It doesn't matter. By the time we have data at a scientific conference showing that, we will have put together all of our components. We will be more than halfway through building our affinity reagents. And we'll have much better specificity on timeline, specifications, and price. All of that packaged together means we can launch an EAP and start getting customers signing up for samples to be run. We can start talking to them about what instrument pricing looks like in detail.
And so that's really the next big milestone that I think most investors that I talk to are looking at. And frankly, internally, we're looking at as well because that data is what's going to kick off our early access program launch.
And you know, for a lot of the affinity players, it's about plex. How many can you plex? And can you do proteoforms? And you just said 500-1,000 at EAP launch. But ultimately, when you do launch, when it's ready, what will be the commercial specs you'd like to launch with, in terms of the key parameters of the instrument?
Yeah. So, what I tell my team is that there are quite a few dimensions on which you measure a proteomics product. How many proteins can you quantify? What's your dynamic range? How's your sensitivity? What about ease of use? What I tell the team is that I expect us to be best in class in every category, and I expect there to be a few categories where we absolutely knock the ball out of the park and we're way better than anything else out there. And that is what we expect to launch as a 1.0. In terms of the number, the plex number, how many proteins can you identify: what we've heard time and time again from customers is that as long as it's more than half the proteome and you're scaling up from there, "I don't want to wait anymore. I want access to that technology. I want to be running samples. I'll pay."
Mm-hmm. So more than half of the 20,000?
Yeah.
So it's more than 10,000?
Yep.
Right. And then, as we spoke about on the call, there are different types of post-translational modifications; that's something that, you know, a lot of the affinity reagents can't do. Just speak to the benefit of what this technology is expected to do there.
Yeah. So, post-translational modifications, just so everyone understands what they are: unlike with genomics, proteins are modified in many, many different ways by events like phosphorylation, methylation, and glycosylation. And these modifications have a profound impact on how a protein operates in a cell. They can change how it's degraded, where it's distributed, whether it's in the nucleus or on the cell surface. They change signaling. And so understanding them is almost as important as understanding what all the proteins in a sample are. Our platform has kind of a crawl, walk, run approach, right? Today, we have two modes of operation for the platform. One is, "Tell me what all the gene-encoded proteins are," which is what we've talked about so far today.
The other mode is, "Let me take proteins that I'm really interested in," for example, the Tau protein, which Genentech is interested in because it's a key marker for neurological disorders, "and let me delve in a great deal of detail into how the proteoforms affect the protein and where the modifications are." If you looked at the two and asked, "Which would you commercialize first?", of course I'm going to commercialize the broad-scale proteomics. But our platform matured at different rates, and so we're able to run these protein modification experiments today and pull out data that is valuable to the customer and cannot be generated with any other platform out there.
And so that is work that we are doing with a number of collaborators, including, you know, Genentech, Amgen, MD Anderson, TGen, and others. We will continue doing that work as we move toward launch. It's still a little bit of an open question for us how much energy we will put behind that as a go-to-market motion, in addition to what we're doing on the gene-encoded protein detection side.
Like, in terms of the performance stats versus mass spec or versus Olink or versus some others, what would we look at for the level where you said you want to be best in class? I mean, we don't have to go through all the metrics here, but whether it's sensitivity, dynamic range... also, throughput's another huge factor, right? That's what, you know, some companies here are trying to enable, like, higher throughput for mass spec. Just talk a little bit about the performance attributes, and also the throughput that you think the platform will be able to deliver.
Yeah. So, in terms of the specifications of the platform: ease of use is non-negotiable for us. We are building a sample-to-answer platform. Sample prep should be no more complicated than library prep for an Illumina sequencer. And when you drop it onto our instrument, you should hit the go button, and the answer should be in the cloud in a day. That is what we're building; non-negotiable. We're also a single-molecule counter. When I say single-molecule, I mean single protein molecule, not peptide, which means that all of that rich information about the protein, the modifications, what protein quantification exactly looks like, that's all retained.
The dynamic range of our system: you know, if you had asked me a year ago, I would have said, "Hey, we're willing to launch with plenty of compromise there, because we have a designed-in five-orders-of-magnitude advantage relative to mass spec or other assays like Olink." But that technology has matured as well in the last year, and it's doing quite well. Five orders of magnitude, we believe, is easily the delta between what you can quantify in our assay versus the mass spec. And that leaves us with throughput, right? You mentioned throughput on the mass spec, and it's such a complex question when you ask about throughput. If you just went to the Astral and asked, "Well, what can you do?", they're going to tell you, "I can do hundreds of samples a day," right?
But what sample are you doing? If you're doing lysate or blood, which is like 98% of the market, there is a significant sample preparation process in front. There's fractionation and depletion. When you go down to any of these KOLs' labs that are analyzing blood serum, they're chopping their sample up into many fractions, running them in separate mass spec injections, and then bioinformatically compiling that data back up. It's hugely labor-intensive. It's really complicated. And even then, every mass spec study that's out there, every one of them (you can go look at them all), will show you data that looks great, but they've aggregated it across thousands of samples. They don't talk about proteins; they talk about protein groups. This is not quantitative data.
And so while our throughput will not match those, our throughput is initially going to be 12 samples per day. And you can do less if you want, but 12 samples per day is what the system is designed for. Our pitch to the customer is that this is going to be quantifiable protein data. It's going to be across the dynamic range spectrum. It's going to be more sensitive than anything you've seen. That data is far more valuable than what you're going to get running a 3,000-sample study and getting fuzzy results on the other side. And, you know, maybe 2% of the customer base, the hardcore mass spec KOLs, will have a tough time with that pitch. The other 98%? They're like, "I want to see that data."
Mm-hmm. And will there be clinical applications down the line for the box? I mean, given the performance that you expect to achieve, certainly there could be. Just depends, I guess, on the form factor and kind of how you view the market opportunity.
Yeah. So, you're absolutely right: yes, there are clinical applications. But our form factor initially is really built for RUO; that is what we're focused on initially. So the way this will work is, let's use a DX customer, for example. We're going to sell the machine to a DX customer. They're going to go and find a combination of biomarkers that are indicative of disease, or that measure therapeutic response for a particular therapeutic regime. And they're going to say, "Oh my God, I made this discovery." And we're going to shake their hand and tell them to go take those biomarkers, make a high-throughput assay, get it FDA-cleared, and they're off to the races. The complexity is going to come because we are a single-molecule counter.
We can measure more analytes than is readily going to be buildable into an assay by some of these customers. And so at some point, someone's going to make a discovery that's too complicated for them to want to go and build an assay around. And that will probably be the biggest catalyst for us to start down the clinical road. And really, if you have that kind of setup, you're being pulled through the FDA, not pushing yourself through. And so we think that those types of applications are coming. We would like to stick to the RUO space for, kind of, 5 years before we have to go down that road, because you want to be at much, much more significant scale when you start down that road. But we'll see exactly what the timing is.
You know, in the long run, we think there are really exciting applications there that enable this precision medicine wave that we've been talking about for decades but that, outside of oncology, has not really made a dent.
In terms of, like, when the box comes out, where do you see it being used initially, in kind of pharma or academia? Like, what are the applications? It's obviously, you know, discovery, right? Maybe translational. But within that, there are a lot of different slices. Like, where do you see customers really saying, "This is where I definitely want to deploy it first"?
Yeah. Yeah. I mean, so you asked a great question. Before I answer the question, let me tell you one of the great things about this market.
Mm-hmm.
When Illumina brought NGS to the world in 2007, nobody knew what to do with it. Pharma companies certainly didn't know. And so all of those early customers were academic and nonprofit research. And they created a market. And it was still an incredibly valuable revenue ramp, and they built a ton of shareholder value. For us, not only do we get to grow the market, but there are existing significant applications inside of therapeutic development, DX, and academic and nonprofit research. So on the therapeutic side: everyone who has a drug program needs to go through target discovery. They need to go through mechanism-of-action studies to understand how compounds are interacting with the protein network. They need to go and do toxicity screening and profiling.
There is a wide range of these types of use cases in therapeutic development that today use the mass spec, or don't, but would like to do more proteomics work. That is squarely one of the first use cases that we want to go after. In fact, you know, we're in conversations with a large number of those folks, some of whom are collaborators already. On the DX side, discovering biomarkers in blood and in tissue that are indicative of disease, that help monitor therapeutic response, is a really big use case as well, because most of the platforms out there don't have the coverage. None of them has the dynamic range and sensitivity to dig deep enough to be able to make a big, meaningful improvement there.
And then on the academic and nonprofit research side, there are 1,500 labs across the world that are doing great work in basic science research, in translational research that all have significant proteomics capabilities. And all of them have a mass spec or more in their labs. You know, my co-founder, Parag Mallick, is Stanford faculty. His lab alone has six mass specs in it. Next to his lab is Sharon Pitteri, who's one of the well-known KOLs in the proteomics world. She has more. And so those are all potential targets for us.
Are there any predicates that we can look at and say, "You hit your specs, it comes out in 2025, here's the curve"? Even though it is very novel, there will be a learning curve for a lot of customers. The early adopters who've been working with you, obviously, will get it. But for others, you know, there's always going to be some disbelief: "We need to run it through its paces." So, I don't know. It's putting the cart before the horse, but I'm just trying to think through, given how disruptive and differentiated you think the technology is, what would we look at as a predicate launch curve? Is there anything out there?
Look, I mean, I'll give you two examples, right? Example one is Illumina.
Mm-hmm.
From 2007 to 2012 were a miraculous set of years, doubling revenue on NGS, growing significantly. And that was in a world where they had to build a new market.
Mm-hmm.
Then, you know, I can use my own example. My last company that I was founder and CEO of was Isilon, so it's a little bit of a different world. But from 2003 to 2010, our revenue went from $0 a quarter to just under $100 million a quarter. Then we sold the business, and 23 months later it hit a $1 billion run rate. So, you know, myself and many of our team members are used to scaling businesses fast. And we think we have the opportunity here, if we can fulfill the promise of this technology, to do something really significant.
Great. Maybe a final question: you've de-risked three elements, and you have the final one, you know, the antibodies. Given some of the delays, I guess, how would you characterize the risk profile of getting this fully over the goal line by 2025?
How would I characterize the risk profile? Well, for the investors in the room, that's what they do for a living. So, I mean, largely, that's an exercise for them. But how do I characterize the risk profile?
Yeah.
I think the risk profile is great. I think that we have the potential to do something incredible in the market, and I think that I have all of the people, the talent, and the technology to go get it done. And we are heads-down, laser-focused on making that happen. 62% of our shares are owned at the board table. Not a single insider has sold a share into the public market since we've been public, for three and a half years. And so we all believe.
Sorry, I actually didn't mean the stock. I meant, like, technologically.
We're a ripple of investors.
Technologically, to get it over the goal line by 2025. Like, could we be sitting here next year and it's delayed to 2026? There are always things that can happen. But, like, your comfort level, you know?
This is done with the balance sheet that we have. I have a lot of confidence that it'll be 2025.
Got it. Great. Well, with that, thank you, Sujal.
Yeah.
Appreciate it. Thanks, everyone in the audience, for being here.