Good morning. Dan Brennan, TD Cowen Life Science Tools and Diagnostics Analyst. Day one of the 46th annual TD Cowen Healthcare Conference. Pleased to be joined here on the stage with Co-founder and CEO of Nautilus, Sujal Patel. Sujal, welcome and thank you.
Thanks, Dan, and I appreciate the invite to the conference.
Terrific. We have Anna Mowry in the audience, the Chief Financial Officer. Maybe just to start off, like zooming really far out, and then we'll go into your progress on technology. I thought it'd be interesting to understand from your perspective, you know, where Nautilus fits into the proteomics ecosystem, if you will, you know. You know, what do you consider some of the differentiators or things you're trying to solve for, if you will?
Great. That's a good place to kick off. Maybe I'll take a second and back up because I don't know how familiar the audience necessarily is with the proteomics space. You know, just in story form, one of the things humanity's conquered in the last couple of decades is we've conquered measuring the genome. I can take a drop of your blood, I can tell you what 99.9% of your DNA is. It's accurate, reproducible, and it's fast and cheap. The problem is your DNA doesn't really change from the day you're born to the day you die. It doesn't contain the real-time state of what's going on in your body, and because of that, it has limited utility in therapeutic development and precision medicine.
You know, to use an example, 95% of FDA-approved drugs target proteins, not genes. Measuring proteins is the next frontier. Proteins do all of the work in your body. They make up the vast majority of the functional parts of your cell. We, as a scientific community, do not understand proteins very well. Proteins have a lot of complexity. There's 20,000 different gene-encoded proteins. We don't have good instrumentation that can measure all those proteins in a sample sensitively and reproducibly. More complex than that is that once a protein is made and it's in your body, it gets modified by lots of different chemical processes, picking up modifications in different forms. If you don't understand those forms, you don't understand biology well.
Nautilus is a company that is trying to develop a brand-new platform to comprehensively measure proteins in a sample. That is, what is the gene-encoded protein? How is it modified? We're using an approach developed by my co-founder, Parag Mallick, who's Stanford faculty, and it's a very unique and different approach that hasn't been tried before. I'm sure as we continue our conversation, we'll get into it. Nautilus itself is about nine years old, and we are in the process of building a benchtop instrument that delivers easy-to-use proteomics to any biologist who wants to measure the proteome comprehensively from a sample. That's very different from the state-of-the-art in the proteomics space.
The gold standard in proteomics is a complex workflow that sits in front of the mass spectrometer, an instrument that is used all over in metallurgical analysis, food safety, and chemical purity. In this proteomics use case, it's used with a complex set of preparation steps ahead of it. Billions of dollars of mass specs are sold every year into these protein discovery environments. That tool doesn't really provide reproducible, comprehensive results. We're out to build this platform that comprehensively measures the proteome.
Terrific. Maybe next, you know, we can keep going down that vernacular, if you will. What proteomic applications will the platform enable or unlock today, both on the proteoform side and then on the broadscale proteome side, that maybe aren't possible now? You kind of alluded to some of the drawbacks, but maybe go one level deeper on what you'll seek to do with both of these technologies.
Yeah. You used the word applications. Let me discuss the word applications in two different ways. One, I'll describe our applications, which you mentioned, right? Broadscale, which is what we call comprehensively measuring all the gene-encoded proteins in a sample, and proteoforms, which is a specific use case. Those are our applications.
Customers have their own use cases in therapeutics. Proteomics is used in a wide variety of drug development steps. Up front, you take cells that are healthy and cells that are sick. You want to understand, in a significant level of detail, what are the differences between them? What cell surface proteins are potentially biomarkers that are indicative of disease? What are my potential targets which I might be able to drug to have a positive impact on a disease? That target identification step, and understanding what's going on, is already significantly hamstrung by existing technologies, which can't see all of those biomarkers sensitively. They can't see the rare things that are differentiating healthy and sick cells. You know, at the next stage in drug development, once I've got compounds, there's a lot of work that goes into understanding the mechanism of action of those compounds.
What are the effects on the proteins in the cell when exposed to a compound? What are the secondary effects on other organs in the body? Toxicity, cross-reactivity types of applications. These are all very critical steps up front in drug development that would see a massive impact, hopefully positively, if you could use a platform like ours to dramatically reduce the cost and improve the efficacy of that drug development process. In diagnostics, that same sort of use case exists, right? How do I find a sensitive biomarker that's going to be indicative of disease or stage of disease? How can I monitor therapeutic response by looking at what's going on inside of the patient's proteins?
All of these types of applications are significant applications that our customers have identified as pain points because existing technologies are not adequate. When you think about how our technology maps on top of that, right? The primary thing that our customers want to do in a lot of these types of applications is understand, if I have a sample, maybe it's 100 to 1,000 cells, that's like a standard sample size in pharma, what are all the proteins in here, and how do they change as different disease states are present.
That primary use case is what we call broadscale. Our value proposition in that use case is that we have an instrument that's far more sensitive and far more reproducible than what's out there today, which means that you have more actionable results, and the results coming off our system are more comprehensive. You know, the mass spectrometer-based workflows and the other types of products that exist in the marketplace still really can't effectively see more than maybe a third to a half of the proteins that are in the sample. Their detection thresholds can't see proteins present at 100 to 1,000 molecules, or accurately quantify the more prevalent ones. These are critical questions in biology that our platform can address.
The other application of our platform is one that we have begun to take to early access this year, and this is an application that helps to zero in on proteins of interest and look at the modification landscape of those proteins. For example, in early access this year, we launched our Tau assay, which is capable of measuring 768 different forms of one single protein, the Tau protein. The Tau protein is a critical protein to study in neurodegenerative diseases like Alzheimer's disease, and we have an assay that is capable of measuring 768 different forms of it, which is revolutionary.
No one has ever seen all those forms of Tau, and because they've never seen them, no one has understood how they relate to your likelihood of getting Alzheimer's disease in the future, or to disease progression. How do they relate to the therapeutic programs that have been attempted, and how would we be able to impact that? This proteoform use case is really interesting because the data that comes off of it has never been seen by the world. It's a use case that's a little different than broadscale. For every protein I wanna go after, I have to build a new assay, and that takes us some period of time. We did announce on our earnings call last week that we have a second marker that we're working on in oncology.
We announced earlier than that that The Michael J. Fox Foundation and Weill Cornell Medicine-Qatar are collaborators, and Michael J. Fox is funding an initiative to study alpha-synuclein, which is the key biomarker in Parkinson's disease, so another neurodegenerative marker. We have more activity going on there as we build this portfolio of proteoforms, and we think in the long run, a single platform, which we showed to the scientific community for the first time last week and which we're going to release at the end of the year, will be capable of running all these proteoform assays and our broadscale assays in one single machine.
That's a lot. Yeah, no, it is. If you're successful, it sounds like it's gonna be quite exciting. Maybe just go back to US HUPO. You presented some of the latest updates on the Nautilus platform and on the proteoform side. Just speak to some of the key takeaways from the presentations.
That's great. HUPO is the Human Proteome Organization conference. They do it twice per year: World HUPO, which is generally international, and then a U.S. version of it. The U.S. version was last week in St. Louis, and it was a really exciting opportunity for Nautilus because, for the first time, we showed the instrument to the scientific community. On the earnings call, what we talked about was that we have a number of proteoform assays moving through early access to general availability this year, and we have an instrument that will reach launch by the end of the year, with generally available placements at the beginning of next year.
We announced that for our broadscale capabilities, we expect to launch in early access in the second half of this year, with general availability in the first half of next year. We've got a lot of things to talk about. The scientific community got to see our instrument for the first time, which was really exciting to demonstrate. It was a really important proof point because, you know, Nautilus is building something that is very easy for the customer to use, but the task of building it is very hard. For the scientific community, this was a massive, tangible step where they could see the instrument, use the touch screen, and operate it. I think at 4:30.
Yep.
There's a proteomics panel. Birgit Schilling, who's our PI at the Buck Institute and is in the audience here, will be speaking. Birgit was at our event and saw the instrument. She has had the instrument in her lab in alpha form; the Buck Institute has had the only alpha of our instrument since April of last year, and so she has lots of information that she'll share. As well, at US HUPO Birgit presented some really exciting data, using her biological samples, our instrumentation, and her operators, generating interesting biological data. I'll save that data for her to talk about, but it's really exciting progress.
Okay. Maybe, you know, the data at HUPO, as you just mentioned, was, I believe, proteoforms of Tau, and you kind of talked about several new biomarkers which may become available or are in progress. You already alluded to one, but, without putting the cart before the horse, how do we think about how quickly you might come out with additional biomarkers on the platform?
Yeah. So, let's just separate those two use cases, right? Broadscale is a use case where we build it once and sell to everybody. Proteoforms, we're building assay by assay. The criteria for building these assays today are, number one, an important biomarker where there are significant drug programs and drug development dollars behind it, and an area where the forms of the protein likely make a difference in terms of the protein's function in the cell, or its degradation, or its distribution, or any of those sorts of characteristics. Areas of interest are neurodegeneration, among a lot of others: oncology, autoimmune, inflammatory, cardiac. What we've done is we've taken, you know, a set of two or three hundred potentially interesting markers. We've mapped onto that the availability, from our partners, of antibodies that target different site-specific modifications, so that we don't have to build those today, and we've stack-ranked those.
We probably have 20 that are kind of on our hit list. I mentioned that we're gonna do an oncology marker next. Like, there's five markers that are all great markers. We don't actually even know yet which one we're gonna do. We're gonna test the antibodies, and whichever one is the fastest path is the one that we're gonna pick first, and then we'll probably tackle another oncology marker right behind it. You know, from there, between neurodegeneration and oncology, alpha-synuclein will come out on the other side. Then we may move to another area, or we may continue to double down on those two areas. I think that when we think about this long term, we think about this as a steady roadmap of proteoform assays.
You know, in the long run, we think that having a large portfolio of proteoform assays plus an instrument that does broadscale makes a really compelling value proposition for the customer.
In terms of the first one, you know, tau, with 700-some different variations of it, what's been the early interest? I would think that's such a hot area, and there's an established understanding and awareness, and there's a lot of pharma companies and researchers chasing it. I would think, offering this, you would generate a lot of leads. Just any color you could provide on the funnel and what you've heard from customers on that front?
Yeah. That's a great question. Once we started to show this data, which we started showing in a preprint last year (Birgit also presented very early data off the platform at World HUPO last year), there's been a tremendous amount of interest from the scientific community.
A lot of that interest is in early research because this is data that no one's ever seen before. No one ever thought you could measure 768 proteoforms of Tau. There's been a raging debate for decades in the Alzheimer's disease research community. Is the pathology of Tau driven by random hyperphosphorylation, or is there a pattern of how kinases got you to particular forms? In our first datasets, we started to see evidence that there could be a pattern there. Like, incredibly exciting, but it's gonna take some time to develop, partially because some of these folks have to now apply for grants. You know, some early innovators, like The Michael J. Fox Foundation for Parkinson's Research, saw what we're doing and said, "Hey, I have to jump on and do this for α-Syn." It's beginning to build.
You know, as a company, we've been running very capital efficiently up until today. Until recently, there was not a single salesperson in the company; we have one now. We're just now beginning to build that capacity, so building the funnel is basically starting from scratch at this point. As well, as I mentioned on the earnings call, we got to launching that tau assay into early access a little earlier than expected because it's performing incredibly well. With that, you know, we're a little behind on sales capacity, but we're just getting started on that. I think this year we'll see some of those early projects build, and then we're going to see those projects move into grant proposals and further funding.
One of the things I do wanna highlight, though, is that the move from neurodegeneration to oncology is driven by the fact that we're in tau not because we did some great market research study and said, "This is the best place to go first." We're there because we started working with Genentech four years ago, and they really wanted to study this, and it was a great joint learning experience for us. Tau might be a tiny step out of sync with where early drug program development is for the data that we're gonna put out, but we think oncology is a really great fit for the type of data that we're gonna get off of the platform and for where the drug development programs are that could be impacted by it.
I think I'm super excited about what's going on in neurodegeneration, but I'm maybe even incrementally more excited about oncology as we start to get to the second half of the year.
Can you just elaborate a little bit on that? Like, why you think the marriage between where the market is and what the technology enables, maybe oncology is even a, like a faster lane, if you will?
I think there's kind of two parts of it that I would highlight, right? One is that on the neurodegeneration side, the disease biology is extremely complicated, and we don't yet have the capability to analyze biofluids, CSF, blood. We're only dealing with tissue samples, and a tissue sample from a brain that's afflicted with AD means the patient has died.
Right.
Samples are hard to get, and having an impact with what we're doing is gonna take a little bit more time, right? In oncology, there is a belief in the scientific community, at least among folks that I've talked to and that I know our team has talked to, that the modification landscape and the proteoforms of some of these key markers are critical to understanding therapeutic response and the biology of these diseases. For our system, the predominant sample type that we're working with today is cells and tissue. That's an easy sample type to get from a tumor biopsy. There's a lot of alignment on the sample side and a lot of alignment on the biology.
Remember, if the sample is easy to get, and we're able to understand the proteoform landscape in great detail, it's not just about drug development; it's about understanding what therapy I should give the person based on what I'm seeing in terms of the proteoforms that exist. Like, these sorts of precision medicine use cases, I think, are much more tangible earlier for us in oncology than in neurodegeneration.
In terms of the platform, whether the proteoform or the broadscale side, is there any barrier to working with different matrices over time? I mean, right now you're in tissue, but how will that evolve, do you think?
I mean, for our broadscale capabilities, we will have the capability to do cells and tissue, and we'll have the capability to do blood. Over time, those capabilities will get more and more sophisticated. Some customers wanna do a preparation to only look at cell surface proteins. Some wanna, you know, do some minimal sample preparation on blood to reduce albumin or some of the proteins that are really abundant and take up space on an experiment unnecessarily. That's the roadmap on the broadscale side. On the proteoform side, every marker has a little bit of a different sample preparation associated with it, so when we think about a product for tau, or a product for oncology marker number one, it's a combination of our assay and the sample preparation techniques that come bundled together. For example, on tau, you know, our internal team has developed a protocol for sample preparation from frozen tissue that enables us to analyze these proteoforms of tau. Birgit's lab at the Buck Institute has been using that protocol. For the oncology marker, we'll have a similar sample prep that's kind of bundled up with it.
I got it. Okay. Maybe one more on the proteoform side, and then we'll kinda zoom out for broadscale. On the proteoform side, you mentioned how well the technology is working early on. How would you define success, I guess even on the Tau product? You know, we'll ask Eric Verdin this later today, but what are the features? What are the measurements? Obviously, if there are new discoveries made, terrific, but that'll take years. Just analytically, from a quantitation standpoint maybe, what are the measurements that you think customers will look at to say, "Wow, this really is delivering what we thought, and it's very differentiated and unique"?
Yeah. Well, you said it in your question, but I'm just gonna say it out loud, right? Ultimately, our job is to enable our customers to make discoveries and positively impact human health. I don't know if that'll happen in tau, and I don't know if that'll happen in the first oncology marker, but I am certain that out of the hundreds of markers that are out there, many of them will have relevant proteoforms that lead to significant discoveries that positively impact human health. That's our end goal.
It's gonna take years.
Yes. That's the end goal. One of the things that gave us a lot of comfort around this tau assay was that we did a lot of validation studies, and we far exceeded our own metrics in terms of what we would want in order to get this thing into early access. First of all, if you look at the preprint that we had, we did a set of studies to build up to real biological samples: studying organoids, human and mouse brains, and looking at human control patients and AD-afflicted patients. When we did those analyses, one of the things we saw was incredibly tight CVs: very little variation and high reproducibility across the samples.
When we did spike-in studies that would take particular forms and increase their ratio in the mixture, we saw exquisitely accurate reproduction of what we expected coming in. You know, just to give you a sense: if you looked at our preprint and looked at the variation in our data across different operators, different reagent lots, different instruments, and different chips and flow cells, so, changing all the different things in our system, the variability is 5%. Our product management team, when they were building the spec for this product, set that at 25%, because that's what everyone else can do; 25% is like a norm. We accomplished five, and that's the highest variation that you see in the system.
That gives us a ton of confidence that there's really great data quality coming off of the system. For our customers, data quality is absolutely critical. You know, one of the things you see in proteomics from some other vendors is these 10,000- or 20,000-sample cohort studies. A lot of those studies are done that way because the variation is so wide. You analyze the same sample twice and 30% of your IDs change, so you have to run a lot to go and get data. If you can deliver more accurate data, it's really transformative to a customer, and that's the most exciting thing, not just to me: when I was at HUPO last week talking to the scientific community, that's what they were excited about in our early results.
Terrific. We have six minutes left, so maybe I'll ask one more question, not a big-picture question, but about the broadscale discovery platform. You've said throughout the last year or two, as we've followed the company, that the proteoforms are exciting, but broadscale is really where you think the massive opportunity is. You've talked a lot about the milestones ahead of feeling good on that launch, and now you've got that launch in the second half of the year. Just remind us what we need to see between now and then, and what your level of confidence is on those timelines.
Yeah. Let me just slightly modify your statement.
Sure.
I'm gonna tell you that broadscale, I believe, is the inflection point for our top line, because it unlocks a sale that is looking for data additive to the traditional mass spec-based workflow, at a similar price point, accessing a similar budget pool. We think that's the revenue inflection point. When I was at HUPO, two-thirds of the people I talked to there said, "I love what you're doing on broadscale, but oh my God, I love even more what you're doing on proteoforms, because this is data that is net new to the world, that no one's seen before." I think in the long run, the proteoform business, particularly when you combine the two, is gonna create an incredibly powerful and sticky business model for us. That's kind of, you know, setting the stage.
In terms of broadscale, there has been a ton of complexity over the course of nine years in getting to the point where we can get broadscale out in the marketplace. At the beginning of 2025, on that first earnings call, one of the things that we said was, "Hey, we're gonna need another year," because we had to go through a pretty significant assay configuration change, which was focused on getting more of the proprietary reagents that we've been building to function correctly on the platform. You know, for those that have listened in on our story, you know that broadscale depends on us building a set of 300, maybe 350 to 400, proprietary reagents that map each molecule.
These affinity reagents, or antibodies, that we call multi-affinity probes bind very short regions of the protein in a non-specific manner. It's a very specific class of antibodies, and we have spent almost nine years developing techniques to build them. Not enough of those antibodies worked on our platform; we had thousands of candidates but very little yield. The reason was that the assay configuration needed to change to allow more of them to work. We went through a hard process in 2025 that took a little longer than we'd like to get through that assay configuration change. As Parag has talked about on the last two earnings calls, we've begun to move through validation steps in our new configuration. We've been able to decode simple mixtures of proteins, you know, 10 proteins, 15 proteins.
We've been able to identify proteins that are present in cell lysate. That's an important step. The next important step for us will be to accurately quantify some reasonable number of proteins, 500, 1,000, 2,000, out of a complex sample like cell lysate. That's not an endpoint by any means; our system combines these data points together computationally in an exponential manner. But by the time I have 2,000 or 3,000 proteins, or 5,000 proteins, it doesn't matter exactly what, more than half the work is done.
The assay configuration change will have been done and stable, and we are using that marker as kind of the benchmark at which we will say, "Okay, we're ready to announce the early access program for broadscale and start signing up customers," and by the time we are ready to analyze our first sample, we'll have a greater number of proteins ready to go. That's an important milestone for us, and through investor conversations I know it's a milestone that a lot of investors are looking at as well, 'cause it shows the whole thing's come together.
The goal, or the plan, is a second-half launch, and second half could be August, could be December. Sometime between August and December, we would see this announcement, I guess.
Those are good bookends, yes.
Okay. We have two minutes left. How should investors contemplate the roadmap for the company over the next couple of years? Like, you know, cash on the balance sheet, where you're at at this point, maybe speak a little bit to how much you've spent to get here, and as you begin to unlock these opportunities, what happens? How targeted do you go just to make sure things are on track? Like, how quickly can you ramp, things like that?
Yeah. Yeah, I mean, you know, you asked how investors should think about it. I would take a broader view, first of all. Like, I want investors to think about Nautilus as a company that is building something bold, hard, and disruptive, right? Those are the companies that, when they succeed, and I am confident we will, have a transformative effect on markets, right? We're not an incremental sample prep system. We're not yet another assay that looks like Olink, which Thermo Fisher now owns. We are a net new approach that's doing something very different.
It takes a lot of capital and a lot of time to do that, and we have been very, very careful with our cash and our balance sheet and very careful with our development so that we have the capital on our balance sheet, which is there today. We ended with $156 million in cash at the end of last year. We have the capital that we need to finish building our broadscale capabilities, deliver on the entire roadmap I discussed earlier, build a commercial team, and launch. We have capital, as we've stated, through 2027, not into, but through 2027. We have what we think is a good plan forward for capitalizing the business as we continue to grow and move towards cash-flow positive after launch. That's kind of how I think about, you know, the important markers for investors.
Okay. Well, we've got just maybe a few seconds left here.
Yeah.
I mean, I don't know, how would you wrap it up? We've talked about key milestones, we've talked about products, and we've just talked about kind of the future. How would you like to wrap up the Nautilus story from here?
I mean, I would just encourage investors who want to learn more to reach out to me, to our IR team, or to Anna Mowry, our CFO, who's in the audience. We'd love to talk to you about the company and count you among our shareholders. Thank you.
Terrific. Thanks, Sujal.
Thanks again.
You got it.
Yeah.
Thanks for being here. Thank you.