Hello, everyone. I'm Subbu Nambi, the Life Science Tools and Diagnostics Analyst here at Guggenheim. Thank you for joining us today for our inaugural Healthcare Innovation Conference, Day 3. It is my pleasure to be hosting Nautilus Biotechnology. Joining us is Anna, the CFO of the company. Thank you for being here. We'll start with a presentation from Anna and then move on to Q&A. Anna.
All right. Thanks, Subbu. Thanks for the introduction and the invitation to participate in this conference. We're really excited to be here. As Subbu mentioned, my name's Anna Mowry. I'm the CFO. I'm also a biochemist, or at least I was back in the day. That's actually been hugely helpful during my four years at Nautilus, where what we're building has the potential to change our understanding of biology. Before we get into the details, I'll remind you that we're using forward-looking statements in this presentation, which you can read about in more detail on our website. What does Nautilus do? Nautilus was founded over seven years ago with a bold new approach to democratizing access to proteomics: bringing to market a new platform with the aim of comprehensively measuring the proteome from any sample from any organism.
Our platform is designed to measure single-molecule intact proteins with unprecedented sensitivity and scale using an approach that is completely differentiated from others in the market today. We believe that will allow us to tap into a very large market in proteomics, roughly $55 billion by 2027. But we'll first go after discovery proteomics, which is a few billion dollars primarily owned by the mass spectrometer. This is an area of the industry that is of particular interest to pharmaceutical companies and those doing drug development, as well as academic research organizations, a number of which we have existing collaborations with today, like Genentech, Amgen, MD Anderson, and others. For the last two decades, those in the medical research community have been primarily focused on genomics.
Unfortunately, genomics doesn't actually tell you what's happening in your body, because your DNA is the same in every cell in your body from the day you're born to the day you die, whereas proteins are key indicators of what's happening in a cell. Different protein expression might mean that you have a heart cell or a neuron, and it is often the difference between whether you're healthy or sick. Our ability to measure proteins today is extremely limited. With a mass spec, which is the current gold standard, it takes specialized labs and skill sets, a few thousand dollars, and sometimes weeks of analysis, and really, you can only see 8% to 30% of the proteins in the sample. Moreover, the mass spec works on peptides, not proteins, and so we lose the ability to see the potentially millions of different proteoforms that exist in the sample.
This is really important because 90% of drugs target proteins, and if we can't see what's actually happening to proteins, then we're missing a lot. Our platform was designed to meet these challenges head-on. From the beginning, we set out to measure substantially all of the proteins in a sample, and our design gives us the sensitivity and dynamic range to see the least abundant proteins in the sample. We're designing a benchtop instrument that is push-button simple, sample in and data out, which means that any proteomics research lab can potentially get access to this. This is really exciting because it has the ability to unlock more discoveries in proteomics, but it's also really great for us because that's a potentially market-expanding business opportunity. There are a few core components to our technology, and I'll get into these in more detail in the coming slides.
But first up, we have a single-molecule, hyperdense, nanofabricated array that's designed to measure 10 billion proteins across the surface of three flow cells. Once we have those proteins spread over the surface of our chip, we have instrumentation and reagents that are designed to repeatedly interrogate those protein molecules using fluorescence-based imaging. From there, we take that binding data, or lack-of-binding data, and put it through our machine learning algorithms, and the results can be delivered to the customer in the cloud. I thought you might appreciate seeing what the surface of our chip looks like. This is actually a 100-nanometer cross-section of our flow cell, and the important thing to note is that each of these spots is a single protein. This is really important because our customers, specifically in biomarker discovery, are working with sample types of 100 to 1,000 cells.
Theoretically, they want to be able to see one protein molecule in that sample. That's where our 10 billion spot design really came from. Once we have the proteins spread across the surface of our chip, there are two ways we can analyze them, and I'm going to get into each of these in more detail. First off, we have our broad-scale approach. This is where we use our in-house affinity reagents combined with our identification method, which we call PRISM, to determine which of the 20,000 gene-encoded proteins are present in the sample. On the targeted side, we use off-the-shelf, readily available antibodies developed by others to really dig into a particular protein of interest and the proteoforms that exist in that sample. As I've mentioned, we're developing what we call multi-affinity reagents.
These are primarily antibodies designed to measure short epitopes of amino acids, let's say three amino acids, that are present in thousands of proteins across the proteome. We then run those antibodies one after the other and capture a significant amount of data that allows us to be shockingly specific in our protein identification. The other main benefit of this approach is that it's computational, and we believe we only need 300 reagents to see 95% of the proteins in the sample. This is a huge advantage over traditional affinity-based approaches, where you have to develop one or more antibodies for each of the 20,000 proteins, which, as you might expect, could take years or even decades, if it's even feasible at all.
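To make the computational identification idea concrete, here is a minimal sketch of fingerprint-based decoding with short-epitope probes. The sequences and probe names are invented for illustration; Nautilus's actual PRISM method is probabilistic and handles imperfect binding.

```python
# Toy sketch of decoding protein identity from short-epitope probes.
# Sequences and probes are invented; not Nautilus's actual PRISM decoder.

PROTEOME = {  # protein name -> amino acid sequence (toy data)
    "transferrin_toy": "MKWVTFAQSAAQ",
    "g6pd_toy":        "MAEQVALSRTQV",
    "albumin_toy":     "MKWVSTQVAEQA",
}

PROBES = ["AQS", "QVA", "MKW", "TQV"]  # each probe binds one trimer epitope

def fingerprint(seq, probes):
    """The set of probes whose epitope occurs anywhere in the sequence."""
    return frozenset(p for p in probes if p in seq)

# Expected fingerprint for every protein in the reference database.
EXPECTED = {name: fingerprint(seq, PROBES) for name, seq in PROTEOME.items()}

def identify(observed_hits):
    """Proteins whose expected fingerprint matches the probes seen binding."""
    return [n for n, fp in EXPECTED.items() if fp == frozenset(observed_hits)]

# A spot where MKW and AQS bound across cycles, but QVA and TQV did not:
print(identify({"MKW", "AQS"}))  # -> ['transferrin_toy']
# Quantification then reduces to counting spots per identified protein.
```

The design point this illustrates is that each probe narrows the candidate set multiplicatively, which is why a few hundred probes can, in principle, cover tens of thousands of proteins.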
Because we're developing antibodies that bind to short epitopes, I wanted to show you an example of what we've seen with transferrin and glucose-6-phosphate. The red sections indicate regions of the protein where we've observed binding by our internally developed reagents. From there, over dozens of samples in this case, we capture binding events, and even non-binding events, and with increasing confidence, we can determine which molecule is at which spot in each well of the flow cell. Once we have that, we simply count the molecules up to determine quantity. This is actually some of the data that we presented at World HUPO back in October. The last piece of data on our broad-scale approach: I thought you would appreciate seeing that through this multi-affinity approach, we can have increased confidence in what we're identifying.
And that allows us to go down to a much greater level of sensitivity. So we've actually shown transferrin at yoctomole levels. And I don't even know what a yoctomole is, but what I can tell you is that it's five orders of magnitude better than what we can see with the mass spectrometer today. That's really important because our customers tell us that the biomarkers they're discovering today are at the edge of what can be seen by the mass spectrometer. So if we can see deeper into the sample, the potential for additional discoveries is quite large.
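For scale, a quick back-of-the-envelope calculation (not part of the presentation) shows why yoctomole-level detection is effectively single-molecule territory:

```python
# A yoctomole is 10^-24 moles; dividing by Avogadro's number shows that
# this corresponds to less than one molecule on average.
AVOGADRO = 6.022e23  # molecules per mole

molecules_per_yoctomole = 1e-24 * AVOGADRO
print(f"1 yoctomole ≈ {molecules_per_yoctomole:.2f} molecules")  # ≈ 0.60
```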
Switching gears to proteoforms. We've got 20,000 gene-encoded proteins, but once you take into account isoforms and post-translational modifications, there are potentially millions of proteoforms in a sample. In this example, we've got two different post-translational modification combinations. I wanted to bring this up because it can be hard to see that, once you chop a protein into peptides, you are unable to differentiate between these two combinations, whereas with our approach using intact proteins, we can tell the difference between these two samples with confidence. I'll actually show you this data in a slide or two. These are exactly the types of questions that our collaborators want to ask about their protein of interest. Our work with Genentech has been going on the longest and is primarily focused on tau, which is the focus of the data we presented at HUPO last month. Tau is a protein that's highly associated with Alzheimer's disease and other tauopathies. It has six isoforms and is highly modified. And what we showed is that, using our approach, we can use 11 reagents.
These come from vendors and suppliers, and we use them to recognize various isoforms and site-specific modifications, which allows us to distinguish over 2,000 different proteoforms for tau alone. Now, because this is data that can't be seen through any other method on the planet, how do we know if we're right? One of the ways that we prove it to ourselves upfront is by looking at model systems and control proteins in known mixtures. This is just the first example we showed along these lines, and you can see that we were able to identify seven different proteoforms of tau with a high degree of accuracy. From here, we moved to our first biological samples, which we presented for the very first time at HUPO in October. This is really exciting for a couple of reasons, and I'll tell you why in a second.
Using our reagents in simple model systems like mouse brain, organoid, and human brain, which are true biological samples, we can see the prevalence of site-specific modifications as well as isoforms. What you haven't seen before is the ability to differentiate the number of phosphorylations per molecule. In this case, we can see that organoids and human brain have a higher prevalence of multi-phosphorylation than mouse brain, which typically shows between zero and one phosphorylation per molecule. This is really exciting for a couple of reasons. I've said it before: you can't see this using any other method on the planet. But what makes this really exciting to me is that this is the first time we're starting to shift how we talk about our platform, from the technology and how it works to showing the first hints of what real biological data looks like.
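The per-molecule counting described here can be sketched in a few lines, with hypothetical single-molecule data (pS202, pT205, and pS396 are well-known tau phospho-sites, used purely as examples):

```python
from collections import Counter

# Hypothetical readout: for each immobilized tau molecule, the set of
# phospho-site-specific antibodies that bound. Data is invented.
molecules = [
    set(),                         # unmodified molecule
    {"pS202"},                     # singly phosphorylated
    {"pS202", "pT205"},            # doubly phosphorylated
    {"pS202", "pT205", "pS396"},   # multiply phosphorylated
]

# Because each molecule stays intact, we get a per-molecule distribution
# of phospho counts rather than a bulk average over digested peptides.
distribution = Counter(len(sites) for sites in molecules)
print(sorted(distribution.items()))  # [(0, 1), (1, 1), (2, 1), (3, 1)]
```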
We'll build on this with our collaborators as we get further into the coming year. As I said, this is a large market opportunity, $55 billion. We will start first in that discovery proteomics area, primarily owned by the mass spectrometer, which is billions of dollars per year. But the other thing to note is that once we have the ability to see proteins better, there's potential market expansion that comes with it as we get into precision and personalized medicine, clinical applications, and diagnostics. We're still a pre-commercial company. We are in phase one. We are focused on finishing our development, generating data and publications, and bringing our scientific community along on the journey. And with the results that we're seeing with tau, we've decided to pull that application forward into the first half of 2025.
This is where we'll work with our partners and collaborators to really prove out the value of this data and the market opportunity there. That will also lead into additional work on our broad-scale application, with its early access program launching in the second half, and ultimately into our platform launch in late 2025. The platform launch in late 2025 is when we expect our business model will shift into shipping instruments and consumables. We haven't released our detailed pricing yet, but what we've said is that we expect the initial instrument solution will cost roughly $1 million, which is right in line with where mass spectrometers are today, and our consumables pricing will be a few thousand dollars per sample. With moderate usage of our instrument, we believe we can get to $1 million of pull-through per instrument within a reasonable period of time.
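To put that pull-through target in context, a rough calculation under an assumed per-sample price (detailed pricing has not been released):

```python
# Illustrative only: "a few thousand dollars per sample" is assumed here
# to mean $2,500; actual pricing has not been announced.
PRICE_PER_SAMPLE = 2_500           # USD per sample (assumption)
TARGET_PULL_THROUGH = 1_000_000    # USD per instrument per year

samples_per_year = TARGET_PULL_THROUGH / PRICE_PER_SAMPLE
print(samples_per_year)            # 400.0 samples per year
print(samples_per_year / 50)       # 8.0 samples per working week
```

Under that assumption, the $1 million target works out to roughly 400 samples per instrument per year, or about eight per working week.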
In the meantime, we've been very focused on maintaining our cash runway. We ended Q3 with $221 million of cash on our balance sheet. That's two-thirds of the cash we raised as part of going public over three years ago, so hopefully that tells you exactly how disciplined we've been in managing our spend. In Q3, we reported $12 million of cash burn, which means that, if we needed to, we could give ourselves more time. But in reality, we do expect that cash burn to increase, especially since we still have our commercial investments in front of us. So what we've said publicly is that we anticipate having cash runway into 2027. Hopefully, that gives you a sense of our market opportunity and our differentiated approach. What I haven't talked about yet, though, is our really strong leadership team.
Sujal Patel is our CEO and co-founder, and this is actually the second company he's founded. His first company was in the tech space, and I was with him on that journey. We took the company public in 2006, and by the time we sold it for $2.6 billion in 2010, our last quarter was roughly $100 million of business, cash flow positive, with a positive operating margin. So we know what it takes to build a company that can generate significant value. On the product side, our SVP of Product is Subra Sankar. Subra was at Solexa at the time of the Illumina acquisition and stayed at Illumina for a number of years after that to help release the next few generations of the genomic sequencer.
So we have a team that knows how to build companies and deliver products to market. With that, I will hand it back to Subbu for questions.
Absolutely. Thank you so much for that fantastic overview, Anna. Your team recently returned from the HUPO World Congress and presented the latest proteoform analysis data that you shared with us today. Could you share some high-level takeaways that you and your team focused on from World HUPO and the feedback that you received from prospective customers?
Happy to. One of the things I would highlight first is we love the HUPOs. They're a really great opportunity to get in front of proteomics scientists and KOLs, and we consistently see a high degree of interest, and I'd say this HUPO was no different. We saw heavy booth traffic, we had folks coming to our posters, and we had a packed lunch seminar, so it was a really solid opportunity to get our story out in front of a broad audience. There are two takeaways I would have from that. One, which I already mentioned, is that this was the first time at HUPO where we showed data from biological samples, which is really an indicator that we're shifting more towards that commercial readiness point, especially for the proteoform application of our platform.
The second takeaway is that Prag will tell you he's frequently approached by KOLs and even mass spec stalwarts, and those folks are going out of their way to express amazement at the type of data we're generating. Sometimes people wonder if mass spec users are willing to change, and I think we've seen that interest at the conference.
That's fantastic. And just for the audience, Prag is the CSO of the company. Two weeks ago during the earnings call, you added some color and clarification to the launch timeline, which you touched on today. Most notably, you now plan to offer the proteoform analysis application of the instrument in an early access program first, before the broad discovery application. What drove the decision to put this program first?
Yeah. As you mentioned, we were really focused on our broad-scale application first. But as a reminder, we've actually been working on our tau collaboration with Genentech for quite some time. And with the improvements we've made in our platform and the instrument and flow cell being fairly far along, we were able to combine that with the tau reagents that are readily available. Two quarters ago, we realized that we had actually made significant progress in this area, which made it compelling enough that we felt it was a good idea to bring that forward. And also, I would say that tau itself is a really big area of interest in the industry. It drives a lot of spend. It drives a significant amount of research. And our customers are pushing us to get access to this type of data that they can't get anywhere else.
Got it. And who are you targeting as partners for this early access program?
I would say there are two types of partners. First, we are looking to work with KOLs who are used to working with earlier-stage technologies, ideally ones that have well-characterized samples they've already spent a lot of time with, so that when they send them to us, we can, to some extent, validate what we're seeing on our platform. Those key opinion leaders are also really great validation for us as we develop something completely brand new. The second category is, of course, those pharma partners that have significant investments in tau, want to get access to this technology, and ultimately have the ability to do more with us over time.
Perfect. And there are so many companies now pursuing tau detection, so that makes perfect sense and is very timely. To clarify, will you be shipping beta access instruments to them, or will you be doing this in-house as a service?
Yeah. Our focus for both programs will be to start in-house. That's where we've got our in-house expertise and our in-house reagents, which gives us more control over the program.
So it'll be done as a service?
Yes, although I'm not using the word service, because we're really talking about partnerships and collaborations, and we're not exactly sure what that looks like. But the work will certainly be done in-house in our own facilities. I'm not speaking to the pricing model, whether it's a service or a joint collaboration.
Given it's early access, that makes perfect sense. How will the proteoform application inform your broad-scale discovery application?
Yeah. This is one area that I'm particularly excited about, because we believe that moving forward with proteoform analysis as an application will harden the capabilities of our platform. We'll have to think more end-to-end, whether that's having our instrument be reliable in processing a significant number of samples or having our teams get used to delivering data to customers. In that way, it's a really great training ground for us ahead of a launch. The other way it will serve us well is that we're bringing our KOLs and potential customers along on the journey, so by the time we do launch, they're already very familiar with the type of data they get from our platform and might be willing to move forward. And then, of course, there's potential overlap between our proteoform customers and our broad-scale customers.
And so we're starting those relationships earlier, so that when we do launch broad-scale, we've already got a head start.
So there's some built-in customer loyalty then.
Right.
Okay. You have identified a key milestone for the broad-scale discovery platform: increasing the number of protein identifications from 500 to 1,000 to 2,000. Could you explain what headwinds or tailwinds are influencing progress toward this milestone?
Yeah. I'm glad you asked it that way, because it gives me an opportunity to talk about both sides. I would say a tailwind is that we've already scaled up our antibody discovery pipelines a couple of times in the past couple of years, so we have thousands of antibody candidates that we've demonstrated bind to short epitopes of amino acids, and we take those candidates through a series of characterization and qualification steps. From a headwind standpoint, as we discussed on our last earnings call, we have seen greater fallout than we would like in shifting antibodies from off-platform to on-platform.
All that really means is that we have work to do to optimize how our antibodies perform on the platform, so that we can leverage more of those candidates, and more of the future candidates we develop. But ultimately, we're really just trying to get to that 300-reagent number.
Anna, since you're the CFO and we've been mainly focused on the product and the science, we have to squeeze in at least one financial question for you. In recent quarters, Nautilus has done a great job with cost management, limiting cash burn. As you said, you have still preserved two-thirds of the cash that you raised in your IPO. Could you tell us in a bit more detail about these efforts in recent quarters, and where and when we might see spending start to pick up going into the commercial launch?
Yeah. Thanks for letting me reiterate that. I would say that the whole team has been really disciplined in how we spend; we've been solving problems through innovation rather than growing our spend. The other thing I would say is that we've seen significantly better output this year than last year as a result of those efforts, and we've not significantly grown our headcount. Our headcount is flat, and our spend has stayed flat, so that's a really big testament to the effectiveness of the team. And to answer your next question, we're looking for the same milestones that you are.
When we see 500 or 1,000 or 2,000 proteins, that's when we'll know that all elements of our platform have come together so that we have more clarity in our timelines and we can start to make those initial commercial investments.
Got it. With the few minutes we have left: three years from now, what will investors wish they had realized about Nautilus today?
Yeah. I'll make two comments there. Number one, I think investors have been trained that proteomics is a linear process that takes a really long time. One of the unique things about our approach is that our computational method gives us an exponential curve, going from seeing nothing to seeing 10% to seeing 80% or even 95%. So once we see that first initial set of proteins, we should see an acceleration in our development to get to that 95%, and I don't want investors to be surprised if the pace of development is different from what they've seen before. The other comment I would make is that it's not just how many proteins you can measure; it's also the single-molecule sensitivity and dynamic range that make the quality of our data unique and its value quite high.
We think that will be a really big business driver over the long term, especially as we get into clinical and diagnostic applications.
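The exponential-coverage intuition from the first comment can be illustrated with a toy fingerprint-collision model. Every parameter below is an assumption, and the model ignores binding noise, which is one reason a real system needs many more probes than the toy suggests:

```python
# Toy model of why decoding coverage is non-linear in probe count.
# All parameters are assumed; this is not Nautilus's actual projection.
N = 20_000   # gene-encoded proteins
q = 0.15     # assumed chance a given probe's epitope occurs in a protein

def coverage(n_probes):
    """Expected fraction of proteins with a fingerprint unique among N."""
    agree = q * q + (1 - q) * (1 - q)   # two proteins match on one probe
    collide = agree ** n_probes         # ...and on all n probes
    return (1 - collide) ** (N - 1)     # distinguishable from every other

for n in (10, 20, 30, 40, 50):
    print(n, round(coverage(n), 3))
# Coverage stays near 0 through ~20 probes, is ~0.05 at 30, ~0.86 at 40,
# and ~0.99 at 50: flat, then a sharp jump toward complete coverage.
```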
Perfect. That's a great way to end the session. Thank you, Anna, for joining us. Thank you, guys, for joining us.
Thank you.