Great. Good morning, everyone. I'm Matt Sykes, the Life Science Tools and Diagnostics Analyst at Goldman Sachs, and I have the pleasure of having Sujal Patel, the CEO and Co-founder of Nautilus, here with me this morning. Thank you very much for joining.
Thank you. Thanks, Matt, and we appreciate the invite.
No problem. So maybe let's just start out at a high level with where you stand on the launch, obviously targeting 2025, but maybe just talk through some of the moving pieces that you've completed so far and what you need to continue to work through prior to launch.
Yeah, great. So let me kind of walk through that. So, you know, as you mentioned, we intend to have a commercial launch of our platform next year. Maybe I'll take a step back and just kind of describe the platform in case there's anyone in the room who doesn't have all the knowledge that you and I have here. So we are building a platform that is focused on measuring the entire proteome from any sample from any organism. That is a largely unsolved problem in the industry today. The problem that is solved is the genomic side of it. If I take a drop of blood, I wanna figure out what the DNA is inside of there. That's a commodity today. I could do it for $500 in less than a day, and I can tell you what 99% of the sample is.
It's accurate, it's reproducible, it works great. If I wanna understand what the proteins are, that problem is unsolved today. So proteins are important because they're the functional unit of biology. They're the things that change from day to day. They're the target of almost all of our FDA-approved drugs. They're the target of most molecular diagnostics. So not understanding proteins well has a profound impact on the ability to build drugs effectively, on being able to build them in a way that's cost-effective. It really has hampered the delivery of precision and personalized medicine because we don't understand what's going on inside of cells, and cells do all the work in your body. So, our platform, you asked the question of, well, what have you completed?
Our platform works by spatially separating billions and billions of molecules on a chip and analyzing them in a very unique method where we flow reagents over the sample over and over again. We collect minute data points, we send them to the cloud, and we use a set of sophisticated algorithms and machine learning to combine all those data points to first identify all the molecules and then quantify the sample. The goal of that system is to have complete proteomic information, to see all the proteins that are there, or virtually all the proteins, to deliver it at very high dynamic range, meaning we can look deeply at the sample, and to deliver it with very high sensitivity, which means that even if there's one or two molecules of something that's important on a cell surface, we'll be able to pick that up.
Those are attributes that don't exist with the very complex workflows that surround the mass spectrometer, which is the typical gold standard today for analyzing proteins. So what we've had to do over the last 7.5 years is build very novel technologies in different pillars to go and address all of the different parts of that workflow that I talked about. One is that we needed to have a method to spatially separate molecules. Essentially, imagine separating protein molecules on a giant chessboard where the chessboard has 10 billion spaces on it. That's a technology that took us almost the entire 7.5 years to perfect and is functioning today at the specifications we needed it to.
The second was that we needed to create an instrument that had the capability to wash these reagents over the sample over and over again, to regenerate the flow cell so that we'd have the conditions for the next reagent, and do that at a speed that's acceptable for the customer, which is essentially a runtime of roughly a day. That's a system today, an instrument that's very far along in its development. We have many prototypes of the final version, and we've got external manufacturing up and running for those instruments, so things are going well on that front. The third big pillar is really the software and the algorithmic piece, and we've made a ton of progress there over seven and a half years. That's good.
That takes us to the last pillar, which is the one that's the long pole in the tent, if you will. That pillar is really building all of these reagents that we need. So our system relies on roughly 300 affinity reagents, which we're developing largely in-house. And these affinity reagents are very different from a typical antibody. They're actually built not to identify a particular molecule, but simply to give us a little bit of information about that molecule. And then, combining that information with hundreds of other data points, we identify what every molecule is, and we learn about all the characteristics of the molecule.
And so that's a complicated process of antibody discovery, qualification, figuring out if the reagent that we've created fits with the menu we have, has the right kinetics to work on our platform, works in the proper way. And so that's a pipeline that we, you know, since we went public about three years ago, have spent a lot of time and energy scaling up, and today are running as we continue to add reagents to the system. Once we have, call it, roughly half or so of the reagents, that's the point where we'll be able to show some significant progress, and then we'll have pretty definitive timelines as opposed to this broad guidance of 2025.
Got it. Okay. Super helpful.
Yeah.
I wanna touch on one thing that you had mentioned in your comments, and that is mass spec, which has been sort of the traditional way to go about doing proteomics. And your co-founder probably has a lot of experience with mass spec, and I think the genesis of this company was, "Let's make this
Yeah.
easier for people." Maybe talk a little bit about sort of the benefit of the Nautilus platform relative to mass spec. What challenges and obstacles are you solving for, and how can you communicate that to the mass spec users in order to convert them?
Yeah. Yeah. I mean, the mass spec, as you said, is the gold standard in this industry for proteins. You know, many people don't know the history of it. Mass spec was originally built during the nuclear programs to understand the purity of uranium, and it has a ton of uses where it does a great job. You know, there's use cases in metallurgical analysis, in food safety, in looking at metabolites. You know, Agilent's got a big business looking at PFAS contamination in water. For those types of applications, it works incredibly well. Decades ago, what scientists realized was that you could start to use this to look at more complex samples like proteins, but it can't dig very deep into those samples. It can't see very far with these complex samples.
And over the course of, you know, the last couple of decades, we've continually made advances on the mass spec side trying to make that better. And, you know, Thermo Fisher, for example, has been leading the way in building mass specs in particular that are built for protein analysis and workflows around that. And, you know, you mentioned my co-founder, Parag. Parag is one of the key opinion leaders in mass spec-based proteomics. His lab created the software that's used by 95% of the labs, called ProteoWizard. 7.5 years ago, more like 8 years ago now, he really was thinking about this problem and trying to figure out, well, how can I get to the place where genomics is?
How can we get to a place where it's push button simple to analyze the sample so any scientist in the world can analyze it, any biologist can get this type of high-quality proteomic data? And how do we make it so we can dig all the way into the sample and get 100% of the answer? Because that's really critical to be able to understand diseases, understand how cells are functioning, understand what drugs are doing inside of cells over time. And so one of the things that he realized very quickly was that the mass spec just has a lot of fundamental limitations that are never gonna enable it to dig deeply into blood serum and cell lysates, or to look at surface proteins on cells.
He started thinking about other approaches, and it was literally one morning in 2016, he woke up with an idea that was the foundation of Nautilus. And it's a unique idea, you know, created by a unique guy because the idea is rooted just as much in computer and data science as it is in biochemistry. And you really have to be at the intersection of those two things, much like Parag is, to be able to come up with the idea behind the method that Nautilus uses.
Got it. Maybe talk a little bit about sort of the ease of use that your platform delivers compared to the traditional analysis. And do you think that a simpler user experience could help democratize proteomics through your platform and, you know, sort of expand the market?
Yes. So I'm happy to talk about that. We fundamentally believe that ease of use enables democratization of this market, and that isn't just for us to be able to pick up share once we're shipping a product, but it's really enabling the expansion of this market. The mass spec-based proteomics workflow today is very complicated. You have to take a sample. There's particular upfront sample prep. You have to often fractionate it into multiple pieces. You have to take each of the proteins and digest them into peptides, essentially breaking them into pieces. You have to ionize them, shoot them through a mass spec, and then there's an extremely complicated data analysis on the other side to infer, from the weights of the peptides, what proteins were in the original sample.
While it is somewhat successful at telling you what proteins might be in the sample, it's quite unsuccessful at being able to quantify precisely that, you know, EGFR is 3x overexpressed in this sample versus this sample over here. And because of that, biology is really fuzzy. And so there are a number of companies trying to figure out, well, how do I maybe make that sample prep a little bit easier? You know, Thermo Fisher Scientific has tried to do things on the front end. Seer is a public name that is trying to build sample prep to make it a little easier. But fundamentally, this is a very, very complex workflow. No matter how much, you know, sugar you try to sprinkle on it, it's still gonna be a really tough pill to swallow. And, you know, we know that ourselves, right?
Obviously, if we're gonna compete against a mass spec, we have to have one in-house. I can't even tell you how long it sat before we could get it up and running. It's so complicated. And so when we go out and talk to customers, it's interesting. When we go to HUPO, the Human Proteome Organization show, everyone there is a mass spec user. If I go to ASMS, the mass spectrometry show, they're all mass spec users. And they are super, super interested in what we're doing because they want a new type of data relative to what they're using today. When you go to AGBT or you go to a biology show or you go to a cancer-focused show, those are really different. Those users are like, "Oh my God, I don't even have access to this data today. It's too hard. It's too complicated.
I pull the transcript, but I know it's not correlated to cellular function really well." That's a whole set of customers that aren't really in the proteomics space today, users that we think get unlocked when there is an easy benchtop solution that gives access to proteomic data that's high quality and reproducible, just like you can get with a genome today.
Got it. Maybe just talk a little bit about sort of what your sizing of the proteomics market is and what you see as the long-term growth rate of the proteomics market. I mean, we've been going on this journey from DNA to RNA to protein, and you can see it in the increased funding going towards proteomics. So how are you sort of targeting the market, sizing the market, and thinking about the growth rate of the proteomics market in general? And then, specifically within that, Nautilus's sort of potential share of that market.
Yeah. So if you look at the broad proteomics market, you know, the analyst firms have it roughly between $20 billion and $30 billion, growing at a 12% top-line CAGR. That's the high-level number. If you looked at just the research-use-only markets that are served by the mass spectrometry community today, instrument sales alone are $3-$4 billion of instruments going out every single year. We think there are some 15,000-20,000 mass spectrometry instruments, not labs, sorry, instruments, out there in the field in proteomics labs supporting proteomics workflows today. If we look at that first chunk of the market, those customers are academic and nonprofit research. They're biopharma. They're diagnostics companies.
If you take out the applied markets, we think there's, you know, $3-$4 billion just in mass spec replacement in instruments alone, not even counting the consumables revenue stream, which is probably equal to or more. Then in addition to that, we think there are other opportunities that are served with more traditional assays today that are also immediate opportunities. So if you add that up, you know, $20-$30 billion might be the big number, but there's still some $5-$10 billion here, which is kind of that immediately accessible sweet spot that we're gonna go after initially.
Got it. I know on your last earnings call, you mentioned measuring 1,000-2,000 proteins in a reproducible way as sort of the key milestone in de-risking the launch timeline. Can you talk through your progress on that, and how much longer after achieving that goal should we expect to see the launch?
Yeah. And I'll try to describe this in a little bit of detail 'cause it's counterintuitive for those that aren't
Yeah.
in the math and science side of how our system works. So, take a traditional assay: Olink and SomaLogic, which is now part of Standard BioTools, for example, are two public names that have traditional assays. In a traditional assay, if you have an affinity reagent or an antibody, and I have, you know, n of them, I can see n proteins. Or in, you know, the case of Olink, I need two antibodies per thing I can see. But there's a linear relationship there, and it's really easy to say, "I measure 500 things. Now I measure 1,000 things." Our system works in a completely different way. It works by taking a single molecule and probing it over and over again with many different antibodies.
We intend for the launch version of our platform, when we ship next year, to have roughly 300 of those antibodies. So we get a lot of data about that individual molecule. The approach is really powerful because when I combine 300 data points, with the right mix, I can get you almost everything in the proteome, like 95%+ of the proteome identified. But the disadvantage is that I can't really show anything at all until I've got more than half of the antibodies built for the platform, because I need enough data points to differentiate each molecule from every other molecule that's in the sample.
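[Illustration: a toy sketch of why this kind of combinatorial identification is nonlinear. This is not Nautilus's decoding algorithm; the proteome size, binding probability, and noise-free binary probes below are assumptions chosen only to show the shape of the curve.]

```python
# Toy model (illustrative assumptions only): each of N candidate proteins gets a
# random binary "binds / doesn't bind" profile across k probes. A protein counts as
# identifiable only if its k-probe pattern is unique among all N candidates, so
# almost nothing is decodable at small k, then coverage rises sharply once the
# panel carries enough combined information.
import numpy as np

rng = np.random.default_rng(0)
N = 20_000       # assumed number of candidate proteins (order of the human proteome)
P_BIND = 0.3     # assumed chance that any one probe lands on a given protein

def fraction_identifiable(k: int) -> float:
    """Fraction of proteins whose k-probe binding pattern is unique."""
    profiles = rng.random((N, k)) < P_BIND               # N x k boolean binding matrix
    _, counts = np.unique(profiles, axis=0, return_counts=True)
    return (counts == 1).sum() / N

for k in (5, 10, 15, 20, 50, 150, 300):
    print(f"{k:3d} probes -> {fraction_identifiable(k):6.1%} uniquely identifiable")
```

Under these toy assumptions, almost nothing is uniquely decodable with only a handful of probes, and coverage jumps toward 100% soon after a threshold is crossed; the real system needs many more probes per molecule than this idealized sketch because actual reagents are noisier and less informative than perfect binary probes.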
And so Nautilus has taken the strategy, over the course of the last few years, of walking the scientific community, and the investors who come along for the ride, through our progress as we go and build those affinity reagents and build the assay. And so if you went to the Human Proteome Organization's US show, which was just in Q1 here, one of the things that we showed there, which was new data, was deconvolution of simple mixtures of proteins.
So I don't have the ability yet today to decode 100 proteins out of a complex sample like a cell lysate from a human, but I can take a simple mixture of five or ten proteins and pick out transferrin and PXDN and other proteins out of it, proteins that I know I can see because of the mix of antibodies that I have available today. At some point, we will ramp up to enough of those antibodies that I don't need to have a simple mixture. I can just pull a human lysate and say, "Okay, I can see 1,000 proteins out of it. I can see 500 proteins out of it." When that data comes out, it's important because, one, it means we're past the halfway point of building reagents for our system.
It will show the entire end-to-end method. For us internally, that's a big milestone because it's pretty deterministic from that point to where we would ship. We intend to narrow down our guidance in terms of our timeline. Right around the time that we hit sort of that 1,000-2,000 protein mark, we'll likely come to the analysts and the investors, talk through what the final specifications of the product are, show off the product, and walk through the timing and the pricing in a more finalized form. Then for the scientific community, we will begin signing up customers for our early access program to get access to the platform and start running samples. We'd start to go and do some of those early pre-launch types of activities. For us, it's a really important milestone.
You know, in the external world, you might say, "Hey, 1,000 proteins doesn't sound like a lot. I can go to Olink and get 4,000." It's really quite different because our approach is computational in nature. By the time that I show you 1,000, there's gonna be far more in the lab already, because each incremental affinity reagent after that point brings in a lot more data and gives us a lot more access to the proteome.
Well, that's a good lead into my next question, which is the conversations with customers who might've expected more proteins at the outset of the launch, now that we're talking about 1,000-2,000. But given the information content in those 1,000-2,000, how have those conversations been with customers? And is there sort of a step down in their expectations, or is it actually in line with what they were thinking they wanted?
Yeah. So you have to kind of think about it. Let me just try to articulate what we intend to launch with and what launch is, right? So 1,000-2,000 is the point where we're gonna come to the scientific community, to investors, and say, "Hey, we're here, and we know we're gonna be able to get the rest of the way there shortly."
Yep.
If I sign up a customer at 1,000-2,000, the expectation is that by the time I get around to running their sample in three or four months, when I'm ready to go with our service offering, giving them early access, we'll be at about 5,000 or more.
Okay.
At 5,000 or more, the product is highly competitive because it has a very wide dynamic range, meaning it can dig deeper into a sample than a mass spec-based workflow or a traditional assay, and it's much more sensitive. It has single molecule sensitivity. The world today only has one commercial single molecule sensitivity platform. That's Quanterix.
Mm-hmm.
Quanterix has moved that in a very specific clinical direction, which is great for their business, but it's not a type of technology that's readily available for users to look at any protein that they want. And so there's a ton of interest in getting single molecule data at high dynamic range, even with just 5,000 proteins. As we move towards launch, you know, at one of our earnings calls a while ago, we said, "Hey, we'll launch even if we're not quite all the way to, you know, the entire proteome, but we'd have to be at least through half." And once you're through half the proteome, meaning 10,000 proteins or more, you're just about as competitive as you'd be with the full proteome, because the ramp from there is pretty steady and pretty quick.
Got it. Okay. You've talked about the HUPO conferences, where you've released data in the past. You highlighted your attendance at the U.S. one. In October, you're gonna be going to the global HUPO conference. Maybe you can talk about what you're doing to prepare for the global HUPO, what you're looking to accomplish, and sort of where you expect to be by then, given the proximity to the launch.
Yeah. So, you know, all of these HUPO shows are a great opportunity to connect with the key opinion leaders in the proteomics space. And for us, the validation of our platform by these people is gonna be critical for the early adopters that are gonna be buying our platform in the first year. And so at HUPO, for the one that just passed and then the world HUPO that's coming up, we generally are showing two sets of data. One set of data, as I talked about earlier, is really related to our progress toward being able to identify all the proteins in a sample. I mentioned that in Q1, we showed data on deconvolution of simple mixtures with something along the lines of, you know, 50 affinity reagents. And at world HUPO, we will show more.
Are we gonna be able to show 1,000 proteins from lysate? You know, we're working hard to do it. I don't know if that's gonna exactly line up from a timing standpoint, 'cause we actually have to be ahead of that by quite a bit to be able to get the data into the show. But that's what we're working hard for. And if it's not that show, it's gonna be the US HUPO after that. At least that's our intention. And so we're working hard on that. That's one set of data. The other is that, because, you know, we're 7.5 years old, many parts of our platform are quite mature today.
Because it is a single molecule analysis platform that can cycle multiple affinity reagents and gather information about a molecule, we have other use cases where there's a lot of customer value. One of those use cases is understanding the modification landscape of proteins, for example. And so we've worked with Genentech on the tau protein, looking at the pattern of phosphorylation modifications, because those modification patterns could be indicative of therapeutic response. They could themselves be a biomarker that would be therapeutically relevant. And so we're spending a lot of time with customers like Genentech and Amgen and MD Anderson doing these proteoform landscape analyses. And that's data that there is no other method in the world to generate. And so it's really interesting data for a show like HUPO, because scientists have never laid their eyes on this kind of data before.
So we continue to do more work with all of those collaborators, and there's a few others as well. We'll continue to show more of that data at each of these HUPOs as well.
Just building on the comments that you made on your work with Genentech, and you also mentioned Quanterix, which has sort of pivoted towards Alzheimer's and diagnostics, and neurology in general, particularly tau. Given that some of the types of proteins and biomarkers in that area are in low abundance, and given the sensitivity of your platform, how do you see neurology kind of playing into the potential for Nautilus and your platform?
Yeah. Neurology is a really important space, but I think it's important to just kind of talk about the wider spaces, right? The one area where the genomics revolution has made a dent is in oncology, because cancer is a disease of the genome. There, we can look for bits of foreign DNA that shouldn't be in the bloodstream, we can profile a tumor; the DNA is relevant. For neurology, for cardiology, for autoimmune disorders, for pretty much everything else in the human body, the genome doesn't really do anything. And so you really have to see what's going on at the protein level. So you asked, you know, about Alzheimer's. The tau protein is a critical biomarker. It's a critical biomarker that you can get from cerebrospinal fluid. You can get it from blood serum.
You can look at the levels of it, the modifications of it, but there's also many, many other protein modifications that occur. And so if you really want to detect the disease early, I think that's a problem that we're starting to solve, right? We're able to figure out what are the patterns of modifications in the tau molecule, what are the different forms of it, and how is that relevant to future disease state. I think that's going really well. But being able to understand what are the biological mechanisms that underlie that disease and figure out what we might be able to do to interrupt it therapeutically, that's a completely unsolved problem today.
I think that what we hear from customers is that they need much more information about what's going on inside of the cellular machinery to be able to start to unpack that. And that's where we think we'll be really relevant. Not to say that, just on the diagnostic side, being able to look at some of these molecules at higher sensitivity and higher dynamic range isn't important. That is important, but we as a company are not gonna be pushing into the clinical space anytime soon. That's a space that Quanterix is doing quite well in. But we think that upstream of that, on the discovery side, there's a ton of applications that will ultimately have clinical benefit. And that's the area that we're focused on today.
Maybe just on that discovery side, a lot of it is facing the academic end market, and it's an end market you guys are very familiar with. There's been a lot of debate over funding sources and sort of the flat NIH budget. And look, there's a lot of other things besides just NIH that go into the budgets, private institutional funding and things like that. But could you just give us sort of your view on the academic funding environment and, as you prepare for a 2025 launch, what that could look like?
Yeah. Let me just kind of discuss the entire landscape, right? NIH funding, which largely funds the academics and the nonprofit research firms, goes up and down. It goes up and down with different administrations. It goes up and down with different priorities nationally. And yes, I think we're seeing a little bit of a downturn, but these sorts of vacillations up and down are 10%, 20%, maybe 30%. For technology like ours that's disruptive, that is bringing new data that the customer's never seen, that's incredibly valuable, there's always a place for it. But when budgets are tight, obviously there's more friction. It's harder, and the fight takes a little bit longer.
We're planning for that in the form of perhaps somewhat more elongated sales cycles when we get going, which just means that we need to have more sales cycles open at once to be able to yield what we want from a revenue perspective. On the other side, the global pharma budgets and the associated budgets on the DX side are pretty durable. They do go up and down a little bit, but in general, if you looked over a 30-year trend, those R&D budgets are very, very steadily growing types of budgets.
And so the thing that we need to do is show why the product is valuable, what's the ROI for the customer, and we need to get some strategic installations that show off what the data potential is and then use that to go and land the next 10 and the next 10 after that.
Got it. Maybe just focusing on financials for a bit, just the cash runway. You guys have talked about the second half of 2026 as where the cash runway kind of gets you to. You've done an excellent job, and Anna as well, you know, of managing that cash runway. Maybe just talk about some of the factors that investors should be aware of in terms of spend as we get closer to launch.
Yeah.
Sort of some of the dynamics of the balance sheet as we get closer.
Yeah. Look, I'll first start with a statement about us as a management team. You know, we are a management team that is extremely focused on OpEx discipline and cash. We've raised about $450 million since inception seven and a half years ago. We have more than half that cash sitting on the balance sheet. Last year, we burned $49 million. The year before, we burned $48 million. And, you know, obviously our burn is starting to go up as our OpEx grows as we're getting closer to commercial launch. But Anna and I are very, very focused on making sure that the budget is tight and we're spending appropriately.
You know, for me, this is my first time in biotech, but after 20 years in tech, including as founder and CEO of a publicly traded company that we took from startup through profitability, at a 20% operating margin, prior to selling that business, you know, profitability is something that's very important to us in the long run. So we manage a really tight ship here. We've said publicly that we intend to have the cash last into the second half of 2026. If you look at what I just told you, that I have more than half my cash left and burned $49 million last year, implied in that is a significant increase in the burn rate. And that is what's baked into that guidance.
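[Illustration: the back-of-envelope behind that implied increase, using only round figures from the conversation. The remaining-cash and runway-length inputs are approximations for illustration, not company guidance.]

```python
# Rough sketch (assumed round numbers, not guidance): if a bit more than half of the
# ~$450M raised is still on the balance sheet and the runway is guided into the
# second half of 2026, the implied average burn sits well above last year's ~$49M.
cash_remaining_m = 0.5 * 450   # "more than half" of ~$450M raised, in $M (lower bound)
runway_years = 2.5             # assumed: roughly mid-2024 through the second half of 2026
historical_burn_m = 49         # $M burned last year

implied_avg_burn_m = cash_remaining_m / runway_years
print(f"Implied average burn ~${implied_avg_burn_m:.0f}M/yr vs. ~${historical_burn_m}M/yr last year")
```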
And so, we feel really good about our position, relative to the cash balance and what we have to accomplish in the remaining development, the building of our commercial team and the launch. One of the questions I commonly get, I'll just ask it.
Yeah.
I'll just answer it since I have a second. I commonly get asked by investors, like, how big is the sales force build?
Yeah.
And.
That was my next question.
There you go.
Good.
So, there is a sales force build, but it is perhaps not as large as people think, for two reasons, right? One is that we anticipate that the value proposition relative to what we compete against is really significant. And when it's that significant, it means the sales cycle's a little bit smoother. It means that you need incrementally less effort and there's less friction there to go and get a sale done. The second thing is that the initial deal size for our product is $1 million. And we expect that, you know, with reasonable assumptions, the pull-through, not immediately, but ramping over the course of a couple of years, will get to roughly $1 million per year per instrument as well.
When you have economics like that, you just don't need 40 salespeople to go out there and make a launch. You can have a much more modest group that can go and make a significant impact on the top line and help you bootstrap that next stage.
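[Illustration: a rough sketch of the per-instrument economics he describes. The placement count and the shape of the consumables ramp are assumptions for illustration, not guidance.]

```python
# Hedged sketch: ~$1M per instrument at sale plus consumables that ramp toward
# ~$1M/yr per installed instrument over a couple of years.
INSTRUMENT_PRICE_M = 1.0                 # $M per system at sale
PULL_THROUGH_RAMP_M = [0.25, 0.6, 1.0]   # assumed $M/yr of consumables in years 1, 2, 3+

def cumulative_revenue_m(placements_per_year: int, years: int) -> float:
    """Total instrument + consumables revenue ($M) over `years` of steady placements."""
    total = 0.0
    for placed_in in range(years):
        total += placements_per_year * INSTRUMENT_PRICE_M        # instrument sales that year
        for age in range(years - placed_in):                     # consumables from placement onward
            ramp = PULL_THROUGH_RAMP_M[min(age, len(PULL_THROUGH_RAMP_M) - 1)]
            total += placements_per_year * ramp
    return total

# Even a modest commercial team placing ~10 systems a year compounds quickly:
print(f"~${cumulative_revenue_m(10, 3):.0f}M cumulative revenue over 3 years")
```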
Got it. Okay. Maybe just, you know, in terms of pricing, you mentioned the pricing on the instrument, and it's pretty much in line with some of the high-end proteomics instruments on the mass spec side that have come out from Thermo and Bruker lately. So I would think customer acceptance of that price point is probably already there. But as you're moving towards launch, could you talk about sort of the sales funnel? You've got these early access programs that'll be coming up at some point, but how do you actively engage the customers during this time to keep them excited, to keep them engaged, to prepare for launch? And what does sort of the sales funnel look like right now for the product, if at all?
Yeah. So you asked, kind of a.
Like 3 questions in there.
Yeah. It's kind of like a couple of different questions there. So you asked first and foremost about the price point acceptance.
Yeah.
For the high-end mass specs for discovery proteomics, roughly $1 million is the typical price point. Then you have the Thermo Fisher Astral, which is their newest platform, which is pushing closer to $2 million when you look at the complete solution. So customers are very used to the price point. The price point is intentional. We built the pricing of our product so we can slot into the mass spec budgets. We can say to a customer, "Hey, don't buy the 9th mass spec. You already have eight of them." My co-founder Parag's lab at Stanford, for example, he has six in his lab. Don't buy that 7th one. Instead, go buy a Nautilus machine. It provides highly differentiated data. It's easier to use, and it's push button simple. That is a value proposition.
When we pressure test that with customers, it really seems to be something that resonates with them. So our pricing is based on the market, not based on, like, a COGS-plus type model. And with that, we expect that gross margins for the instrument will be healthy as we get out the door as well. The second part of your question was really related to the sales funnel and what we do on that front. So, you know, since the inception of the company, we have spent a lot of time with customers, initially understanding basic requirements, understanding research focuses, and where we can really make some significant improvements in the marketplace, then going through pricing, specifications, and so forth. Those conversations have evolved, to varying degrees, into sales-oriented types of conversations.
So if you kind of bucket the customers, there's, you know, roughly a half dozen of them that are collaborators of ours today on proteoforms. Those are customers we stay really close to, and we look at all of them as potential prospects for buying an instrument at some point. There's another set of three that are part of our First Access Challenge. We ran a challenge about a year ago, which was essentially an opportunity for scientists around the world to provide us proposals for what they would do with a platform like the one we were gonna bring to market. That was great 'cause we picked three winners that are in our sales funnel, but I also have a large number of proposals that didn't win that are also in the sales funnel, great opportunities that we're gonna pursue.
Behind that, there's a set of accounts that we stay very close to that's in the dozens of accounts. And we look at the size of the opportunity, the number of mass specs, and the workflows they have today, and those are great opportunities for us. So that's how the customers in the funnel are segmented. And roughly, if you kind of looked at everything, there's probably a couple hundred customers in there.
Okay.
The ones at the top, the top 50, we stay close to. We talk to them, we brief them, we see them at every show, and we continue to keep them warm, right? As long as you show slow and steady progress to a scientist
Mm-hmm.
they remain interested. It's when something stalls or goes backward, or you say one thing and then it's another, that they start to lose interest.
Yeah.
And so we have managed to keep a very open and honest dialogue with our potential future customers. Parag, being a KOL in the proteomics community, has a lot of goodwill with these people. They remain very, very excited about what we're doing. At U.S. HUPO in Q1, if you went to Parag's talk, which is like a standard Nautilus introduction talk, half the room was new faces and the room was completely packed. People were sitting on the floor in the aisle. The entire back was full of people standing. There's a ton of excitement about what we're doing.
If you had to kind of boil down some of the customer feedback that you get at shows like HUPO and narrow it down to a couple of key features of what the Nautilus platform can unlock, what are they most excited about?
So, they're excited about broad scale, which is the term we use to describe showing all the proteins in a sample. They're excited about seeing all of the proteins at a wide dynamic range, meaning they can dig very deeply into a complex biological sample, which you can't do with mass spec. The other thing we hear is, "We're really excited about proteoforms," understanding the modification landscape of proteins, digging deeply in there, where the mass spec is really not capable of doing that today in any meaningful depth.
Yeah. Got it. And with that, we're out of time. Mr. Patel, thank you so much for joining.