All right, well, good morning, everybody. Thanks for joining us here in the room, and for those who are online, we're very excited about our first-ever Investor and Analyst Day here in New York. As we kick off, just a quick note on forward-looking statements: if you have questions or want any additional information or disclosures, please look at those online. As for the agenda, this morning you saw there was a lot of news we put out with various partnerships and agreements that we have entered into in preparation for both this meeting and, obviously, where we're going. We'll touch on those throughout the day. My focus today will be to lay out the market, talk a bit about what it looks like today and how we see it unfolding, and therefore how we've tried to align our technology and partnership roadmap with where that market is going.
Members of our senior technical leadership staff will take you through those aspects. Then I'll round it out with what it all means and what we'd really like you to take away from today, and we'll leave plenty of time for Q&A at the end. So quickly, before we get into the market, a reminder of why we should all care about the field of proteomics. Why should we care about and focus on proteins? At the end of the day, proteins are really the measure, the marker, of health and disease. They are real-time. They are dynamic. They are longitudinal. They are what we look for in biomarker tests and diagnostics. They're the things we target with therapeutics. And sometimes they're even the therapeutic, as in the case of, say, a monoclonal antibody. And the proteome is an extraordinarily complex thing.
We've talked a lot over the last decade or more about the genome and its roughly 20,000 to 25,000 genes. What makes proteomics so complicated is that it's not just the proteins created from those genes; during the translation of those proteins you get a significant amount of variation. We call those proteoforms, and they number in the millions. Why does that happen? Well, when RNA is being translated into proteins, variation can occur. It could be a SNP; it could be a post-translational modification like glycosylation or phosphorylation. All of those in aggregate are what get you to those millions of proteoforms. Now, why do we care about proteoforms, and why do we care about proteins?
You see here a really good chart from the Human Proteoform Project showing proteins, then those proteoforms, those variations, and how they relate to disease. It's often the modified version of the protein that matters. Take tau protein as an example, very well known in our industry today with everything happening in Alzheimer's. Tau is the parent protein, but what people are really targeting with a therapeutic or a diagnostic is phosphorylated tau, that PTM of the tau protein. You really have to go beyond the protein level and into these proteoforms and PTMs. If we take a step back for a minute: how big is this market? Where are we targeted today? Then we'll talk a little bit about how we think this market is going to unfold.
That will lead into what we're doing on the technology front to capitalize on those opportunities. This is very consistent with what we've historically said about the market. We're targeting that $20 billion research market today, focused on the three areas you see on the slide, really capitalizing on what our technology is very good at: protein identification, expression and quantification, and then those proteoforms and PTMs. So what are our customers doing today? Platinum is in the market, along with the kits that are used with it. What our customers do really falls into those three areas. So let's turn the pie chart on the prior slide into some specific applications that customers are running.
In the protein identification space, starting on the left-hand side, they might be characterizing an antibody, or running an immunoprecipitation or a Co-IP and looking for which proteins are there and in what relative abundance. For proteoforms, down the middle, a lot of the focus from our customers, as you'll see on the following slide, is on PTMs and isoforms, things that are either very difficult or impossible to do with a mass spec, so ours is a complementary analysis technique that customers use for this. On the right-hand side is something more recent that we've been talking about on our earnings calls: using protein barcodes to fuel and power research activities in academia, but largely in pharma and biotech. John will talk a little later today about a kit we're developing specifically for this.
John will go into that a little later in his talk. So what are customers doing? Let me give some examples, going from those applications down to very specific activities. This slide shows a range of customers. Most of these are academic centers; Liberate Bio, up in the left-hand corner, is a biotech. While we're not able to mention specific names, our larger pharma and biotech customers are largely folks doing something in the protein barcoding space, which is very popular in the in vivo screening aspect of drug development, so their work would look consistent with what you see there with Liberate Bio. But let's focus a little more on the proteoforms and PTMs, where you see Cedars-Sinai and the University of Virginia.
The folks at the University of Virginia just put out a paper, in preprint right now, using our technology for peptide isoforms, again complementary to mass spec, something mass spec can't do, and they're using us to resolve those. Similarly, Northwestern University recently presented data at a conference using us in the area of PTMs, again as a complement to their mass spec. That's a pretty common paradigm in those really large proteomic core labs. So how is the market going to change? At an event like this, some of it is about what's happening today, but it really has to be about what we see happening in the future and how we think it's going to unfold.
That way, when you hear what we're doing on the technology front, you can align it with the underlying thinking that went into it. At the end of the day, if you go to any genomics or proteomics conference, the concept of multi-omic analysis is everywhere. Everyone wants to combine their genomics with their RNA sequencing and their protein profiles, and more and more we're starting to hear about metabolomics again. That had gone cold for a while, and it's certainly coming back to the forefront.
But if that's going to be a routine tool, and by routine I mean not just the premier institutes capable of this type of analysis but a typical academic center doing this type of work, we're going to need very easy-to-use, very sophisticated analysis tools. Those tools are going to be based on artificial intelligence techniques, and you need a lot of data to train them. When you unpack the concept of multi-omic tools, that's what's going to drive the types of studies and activities we're going to see: large-scale screening studies, which we're starting to see published now, often using either mass spec or an affinity-based platform, really looking for the clinically relevant biomarkers in a given population of people.
We're going to have to go deeper than that, looking at the proteoforms and really characterizing, within those biomarkers, whether there are important PTMs or important changes to those proteins that matter. At some point, we're going to need a healthy baseline. This was always a big question in the genomics field, which started out looking at variants and SNPs of disease importance: how do you figure out what a healthy population looks like? I think that's more complex here, because not only has it not been defined well, but the proteome is very dynamic. So the work is going to need to be longitudinal in nature, which means a lot of these studies can't be one-time.
They're going to have to follow people over time and really see how the proteome, the protein profile, of those individuals changes. And finally, not to be understated, the AI-driven drug development space, while promising, needs data. John will talk a little in his presentation about how we apply AI to develop our recognizers for amino acids and our aminopeptidase enzymes. What we know from that work is that it requires a lot of data to make those tools powerful and highly predictive. So as more and more AI drug development happens, those teams are going to be looking for more data. In totality, all of this creates a lift in the amount of proteomics data that's going to need to be generated to support these activities.
If you're a customer, it's a pretty complex world today. Go into a large proteomics core lab like the ones at Northwestern or UVA: they've got a mass spec, they've got Quantum-Si, they've got gels, they've got all these different technologies. The reason they have all of that is there really isn't one platform that can address the full range of everything they want to do. What do I mean by that? One of the big questions is whether you're doing top-down or bottom-up: are you looking at intact proteins, or are you digesting them into peptides and analyzing the peptides, like you do with mass spec or with Quantum-Si? Then there's biased versus unbiased, which has a lot to do with the method you're using. Is it an affinity-based platform, which is going to be biased in nature?
Or is it sequencing, or a technology like ours, that's unbiased? You have to make decisions on throughput and cost. A high-end mass spec machine costs over $1 million; that's something a large core lab can do, but it's certainly not going to get extended out into tens of thousands of labs over time. Then there's breadth versus depth: do you want to get thousands of proteins, or do you want to go really deep and see amino acids and PTMs? These are the trade-offs customers will talk to you about if you go and sit down with them, and the challenges they face today. The big core labs address it with a lot of platforms, and this is what ends up happening in a core lab.
They will own multiple technologies to address the full scale of proteomic research. On the left-hand side, they might use mass spec, or one of those affinity arrays from a company like Olink or SomaLogic, to look at thousands of proteins per sample; the phrase is often "plasma proteomics." They want to go in, look, and see what's there and in what relative abundance. When they want to really deeply interrogate, or look for that needle in a haystack, they're often switching platforms to an ultra-sensitive technology; Alamar or Quanterix would come to mind for many investors. And then, as they dig deeper, they want to look at proteoforms and PTMs.
You could certainly use legacy technologies like Edman degradation, which obviously has significant scale limitations. That's where people really start to plug in and use our technology, next-generation protein sequencing. So you can see the dilemma for a customer: to truly cover the whole proteomic space, they could end up owning three or four different platforms. What are we doing about it, and what are we going to talk to you about today? We think, and we believe we'll be able to show you today with data, that we have the core technology that can do both top-down and bottom-up analysis. Today, with protein sequencing, we're working bottom-up: we digest those proteins into peptides.
But Brian will show you data today demonstrating that we can use this technology in these other types of applications as well. From an architecture perspective, Todd is going to take you through what we're doing on the instrument and consumable side. If we want to scale to these large studies, or do de novo sequencing, we're going to need billions of features, billions with a B. On our current architecture, that's not something we think is feasible. Todd and his team have done a lot of work in this area, and he'll share the architecture change we're making with that device on the left, called Proteus, and how we see it enabling scaling up into those large screening studies and into de novo sequencing.
Brian will also touch on another component: if we want to do all this testing, if we want to test tens of thousands to hundreds of thousands of samples over time as an industry, we're going to have to do it with speed and with efficiency. Today a sequencing run is close to 10 hours. Brian is going to take you through what he thinks the art of the possible is here, again with data to show what we believe is possible and how that could translate to our customers.
Finally, before I turn it over to Todd to take you through the platform architecture, I wanted to comment on one of the press releases you saw today: our distribution agreement with Avantor. People probably know this company, maybe by the VWR brand, which any of us in the lab space have probably purchased from at some point. We're really pleased with this. It gives us an opportunity to take advantage of their scale and their relationships across North America, in the U.S. and Canada. They're a complementary group to our own direct sales force, and very attractive to us given they have a life science specialist team that really focuses in this area.
With training and some onboarding to get them running, it's a way for us to achieve a level of scale that just isn't feasible to reach on our own in any reasonable period of time. We're really pleased to have been able to come to an agreement with them, and we're looking forward to enabling them as we move into 2025. With that, I want to turn it over to Todd, our Chief Technology Officer, to take us through the platform architecture. Todd.
Thank you, Jeff, and good morning, everyone. I'm really excited to deliver our technology update this morning. The team and I have been working hard on some of the things you're going to see here for a very long time, and it's really an honor to be able to present it to a wider audience. I thought I'd start by looking at Quantum-Si's core technologies, broadly, across all the things we've developed. We have a history of tackling complex problems across a wide range of disciplines and integrating them together into a seamless system. On the left here, I have our instrument and our chip. Both of those are amazing things in their own right. The instrument has a custom mode-locked laser that we developed for it.
And the chip is a custom CMOS image sensor that detects the fluorescence lifetime of the dyes we developed. Then we have our chemistry and our biomolecules in the middle: another amazing set of engineered molecules, and a truly novel protein sequencing assay that we invented and developed in-house. That novel assay generates novel data, so it required a very novel algorithm to process it. We have a ton of innovation on that side to interpret the data that comes from the sequencing reaction and turn it into those value-added applications on the back end. Here's a high-level overview of how the process works in our system today. The user takes their sample and prepares it in a process called library prep, and then they load that into our chip.
Then they introduce the sequencing reagents and put the chip into the instrument. I want to take a moment to point out that the instrument today contains no automation, so this is a manual workflow. It's a relatively simple workflow, but it involves manual steps for the user. The instrument then performs the sequencing reaction, which takes, as Jeff mentioned, about 10 hours today, and that data is processed offline to produce the results that our customers use. That prep process is what we call library prep. Proteins are fairly large biomolecules, and as Jeff said, our technology today is a bottom-up technology: we digest those proteins into short fragments called peptides and then immobilize those peptides at the bottom of tiny reaction chambers on our consumable device. The consumable today has about 2 million of those reaction chambers.
We also have a series of waveguides in that device that allow us to deliver laser excitation to all those reaction chambers, so we can interrogate what's happening in every one of those wells during the sequencing reaction. During that reaction, we have two classes of engineered molecules in solution that perform the sequencing assay. The first is a set of molecules we call recognizers, or binders: engineered proteins that have affinity for the N-terminal amino acid. We have six of them in the kit today, and those six have recognition activity for 13 of the amino acids. We also have custom-engineered aminopeptidases in solution, which we sometimes call cutters; those digest the proteins from the N-terminus down.
These two work in concert, at the same time, to expose the amino acids in order from the N-terminus and produce what you see on the right, which we call a kinetic signature: recognition activity for the amino acids we can detect, one after another. It's the signature that our software analyzes and aligns to a reference to determine which peptide was loaded into each well. So where do we go from here? As Jeff said, we really see the need to scale this technology much higher. As I mentioned, our chip today has only 2 million wells. We could develop a new image sensor with more wells, but that is quite expensive and takes a long time; those developments are not short. I know that from personal experience.
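To make that last step concrete, peptide calling from a kinetic signature can be pictured as matching each well's observed pulse statistics against a reference library and taking the best fit. The sketch below is a toy illustration only, not Quantum-Si's production algorithm: the peptide names, the durations, and the plain least-squares score are all assumptions made for the example.

```python
# Toy sketch of kinetic-signature matching (illustrative only; not the
# actual Quantum-Si pipeline). Each well yields an ordered list of
# per-residue pulse statistics; we score it against reference peptides
# whose expected signatures are known, and report the best match.

# Hypothetical reference: peptide name -> expected mean pulse duration (ms)
# for each successively exposed N-terminal residue its recognizer binds.
REFERENCE = {
    "peptide_A": [12.0, 45.0, 30.0],
    "peptide_B": [40.0, 10.0, 28.0],
}

def score(observed, expected):
    """Sum of squared differences between observed and expected durations."""
    return sum((o - e) ** 2 for o, e in zip(observed, expected))

def call_peptide(observed):
    """Return the reference peptide whose signature best fits this well."""
    return min(REFERENCE, key=lambda name: score(observed, REFERENCE[name]))

well_signature = [13.1, 44.2, 31.0]   # pulse durations observed in one well
print(call_peptide(well_signature))   # peptide_A
```

In practice the signatures are far richer than mean durations, but the shape of the problem, scoring each well against a reference and taking the best alignment, is the same.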
Instead, we're going to repartition the system and remove some of those complex elements from the consumable. This has several advantages. One is that the consumable becomes much simpler, easier to manufacture, and cheaper. Another is that we can leverage optical magnification to pack the wells closer together, which will increase the density and the scale and further drive down cost. And for the imaging components we separate out, we no longer have to roll our own; we can leverage what already exists in the industry, making the whole development faster and easier to implement. So how do we realize this? We have to address all the different parts of the system that will work together in this new architecture, and I lay them out here. We need the new chip.
That chip needs surface functionalization to work in our assay. We need the instrument that works with that chip, and we need both to work with the kits and the sequencing assay we have. So I'm going to address these one by one. For the chip, this is where the biggest change is, and it's been a big focus of my effort for the last several months to a year. The new device is a simple passive device. At its heart is a small glass die with four imaging regions, each containing about 20 million wells, so a single device comprising all four is about 80 million wells. That's a pretty big scale-up compared to our 2 million well device today. And we've heavily de-risked this.
We're using many of the same fabrication methods and materials we've used in the past. We have a lot of experience with them, and we know what to expect. Very importantly, because we've kept some of the materials the same, the new device is compatible with our existing surface chemistry. That matters because getting the surface chemistry right for these single-molecule assays was very challenging; we don't have to reinvent it, we get to reuse what we already have. That little die is made on a wafer, so we can build it at scale for low cost, something we're familiar with from our previous product. We've developed a process flow for it at a production foundry. We've built prototype wafers already and tested them in-house; they work and perform in our assay.
Compared to our existing consumable, this is a relatively simple process with a low-risk path to high-volume production. The process modules work, they produce the structure we designed, and, as I said, it performs well in our assay. And we do have a foundry partner for both the development and the manufacturing. You may have seen the press release about SkyWater this morning. I've been working very closely with them, both on the Platinum product in the past and now on this new product. We have a great working relationship, and I'm really looking forward to working with them in the future. So that's the chip. Now we need the instrument it works with, and of course we're pushing some of the functionality, that imaging responsibility, off into the instrument.
We're also going to take this opportunity to add automation to the instrument; remember, the current product doesn't have any automation built in. This will simplify the workflow, but it will also enable more complex workflows that can carry additional value, so we're putting more functionality into the instrument. The good news is that all of that technology exists today and can be leveraged, and there are good partners to work with. You also saw the press release about Planet Innovation. We've been working with them on Platinum Pro, and I think they're going to be a great partner for the Proteus instrument.
On the optics side, we also get to leverage the significant investments that have been made in high-performance optics over the last decade or so in next-generation sequencing. For the library prep and sequencing chemistry, the good news is that it doesn't really have to change; the existing chemistries we have are directly portable to this new chip. The new system is not going to use lifetime anymore; we're going to discriminate the dyes based on color, so there will be some new dye development to do. However, this is the team that made a set of dyes we could discriminate by lifetime; making dyes we can discriminate by color is well within our wheelhouse. In fact, we've already been working on this.
We've already demonstrated, in a proof-of-principle experiment, distinguishing four different dyes from each other in color space on our simple glass consumable: good proof that we can make this work and don't need to use lifetime anymore. The advantage is that we get to leverage off-the-shelf high-performance cameras instead of having to roll our own. And finally, we need the software to tie it all together. Similarly to the reagents, not much of the existing software has to change; everything we have on the back end stays the same.
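The color-space discrimination I just described can be pictured as nearest-centroid classification over normalized channel intensities. The sketch below is purely illustrative: the dye names, the two-channel model, and every ratio in it are assumptions for the example, not our actual optical design.

```python
# Minimal sketch of color-based dye discrimination (hypothetical values;
# the real system uses calibrated spectral channels). Each dye has a
# characteristic intensity ratio across two camera channels; an observed
# emission is assigned to the nearest dye centroid in that color space.
DYE_CENTROIDS = {
    "dye1": (0.9, 0.1),
    "dye2": (0.6, 0.4),
    "dye3": (0.4, 0.6),
    "dye4": (0.1, 0.9),
}

def classify_dye(ch1, ch2):
    """Assign an observed (channel1, channel2) signal to the closest dye."""
    total = ch1 + ch2
    point = (ch1 / total, ch2 / total)   # normalize out overall brightness
    return min(
        DYE_CENTROIDS,
        key=lambda d: (DYE_CENTROIDS[d][0] - point[0]) ** 2
                      + (DYE_CENTROIDS[d][1] - point[1]) ** 2,
    )

print(classify_dye(850, 150))   # dye1
```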
We have the same pulse calling, the same identification of the regions of interest where amino acids interact with the recognizers, the same alignment steps, and the same downstream protein inference and other applications. All of that stays the same. On our current chip, pixels have a one-to-one correspondence with the wells, so when we measure the signal from our chip, we know which well it came from. In the new system, that one-to-one correspondence goes away: we'll have images from a camera from which we have to reconstruct the signal for every well. This is a fairly well-known problem. We've demonstrated it in offline processing, and doing it in real time is well within the capability of state-of-the-art hardware that exists today.
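That well-reconstruction problem can be illustrated with a toy version: given a calibrated geometric mapping from the well grid into camera pixel coordinates, each well's signal is recovered by integrating a small pixel neighborhood around its projected center. The simple affine model and all values below are assumptions for illustration, not the system's actual registration pipeline.

```python
# Illustrative sketch of recovering per-well signals from a camera frame
# when pixels no longer map one-to-one onto wells. Assumes a calibrated
# 2x3 affine transform from well grid coordinates to pixel coordinates;
# a real system would estimate this registration from fiducial marks.
def well_signal(frame, well_row, well_col, transform, radius=1):
    """Sum pixel intensities in a small window around the well's center."""
    a, b, tx, c, d, ty = transform          # affine parameters
    px = int(round(a * well_col + b * well_row + tx))
    py = int(round(c * well_col + d * well_row + ty))
    total = 0
    for y in range(py - radius, py + radius + 1):
        for x in range(px - radius, px + radius + 1):
            if 0 <= y < len(frame) and 0 <= x < len(frame[0]):
                total += frame[y][x]
    return total

# Usage: a 5x5 frame with one bright spot; well (1, 1) projects to pixel
# (2, 2) under a 2x magnification with no rotation or offset.
frame = [[0] * 5 for _ in range(5)]
frame[2][2] = 10
print(well_signal(frame, 1, 1, (2, 0, 0, 0, 2, 0)))   # 10
```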
So we know we have a path to a working product. This all leads to my privilege of showing you our new instrument roadmap. The next instrument along that roadmap is a massive step up from where we are today: over two orders of magnitude increase in the throughput of the system. It's a big step, but it's really just a step. As Jeff said, one of the advantages of this architecture is that it puts us on a path to scaling to billions of reads, and that really unlocks shotgun proteomics of complex samples and puts us on a path to de novo sequencing in the future. This has been a lot of effort by the team, and I'm really grateful for the opportunity to share it with you today. Thank you for your time. I'll turn it back over to Jeff, who I think is going to introduce Brian.
Yep, thank you, Todd. Maybe before I bring up Brian, some context. We often share milestones externally around kits, a library prep kit, a new sequencing kit, the barcoding kit we'll talk about, to give investors visibility into what we're developing and what's going to launch. Often you'll hear me or our CFO, Jeff Keyes, add, "and other technology development initiatives." What Todd just showed you is what we've been able to accomplish in basically under a year; that's the speed at which Todd and his team have been able to work. That's what we mean when we talk about technology development initiatives.
To build off that, technology development initiatives aren't just hardware and consumables; we're also really pressing on the application space in proteomics. For that, I'll let our head of research, Brian Reed, come up and tell you a bit about what he and his team have been working on.
Thank you, Jeff. Today I want to talk about innovations from our R&D team at the level of the biochemistry that actually goes on the chip, and how, when we combine that with the really ambitious platform roadmap, it enables what we think is the most complete and advanced set of discovery applications in proteomics. I'll touch on four main areas. First is our path to complete coverage of the proteome.
Second is how we can use our real-time kinetic output to achieve ultra-sensitive detection of post-translational modifications. Third are our plans toward sequencing high-complexity biological samples. And finally, a really interesting and powerful application that goes beyond sequencing to looking at panels of up to thousands of proteins in a top-down fashion. To start with proteome coverage, I want to reiterate part of the sequencing process that Todd went over. When we're sequencing proteins, we use amino acid recognizers that we've developed. They bind on and off rapidly to N-terminal amino acids, and each recognizer can recognize between one and three different types of the 20 amino acids. That rapid on-and-off binding generates a pulsing pattern, which you can see in the image; for each amino acid, we're acquiring tens to hundreds of these pulsing events.
That is an extremely information-rich output. Every time we sequence a new protein, we're gaining enormous insight into the biochemistry of our recognizers. At this point, we have acquired what is probably the widest set of biochemical insights, from all that single-molecule data, into the function of our recognizers, and we use it, along with protein evolution and engineering methods, to develop recognizers that expand our proteome coverage. Our Science paper was published at the end of 2022, and in that paper we had three recognizers.
Since then, we've not only engineered new proteins using all of this rich information from our sequencing output, but we've commercialized a number of new recognizers that have expanded our proteome coverage to the point where we're seeing 13 out of the 20 amino acids. And we're already on track with new recognizers that will be released in the next version of the sequencing kit. So we're on a very rapid pace of development toward complete coverage of the amino acids that make up the proteome. This is important for protein sequencing at a fundamental level, but it also enables applications that are very difficult with current technology, namely sequencing proteins without a reference, that is, de novo sequencing: things like sequencing antibodies or cancer neoantigens.
Fundamentally, the technology we've developed gives unprecedented insight into the biochemistry of binding, and binding is the most fundamental interaction in biology. This ability to see proteins binding to biomolecules at the single-molecule level, in real time, is extremely powerful because of the nature of the kinetic signatures we acquire during protein sequencing, and it enables really interesting and powerful applications in other areas of proteomics. So how can we use that type of real-time output for ultra-sensitive PTM detection? Post-translational modifications, or PTMs, are chemical modifications of amino acids that, as Jeff pointed out, are extremely important in disease. There are over 400 types of PTMs; it's a very complicated space, and it's very difficult for researchers with current technology to understand which PTMs are present in their samples. Phosphorylation is by far the most abundant post-translational modification.
It's at the center of a number of diseases; in cancer, for example, cascades of phosphorylation control cell growth. What we've demonstrated is that we can take commercially available affinity reagents, things like antibodies, fluorescently label them, and, just like we do with our amino acid recognizers, use them to recognize post-translational modifications like phosphorylation in real time. When we immobilize a peptide that contains a PTM on the chip, we can add an antibody and actually see it bind on and off rapidly in real time. And instead of seeing just a single event, we get a beautiful pattern of pulsing, just as with our recognizers, which is highly informative. Not only that, but we can use these reagents to recognize PTMs anywhere in the peptide; it's not restricted to the N-terminus.
These antibodies and other affinity reagents will bind wherever the PTM is located. So what does this look like as an assay on the chip? It's a very simple two-step process. We generate a library of peptides and immobilize it on the chip. In step one, we perform PTM recognition with labeled affinity reagents, just a 30-minute process during which we observe on-and-off binding of the affinity reagent. Then we wash that out; we're done with the PTM recognition step. In step two, we proceed with normal sequencing. So the first step tells you which peptides on the chip contain a given post-translational modification, and the second step tells you which peptide you were sequencing.
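In data terms, that two-step readout amounts to a join between two per-well observations: did the labeled affinity reagent pulse in this well (step one), and which peptide did sequencing identify (step two)? A toy sketch, with hypothetical well IDs and peptide names; the real analysis is of course richer than this.

```python
# Toy sketch of combining the two assay steps per well (illustrative
# only). Step 1 flags wells where a labeled affinity reagent pulsed on
# and off (PTM present); step 2 identifies the peptide by sequencing.
# Joining the two yields a PTM call for every sequenced peptide.
def combine_steps(ptm_hits, peptide_calls):
    """Map each sequenced well to (peptide, has_ptm)."""
    return {
        well: (peptide, well in ptm_hits)
        for well, peptide in peptide_calls.items()
    }

ptm_hits = {0, 2}                                   # wells with antibody pulsing
peptide_calls = {0: "pepX", 1: "pepX", 2: "pepY"}   # from the sequencing step
print(combine_steps(ptm_hits, peptide_calls))
```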
One of the powerful aspects of this approach is that you can multiplex different PTM recognizers and actually combine the PTM recognition segment of the assay with the kinetic signatures of the sequencing process. That enables you to analyze the data in a way that lets you pinpoint PTMs that are in complex configurations on a single peptide, and that's an important application and really one of the most difficult things to do with current technology when it comes to PTMs. Another thing about this approach is that it's extremely sensitive to PTM stoichiometry. Researchers looking at their proteins of interest want to know what fraction of their protein is phosphorylated or has some other PTM. Again, that's difficult information to get with current technology.
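Because each single-molecule well yields a binary pulsing call, stoichiometry falls out as a simple fraction. Here is a toy Python illustration, where the function name and the 1-in-1,000 counts are invented for the example rather than taken from the platform's software:

```python
# Illustrative only: PTM stoichiometry from single-molecule wells is
# just the fraction of wells for a peptide that show antibody pulsing.

def ptm_stoichiometry(calls):
    """calls: list of booleans, one pulsing call per well."""
    return sum(calls) / len(calls)

# Hypothetical case: 1 modified molecule among 1,000 copies on chip.
calls = [True] + [False] * 999
print(ptm_stoichiometry(calls))  # 0.001
```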
But with this, because of the clarity of that on-and-off pattern, which is so simple for the software to recognize, we can really pinpoint which peptides on the chip contain a PTM. In this example, we're demonstrating with a human protein called CDNF that we can identify just one out of a thousand peptide molecules on the chip, where one contains a phosphotyrosine and the rest are unmodified. So this is extremely sensitive detection of stoichiometry. Now, another very powerful aspect of this approach is that it can be applied in a bottom-up or top-down configuration, and in this slide, I'm demonstrating this with the human tau protein, which is a really important biomarker in Alzheimer's disease. At the top of the slide is what it looks like in a bottom-up approach.
So the tau protein is digested, the peptides are loaded on the chip, and we carry out that two-step process. In this example, we're using an affinity reagent that recognizes phosphothreonine, another really important PTM. So we have phosphothreonine detection in step one and then sequencing of the peptide. So we know what peptide it is, and just by looking at the sequence, you can see where that PTM is located. But with the real-time approach, we can look at proteins in another way, which is top-down, where you take the full-length protein and immobilize it on the chip. Then, with the same types of affinity reagents, you can probe what proteoforms of that protein are present. Those can be different constellations of post-translational modifications, splice variants, and other things that constitute different proteoform versions of these important proteins.
So next, I want to talk about two areas that, combined with the advances in our platform, enable some extraordinary things in proteomics. The first is complex samples. Researchers, and certainly our customers, are interested in biological samples. And biological samples can contain anywhere from a mixture of tens of different proteins to many thousands of proteins, and they can span a wide dynamic range. What's important for researchers in many applications is that they get an unbiased view of the range of proteins in their samples. It is very challenging to get consistent information, and it's not always even feasible to run these types of assays in a typical lab. Sequencing is uniquely powerful when it comes to complex samples because it gives you that unbiased information. It's not limited to predefined content.
And so it can give you information on the abundance of proteins and proteoforms that other methods are unable to access. With the Proteus platform architecture, combined with advances in the sequencing, we are on a path to enable sequencing of these types of complex samples. So as we scale to Proteus, we're going to see shotgun sequencing of complex samples that contain thousands of proteins. And not only that, but with barcoding approaches that we've developed, along with flow cell designs, we'll be able to run multiple samples simultaneously. And we'll combine those with innovative methods to fractionate proteins. This is an important aspect of the sample prep that happens before you load your proteins on the chip. We can fractionate proteins, reduce sample complexity, and therefore enable researchers to get an even deeper look into their biological samples.
And then, as Jeff touched on, what I'm really excited to present today is that through some, I think, really clever engineering of the chemistry that goes on the chip, we've developed real advances in how fast we can perform these sequencing experiments. With our standard chemistry that customers are using now, the typical workflow is 10 hours. And you can see, with an example peptide here, the sequencing process from beginning to end, where it takes almost the whole 10 hours to get from the beginning to the end for this particular peptide. With the fast chemistry, and this is just a version one that we've developed, we have sped up that process to where we can now get the same information in just 90 minutes.
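As a rough back-of-the-envelope on that speed-up, assuming back-to-back runs and ignoring setup time (an idealization on my part, not a claim from the presentation), the 10-hour standard chemistry versus the 90-minute fast chemistry looks like:

```python
# Sequential sequencing passes that fit in a 24-hour day at the two
# quoted run times, assuming back-to-back runs with no setup time.
run_times_h = {"standard": 10.0, "fast_v1": 1.5}
passes_per_day = {name: int(24 // t) for name, t in run_times_h.items()}
print(passes_per_day)  # {'standard': 2, 'fast_v1': 16}
```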
We're very confident that with just a bit of further development, we can get this down to less than 30 minutes for some applications. That has some really interesting and important implications for the capabilities of the system. One, by iteratively sequencing biological samples, we can get deeper and deeper information on the proteins in those samples. Another way to use this capability would be in clinical applications to get a really fast sample-to-answer time. The second area of innovation that ties together with the platform developments that I want to go over is essentially doing single-molecule sandwich assays on the chip to look at panels of proteins. Detection of fixed protein panels is an increasingly important area of proteomics that researchers are interested in.
And using, again, affinity reagents like antibodies, we've demonstrated that we can perform ultra-sensitive detection of protein biomarkers in real time using an approach where an antibody is immobilized on the chip. Then, using a process that we invented called dye cycling, which leverages our existing kits, we can convert the formation of an immune complex on the chip into a real-time kinetic pulsing pattern. Using this approach, we can achieve very sensitive detection of biomarkers in the context of complex samples. Here's just a demonstration with three different proteins, where we're able to detect 0.1-1 pg/mL of these proteins in serum on the chip. And we are on a path, with some further development, to get this down to just 10 fg/mL, which is extremely sensitive for these types of biomarker assays.
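As a quick unit check on those sensitivity figures (1 pg = 1,000 fg), moving from 0.1-1 pg/mL down to the 10 fg/mL target is a further 10-100x improvement:

```python
# Unit arithmetic for the quoted detection limits.
FG_PER_PG = 1_000
current_pg_per_ml = (0.1, 1.0)
current_fg_per_ml = tuple(c * FG_PER_PG for c in current_pg_per_ml)
target_fg_per_ml = 10
improvement = tuple(round(c / target_fg_per_ml) for c in current_fg_per_ml)
print(improvement)  # (10, 100)
```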
The real-time nature of the output that we get from the chip enables us to do this in a multiplex format. So we can immobilize multiple antibodies on the chip and look for a panel of biomarker proteins. Here, I'm demonstrating this with human biomarker proteins IL-6 and TNF-alpha. Another thing about this approach is that we can combine this with PTM recognizers so that you can not only have this sort of single-molecule sandwich-based assay that quantifies a panel of protein biomarkers, but you can also get that top-down information to look at proteoforms. With the sensitivity on the Platinum instrument, this is going to be suitable for panels of up to about 10 proteins. But the panel size scales with the platform. And so we will be able to perform multiplex assays eventually with thousands of biomarker proteins.
Another thing that is worth emphasizing, because it's pretty extraordinary, is that the assay is done in serum on the chip. So the amount of sample prep that the customer puts in upfront is very limited. Essentially, a complex mix of proteins like serum is added to the chip, and the detection is sensitive enough to get these real-time pulsing events from the biomarker proteins without interference from the other proteins in the mix. And it's fast. So we will be able to deliver a sample-to-answer, in particular because of how simple the sample prep is, in about a two-hour time frame. So with that, I will turn it back to Jeff. Thank you.
Thanks, Brian. Maybe before we bring up John: again, if you asked Brian's team what their core mission is every day, it's the evolution of those amino acid recognizers and the aminopeptidases, right? That's the core focus of that group when we think about how much we're investing in research versus how much we're investing in development. And John will come up here in a minute, and his team then takes those discoveries and developments and actually turns them into products that we launch. But you can see, behind the scenes, a really innovative team here pushing the envelope, pushing the edges of what the technology is capable of.
We're sharing all this today not because we're necessarily going to try to launch products that do every single one of these applications, but because we wanted to demonstrate that breadth of capabilities as we think about opportunities to partner with other groups in the future to bring these onto the platform. So this is really the beginning stages of that work, but again, it shows the efficiency, effectiveness, and innovation of our R&D organization. But it all has to become a product that customers use and that performs reliably. And with that, I'll bring up John, our Chief Product Officer, who will tell you how we're going to turn what Todd and Brian talked about into a roadmap that customers can use every day in their lab.
All right. Thank you for the introduction, Jeff. So I'm excited to talk to you today about our platform roadmap and, obviously, bringing all the innovations that Todd and Brian were discussing to the market. Let me just show you a view of what we've been doing over the past year or two. We've been bringing all these innovations to market so that we can demonstrate the utility of next-generation protein sequencing and improve its performance in our customers' hands. Just this year alone, we've done two sequencing kit iterations, the V2 and V3 sequencing kits. Those include the binder development that Brian was referencing: new amino acid coverage, better performance, and higher throughput out of the system. In this quarter, we're also going to be releasing two more kits.
One is a barcoding application kit, which I'll go through in my slides, and then an improvement to the library preparation kit as well. We expect that innovation to continue into 2025. We have another application kit, Pro Mode, which gets to a lot of the binding assay work that Brian was referencing. We're trying to get that capability into our customers' hands, so that Pro Mode kit will become available on the Platinum Pro system, and then we have another iteration of the sequencing kit V4 that's planned with, again, additional amino acid coverage in that kit, and we're going to have another library preparation kit. We're very focused on getting protein input amounts down. We've achieved some of that with the library prep V2 kit, but we're going to have another iteration on that in 2025 as well.
So a lot of activity on kit development, obviously trying to bring all this innovation, keep the pace of improvement to the system coming to our customers. We've made a lot of advancements as well in the software space. So that's that bottom row. We've made new analysis workflows on protein inference. We have an AI-generated kinetic database that's critical to the analysis that we're doing. And I'll highlight some of that work as well. And then finally, on the platform side, we have the Platinum Pro and Proteus instruments that we're going to be bringing to market in 2025 and 2026. So really exciting roadmap. There's obviously a lot of different areas going on, and I'll try to go through each one of those a little bit. I won't dwell on this too much.
I think Todd and Brian both covered it, but just to remind you of the sequencing analysis software that's at the heart of all of this, processing all of this data, let me really focus on what we can do with it and what applications we can derive from that rich information. Everything starts with our N-terminal amino acid recognizers. The signal that we collect from them gives that picture on the bottom left of those traces, where we can assign which one of the binders is interacting with the peptide immobilized on the semiconductor surface. The colors correspond to which binder it is. We get that from the fluorescence intensity and lifetime measurements that we're taking on the system. Then we finally get to amino acid identification using the kinetic information.
That kinetic signature, the actual details of the pulsing give us which one of the amino acids we're looking at. We get that kinetic signature, which then allows us to align to reference and do all the downstream analysis. What I wanted to highlight is what can you do with that? We have these software workflows that we've been deploying for our customers. There's two that I'm showing here. One is protein inference. That kinetic signature enables the inference of a sample protein or multiple proteins in a sample that's run on the Platinum system. This is an example showing that IL-4 is correctly inferred from sequencing on the Platinum system. It's the highest likelihood protein present in the sample. We can do that protein inference with analysis workflows in the kinetic signature. We've also been working on a variant calling workflow called ProteoView.
So this is really leveraging the power of next-generation protein sequencing in terms of getting down to single amino acid resolution, studying variants and post-translational modifications using that kinetic signature. This is an example we've shown where we've taken two peptides and mixed them together in a 10-to-1 ratio. There's a variant at the sixth position in the peptide, and the software is able to utilize the kinetic signature to differentiate between the two variants present in the sample. So again, that kinetic signature is extremely powerful, going from high-level protein identification information all the way down to single amino acid resolution for deep interrogation of proteins. So I'll shift now to a little bit of the work we've been doing on artificial intelligence. We've integrated this heavily into R&D and into the products.
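The variant-calling idea described above, differentiating two co-mixed peptide variants by their kinetic signatures, can be caricatured in a few lines of Python. This is a hypothetical nearest-reference classifier with invented pulse-width values, not ProteoView itself:

```python
# Toy kinetic variant caller: assign each single-molecule read to the
# variant whose reference pulse width is closest to the observed one.
# Reference widths and read values are invented for illustration.

REF_PULSE_WIDTH = {"variant_A": 0.8, "variant_B": 2.0}  # seconds

def call_variant(observed_width):
    return min(REF_PULSE_WIDTH,
               key=lambda v: abs(REF_PULSE_WIDTH[v] - observed_width))

# Simulated 10-to-1 mixture of the two variants:
reads = [0.7, 0.9, 0.85, 0.75, 0.8, 0.78, 0.82, 0.9, 0.81, 0.79, 2.1]
counts = {}
for w in reads:
    v = call_variant(w)
    counts[v] = counts.get(v, 0) + 1
print(counts)  # {'variant_A': 10, 'variant_B': 1}
```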
There's a lot of advancements that have been enabled. All that innovation roadmap has really been possible due to the adoption of a lot of these techniques within the organization and within R&D. As Brian was showing, we've made a lot of advancements to the binders. There's been binder development that's been occurring over every one of the sequencing kit iterations. We now have six different binders recognizing 13 different amino acids on the system. What that really enables is these traces like we're showing at the bottom where we're able to sequence all the way through a peptide. We get very high amino acid diversity. We see all the amino acids, right? That gives us that rich information we can then go and use in analysis workflows. Key to all of this is the binder development where we have been continually iterating.
We'll have new binders coming out in subsequent versions of the sequencing kit. Behind all that, we have been utilizing AI techniques for recognizer design, which allows us to work at the backbone level as well as make detailed amino acid changes. We're using those AI techniques in the binder design as well as for orthogonal verification. When we see improvements, we can then use those methods to study them, get detailed information, and understand why those improvements are occurring. We have been utilizing NVIDIA GPUs on premises and in the cloud to execute that work. Just this morning, we also announced additional work we're going to be doing with NVIDIA on utilizing GPUs on the Proteus system for the downstream signal and image processing on that platform.
So very excited about that collaboration with them and the continued utilization of NVIDIA GPUs in our technology development. The other area where we've made significant advancements with artificial intelligence is in our pulse width database. So one of the aspects of the analysis workflow is that we need a kinetic database to use when we measure and get all that information from the system. And so we've undergone extensive training set generation. So we're sequencing more and more proteins, more and more peptides on the platform internally. That generates a large training set of kinetic signatures. We're then using that training set of kinetic signatures to train an AI-generated model to predict the 4.6 million pulse widths that we need for our analysis workflow. With all of that, we've been able to get significantly improved performance out of the system, higher accuracy, more throughput, better protein detection.
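Conceptually, the pulse-width database works like any learned lookup: train a model on measured kinetic signatures, then predict pulse widths for contexts you haven't measured. The sketch below substitutes a trivial per-recognizer average for the real AI model; every name and number in it is invented for illustration:

```python
# Stand-in for an AI pulse-width predictor: learn the mean pulse
# width per recognizer from a small training set of (recognizer,
# peptide context, pulse width) measurements, then predict from it.

from collections import defaultdict

def fit(training):
    """training: list of (recognizer, context, pulse_width) tuples."""
    sums = defaultdict(lambda: [0.0, 0])
    for rec, _ctx, pw in training:
        sums[rec][0] += pw
        sums[rec][1] += 1
    return {rec: s / n for rec, (s, n) in sums.items()}

def predict(model, recognizer):
    return model[recognizer]

train = [("recF", "LFA", 1.2), ("recF", "GFS", 1.0), ("recY", "AYD", 2.4)]
model = fit(train)
print(round(predict(model, "recF"), 2))  # 1.1
```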
So this has been a really, within the past year, an avenue we've pursued in terms of getting really good predictions for the pulse widths, which are critical to the performance of the system. So I'll shift gears now from software and analysis workflows to the platform and to Platinum Pro and highlight some of the applications and kits that we're going to be bringing as part of the release of this platform. So before I get to Platinum Pro, though, let me stop on Platinum. Obviously, really remarkable instrument in terms of its capability. I mean, this is the instrument that brought next-generation protein sequencing to market, tremendous capability in terms of what it did in a small desktop footprint, low capital investment, really, really powerful platform.
Through the course of having that out in the market over the past two years, we've seen opportunities to improve that platform, one in particular around the workflow, the user interface, and the user experience; we see ways we could improve that, which I'll show in Platinum Pro. Also, as Brian was mentioning, there's a lot of opportunity to advance beyond just protein and peptide sequencing with this device. It's a single-molecule detection device, and there's a lot we can do with that, but currently the functionality of Platinum is limited to protein and peptide sequencing. And finally, there were certain segments of the market that wanted to utilize Platinum, particularly in government and pharma, that weren't able to leverage our very cloud-centric analysis workflow.
We've been able to enable them by putting another local analysis server alongside Platinum, but obviously, we'd like to have that in a more integrated fashion for them so there's not additional hardware that they have to leverage. So we took all of that feedback and that work on Platinum and basically leveraged that into the advancements that we're putting into Platinum Pro. So here's a picture of the new instrument. Obviously, completely changed the screen, the user interface, and I'll go through some of the workflow on the next slide, but reduced hands-on time. We expect this to be an overall better experience for users working and setting up sequencing runs on Platinum Pro. Pro Mode is the application which I'll also highlight, which is how we're enabling more of the single molecule detection device capability of the platform, moving beyond just peptide and protein sequencing.
You can still run analysis via the cloud in the same way as before, but now analysis can also be executed onboard, directly on Platinum Pro, without the need for an additional local server. So overall, these are really nice improvements that address a lot of the opportunities we saw with the Platinum instrument. I just wanted to show a couple of the screens and what the user journey will look like on Platinum Pro. We spent a fair amount of time redesigning this, looking at opportunities to help customers with setting up sequencing runs. I'll just start with the first one on the left there, the landing page. This is where customers will go to start a run and get the sequencing run going.
There are again two different modes on the platform now, the sequencing mode and the Pro Mode, and they can select between them. Platinum Pro also has the capability to run either the whole chip, so the whole 2M chip can be running one sample, or a split-chip mode, so you can run two samples simultaneously in one sequencing run. We've put a lot of pictures and improvements into the workflow, based on feedback we had received about setting up the run. So for chip insertion, as we were saying before, this is a manual workflow, and there are steps where customers have to add reagents. We tried to clarify all that for them and give them feedback on the timing of certain steps during the sequencing workflow. So this is a completely redefined workflow on Platinum Pro.
Let me get into two of the applications that we're going to be releasing on the Platinum Pro system, the first one being this Pro Mode. The idea is that I can take a protein of interest, and this relates to a lot of the work that Brian was showing, say, for example, phosphorylation studies. We're going to be releasing a dye labeling kit, with dyes that are compatible with the Platinum Pro system. You can label your protein of interest with that dye labeling kit, put it on the system, and study detailed binding kinetics against the peptides immobilized on that chip. Again, this is the kit and the platform that are going to enable detection of single-molecule protein binding kinetics. We're excited to see how customers use this.
I think there's opportunity here to expand beyond just immobilized peptides on a surface into other biochemical entities that one would want to study single molecule binding for. So that's the direction we're moving with Pro Mode, which is available on Platinum Pro. And then, as we were mentioning, Jeff was mentioning, there's been a lot of interest in adoption that we've seen in protein barcodes. So the concept being that you can take your protein of interest in an engineered protein setting. You can put a barcode, a unique barcode onto that protein of interest. You can then isolate that protein, cleave off that barcode, and instead of having to sequence the whole protein, right, you can just sequence that barcode on the Platinum Pro system. So the greatest adoption we've seen of this is definitely in pharma, in both in vivo and in vitro studies.
In vivo, the concept is, as in this example showing messenger RNA vaccine development: the messenger RNA is what's varying, so they put a unique barcode into that messenger RNA, which gets expressed in an animal model. They isolate the protein, cleave off the barcode, and now they can measure expression levels of different messenger RNA variants in an in vivo system. In vitro, we've seen customers utilizing it for lipid nanoparticle delivery systems, where the lipid nanoparticle is what's varying. They put a unique barcode into that LNP, introduce it in an in vitro setting, isolate the protein from the cell, cleave off the barcode, and look at how much was expressed in that cell. So there are great opportunities, particularly in pharma, for measuring protein expression levels.
But I think there's broad applicability here for any kind of engineered-protein environment where you're looking at how much protein is expressed in the system. So this kit is also coming out this quarter; it's a very near-term release that we're working on. And then the other kit that we have coming out this quarter is a V2 library preparation kit. We've seen opportunities for improvement in library preparation as well, simplifying that workflow. There were certain steps, like buffer exchange, that were found to be not always necessary, and we've been able to reduce those and make it easier for customers to go through the prep. We've sequenced large numbers of proteins with this improved kit. Internally, we've gotten the performance up to 80% of those proteins successfully inferred against the whole human proteome. And another area we've been working on is reducing the protein input.
We've achieved comparable performance with the new V2 kit with a fivefold reduction in protein input amount. And as I said before, we're going to continue advancing on reducing the protein input amount in subsequent kit updates in 2025. And then I'll just end on the Proteus system. Obviously, we've covered the opportunity there in Todd and Brian's presentations, so I'll show a little bit of how we're thinking about this from the product side. This is a switch from a semiconductor to an optical architecture, which will give us big advancements in throughput scalability. The system will have liquid handling automation, so all of the manual workflow steps that are currently on Platinum and Platinum Pro will be automated. Customers are going to load the sample, pattern array, and reagents, hit go, and the sequencing run will be executed.
We expect to get an order-of-magnitude throughput increase per sample at launch. That picture just shows what we're planning for the actual pattern array; the reagents are all just loaded into the system, and you hit go. The other advancement is that you can run one or two samples simultaneously on the system, with two of those pattern arrays sequencing simultaneously. There's a picture there of the reagent cartridges we're designing: tips, samples, and reagents are all loaded into that cartridge from a pack, placed onto the device, loaded in on a tray, and drawn into the sequencer.
As Todd was saying, each one of those pattern arrays will be able to sequence four different samples, so over the course of one sequencing run, up to eight samples can be sequenced. So we're very excited about Proteus: the improved throughput, the workflow automation, and the increased number of samples. We're excited to see what customers are going to do with this, and I'll hand it back over to Jeff to round out on the proteomics lab of the future.
All right. Well, before I jump into my final presentation, I hope one of the things you take away from hearing Todd and Brian and John present is just the caliber of scientific and technical leadership in the company. I think we're innovating across every single aspect of the proteomics workflow, and we're not a company with 5,000 or 10,000 employees, with 1,500 or 2,000 of those in R&D, right? We're talking about a much smaller organization than that and a very complex problem. You can imagine the expertise you have to have to do everything that we're doing, and if you're wondering why we've been able to execute so efficiently on our roadmap over the last year, I think you see the reason why: the people sitting behind these gentlemen are equally capable and creative.
I think you see that showing up in the roadmap. So what do we want you to take away today? This last presentation isn't very long; it's really focused on what we want you to have taken away from today and what matters, because we shared a lot of information. I think the number one thing is that what we presented today, while much of this information is new, is a wide range of applications and methods applied to our core tech. What we're working on is heavily de-risked compared to many of the other proteomics companies.
And the main reason for that is, as Todd talked about in his presentation, and you can see some of it even in John's: you can see the rework of the UI and UX experience for a customer, and the thought about how that might apply to the next platform so we won't have to redo it again. So really, when you're building upon a commercially available technology, when you're leveraging many pieces that work, your risk when you add maybe one new thing or swap something out is much lower as you think about timelines and getting to market. And I'd be glad to put our protein enzyme engineering group, together with the folks doing the computational side you saw between Brian's and John's presentations, up against any group in our industry. We're operating at scale.
We have been operating at scale, and we have very high success rates of the things we engineer turning into a kit and getting into the market. So there's no reason to believe we won't be able to maintain those success rates and continue to scale the coverage of the proteome. The other piece that often gets lost in technology-oriented days like this is how you actually bring it to market. You've got to be able to manufacture it. You have to be able to produce that chip. How do you QC that chip? How do you get those instruments from the manufacturer to the customer and have them work, and work reliably? We often talk about getting to the point of launch as the beginning of the marathon; then you actually have to deliver it to the market.
Again, a lot of what you saw today leverages technologies, fabrication methods, and partners that we've worked with, in terms of SkyWater and Planet Innovation. These are the folks who today are making Platinum and Platinum Pro with us, and they're just going to move along with us as we bring out the next thing. This is not a whole new infrastructure; this is really us building upon what we have and deepening those partnerships as we move forward. I think partnerships are a key part of our story and our strategy, right? It's not really feasible to take on a problem like this in proteomics by yourself.
All of us that follow the industry see that even the largest companies in our industry tend to acquire businesses to put together this type of strategy and go execute it. We don't think we can do it all ourselves. We think there are a lot of very capable people out there across both technology development and manufacturing, and obviously commercially with Avantor. I think I mentioned it between Brian's and John's talks: some of these other capabilities might very well lead to another logo up here. We don't have to bring everything to market that we showed today. We wanted to prove out the applicability of the technology, prove out how broadly capable it is, and have that fundamental platform that can do it.
And whether or not every single kit comes from us or there are other partners who maybe bring kits onto our platform so that user gets the full experience, we're very open to those things. We'll continue to look at those opportunities and would welcome those types of conversations. So why do we think we're best positioned to bring in this new paradigm in proteomics? One is the architecture you saw today, right? What Todd presented to you in terms of the consumable, in terms of the instrumentation, we're now on a path where we can scale that output. That output at launch, as Todd described, that initial consumable will have those four spots, each spot at around 20 million, so about 80 million features on a single consumable compared to 2 million today. But there's no reason that we can't continue to scale that output.
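The consumable arithmetic quoted there checks out as a roughly 40x jump in features per consumable:

```python
# Four spots at ~20 million features each on the launch consumable,
# versus the 2 million-feature chip today.
spots, features_per_spot = 4, 20_000_000
proteus_features = spots * features_per_spot
platinum_features = 2_000_000
print(proteus_features)                       # 80000000
print(proteus_features // platinum_features)  # 40
```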
The technology we're using for manufacturing can be used to scale those features, leveraging all the great optics development that's happened over the past decade or so in next-generation DNA sequencing. There are just a lot of commercially available tools out there that can be tapped into to do that scaling. But fundamentally, we're able to do this off of a core tech that we have, that we invented, that we have patent protected, and that we're using every day today. Again, we're not just doing it in development; we're able to manufacture these unique components, deliver them in kits, and do that reliably. I think what you saw from Brian is that we believe this is the only tech that can really do top-down and bottom-up, and do it at the single-molecule level.
We think that could be a significant breakthrough as we look to the future in the field. Speed. Brian talked a bit about speed. I think we go to conferences, we talk about a 10-hour sequencing time. There's no customer who's saying to us today that that's a problem because of where the industry is today and how the technology gets used. This is us thinking ahead. This is us going back to that presentation where I said, here's how we think the research market's going to change over time. We're thinking about many, many, many more samples wanting to be processed. We're thinking about longitudinal studies. We're thinking about clinical applications. If we want to do those things, we're going to need to be ready when the market's ready.
This is a good example of us looking out and thinking about where is the market going to be and what do we think we need to enable when we get there. And then take all these things together. We shared a lot today. We thought this was about as much as any one person can sort of reasonably digest. We at times, we're in it every day, so it's a little bit easier to digest all of this. But I can tell you there are other technology development initiatives as well that we think combined with everything you saw today is why we're so confident we have that clear path to de novo, right? We shared the things we were ready to share, but we know what we're capable of. We know what we're working on. We know how these things would come together.
We feel very confident in that statement. How does that translate to the market? One thing we wanted to do is break down the market into various segments, starting from discovery: that's the plasma proteomics use case, where I want to look for hundreds to thousands of proteins in one sample. Research and translational is much more targeted: I want to look at this biomarker, I want to see its abundance, I want to look at a PTM. Production and QC is really those applications where our technology is used to quality-control components being created; these could be antibodies, these could be therapeutics. And then ultimately, obviously, clinical diagnostics.
With what we shared today, the kit releases that John talked about, the Platinum Pro machine coming out in the first half of 2025, we think we get deeper into some of those areas where we're already in today. We start to get access to new segments of the market like discovery. And then really with the Proteus platform, which we would expect to launch in the second half of 2026, we get into the rest of these markets. We really get fully penetrated into some of these segments, begin to gain access or more access into others. And then as we continue to stack up the innovation on that in the future, really ending up with that platform that has a broad market segment applicability. So again, if we just reflect back to where we started today in this meeting, right?
The proteomics lab today has to own all these different platforms, right? You have to have your mass spec or your affinity platform for that screening. You have your QSI to do that deep interrogation. This challenge today is absolutely throttling this industry. It's slowing it down. It makes it so that it's a highly centralized, highly consolidated market. The technical trade-offs, right? You can't have both high sensitivity and a lot of proteins and see them deeply. You have to make these decisions. What am I trying to accomplish in this study? Therefore, what technology am I going to pick? And on top of that, you're relating that to, and what platforms do I have in my lab? So sometimes what you want to do, you might not be even able to do because you don't have the platform or you have to send it out.
If you're lucky enough to work at one of the few places that has that sort of capital, you can have all these platforms. Many of these platforms individually cost $500,000 to $1 million, let alone what it takes to own several of them. This is what we see. We see this new technology architecture and the Proteus platform as the platform that can address the broadest range of applications. Again, next-generation protein sequencing is the area we're focused on the most, and really evolving that technology. We believe we're the only ones capable of doing that because it's novel. It's unique to us. We invented it. Then there are those other applications that Brian spoke about, right? The PTM discovery, the top-down work, ultrasensitive protein detection. Some of that work he showed was with commercially available antibodies.
There's no reason that we couldn't work with those types of companies who work in those spaces to bring those methods onto this platform. Because what we really want in the end is the customer not to have to own three, four, five platforms, but own one platform and then pick the kit they're running based on the type of application, right? And over time, then ultimately you're able to integrate more and more of these together and you're doing things like de novo sequencing, so maybe you need less total methods. But in the short term, you can be very thoughtful about the application you're picking, the type of approach you're going to take. But again, you're doing it on one platform.
Obviously, a platform like this won't be as inexpensive as our Platinum platform, but with the architecture we're building and some of the things we're doing, we still think we'll be able to make this a much more affordable platform than what others are offering in the space. And if you think about aggregating all of that equipment into one, there'll be a far more affordable path. Then finally, the automation. I think it can sometimes get lost when you're talking about product development and bringing a product to market. If you noticed anything in John's presentation, the thing I would have you take away is that we're moving the technology away from that innovator profile, the very manual, highly skilled, technically qualified user, and more and more toward something that anybody can run.
You see that in small changes we're making along the way, but I think you see that in a big way in Proteus with the automation of the workflow, the simple consumable designs, the sort of load-and-go method. That's going to be a continued focus for us. Make this feel more and more like a clinical device, and that really opens up the number of people who can go and do this every day. With that, we'll stop there. Thank you again for attending here or online, and we're glad to take questions. Okay, go ahead.
Thank you very much for this. If I got this right, with Platinum Pro that you'll be introducing next year, should we think of it more as an extension of what you have now, more for the quote-unquote academic lab sort of situation? But when you get to Proteus, because it's going to be flying at Concorde speed, would you be using that more for the industry set of things?
Yeah, I would think about Platinum Pro in two ways. One is that the only way for us to really enable either customers or partners to pursue some of these other applications with our tech was to open up a channel, right? We call that Pro Mode. So one of the things about Platinum Pro is that now that channel's open, along with the kits you need to be able to apply these applications to our technology. I see Platinum Pro as that extension of the Platinum line. I think it will be as impactful as Platinum is today in academic, but also pharma and biotech, because I think that has more to do with the application kits we're developing, like barcoding. I think as we bring out the low-input library prep kit in the back half of next year, as John talked about, that will open up more applications as well. But I see it serving those broader markets.
What I see when we get to Proteus is now we can start to go straight into the really big core labs and not be a complementary technology, but be the workhorse technology, where today we're largely complementary. I see Proteus as sort of opening up that opportunity to be the workhorse in that big core lab.
The reason I was trying to ask the question that way was concerned if people would just wait for Proteus rather than go for Platinum and Platinum Pro. If I'm new to the market, should I wait for Proteus or should I just start off now, get used to the technology, and when Proteus comes out, I'm ready for it? I'm just trying to figure out how the consumer adoption would go from here.
Yeah, I'm not overly worried about that concern, for a couple of reasons. One is, obviously, we're still in the very early days of adoption of Platinum and next-generation protein sequencing. If we were by ourselves, with just our existing direct team out trying to drive it, maybe we'd run into a little bit of that because our reach isn't as big. But part of why we entered into the agreement with Avantor was to open up the number of places we're going to be able to go talk to, so given the breadth of the access points we're going to have, I think we'll find more than enough users who are really interested in the technology. It's also a pretty traditional onboarding if you think about it. The earliest publications on our tech, the earliest presentations, are all coming from big mass spec core labs, and that's because we have some unique capabilities and we're very complementary.
I think that is sometimes a good way to introduce a market to a technology, rather than coming in head-on where it's you pick them or you pick us. That can be a very challenging way to bring a new tech to market. I see this as: keep looking for those people who see the value in what it does, be okay with being complementary, build up that reputation and database of evidence, and then show up with the new platform and look to start to take more of the work in that lab. I think there'll be other labs, too, that we just won't get on board today, maybe because of workflow, but that we'll be able to get on board with Proteus. So I see it as a pretty natural progression of the business, especially given the early stage we're at and the amount of upside that's here in terms of opportunity.
One last question from me before. I'm not trying to hog the time here. On Avantor relationship, a couple of questions there. One, what are you trying to do with Avantor? Of course, we all know Avantor is across the industry.
Sure.
And the reason the way I asked that is, do you have a minimum that they need to do in terms of sales? Or what's the economics that you're looking for with Avantor?
Sure. So the goal with Avantor was really that there are two ways to get to scale, right? At the end of the day, for all of us newer or more startup proteomic companies, one of the things you always struggle with in the early days is hiring great salespeople, getting them trained, getting in front of the right technical buyers, the right economic buyers, the purchasing and procurement people, contracting. These things take time and they can drive up OpEx in a very fast way. We had some great conversations with the team over there. They have this dedicated group of folks that they call life science specialists, who are selling technologies in genomics and proteomics today. These folks are in role, in territory, with relationships across both technical and economic buyers.
Really, what we can do now is come in and train them and get some of that access, that lift and acceleration, that could take us a very long time if we tried to hire to build up to the size of team they have. It's a relationship where obviously we're sharing in the economics with them, but we think it makes a ton of sense given what each company brings to the table and the potential to get that reach much faster than we could on our own if we tried to build out the level of infrastructure they already have in the U.S. and Canadian markets.
Kyle Mikson, Canaccord Genuity. Thanks for doing this; the data will be helpful. I have a lot of questions about the architecture change to optical for the Proteus, so we'll start with the wells and all that. I guess right now it's 2 million wells, but after Poisson loading, you've kind of cut that down in terms of where a protein molecule actually gets inserted and immobilized. So maybe now with Proteus, you would have 16 million or so wells with a protein in there. Is that the right number? How do you think about all that?
I could answer it, but why not have Todd take a crack at it. Todd, what you could do is talk a little bit about, remind us again on, the features per spot versus per consumable, and then talk a little bit about both current loading and your thoughts on super-Poisson and other things over time. Maybe just give us a little bit of that flavor.
Sure. Yeah. So you have the numbers roughly right. The Platinum product today is 2 million wells. Of course, we don't claim you get 2 million reads from it. If you rely on Poisson loading, you can get about a third of that loaded with single molecules, so you can get about 600,000 independent single-molecule reads in the wells.
And then some of the wells will have doubles and some will be empty when you do that. The same would be true with Proteus platform at launch. But we do have other technology development that we're working on. We're not ready to present all of it today. But something that we definitely can do in the future is develop methods to get Super Poisson loading where we can deliver single peptides to the majority of the wells. It's not something we're ready to announce today, but something we are working on.
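For the arithmetic behind that "about a third" figure: random loading of single molecules into wells follows Poisson statistics, and the fraction of wells with exactly one molecule peaks at 1/e, roughly 37%, when the average is one molecule per well. Here is a minimal sketch; the mean occupancy λ = 1.0 is illustrative, since the actual loading target isn't stated in the talk:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """Probability that a well receives exactly k molecules at mean occupancy lam."""
    return lam**k * exp(-lam) / factorial(k)

wells = 2_000_000   # wells on the current Platinum consumable, per the talk
lam = 1.0           # illustrative mean molecules per well (not a disclosed figure)

singles = poisson_pmf(1, lam)       # wells with exactly one molecule
empties = poisson_pmf(0, lam)       # wells left empty
multiples = 1 - singles - empties   # wells with two or more molecules

print(f"single-occupancy: {singles:.1%}  (~{int(wells * singles):,} usable reads)")
print(f"empty:            {empties:.1%}")
print(f"multiply loaded:  {multiples:.1%}")
```

At λ = 1 the single-occupancy fraction is about 36.8%, the theoretical ceiling for purely random loading, which lines up with the "about a third" quoted above; the super-Poisson loading methods mentioned here aim to beat that ceiling by delivering single peptides to the majority of wells deterministically.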
Okay. And then IP comes up a bunch in this industry, this sector. I mean, now you're moving to optical, which is closer and closer to, maybe, PacBio's sequencing for DNA. Are there any potential IP issues to talk about there?
Yeah. So first, on PacBio, right, they're on a semiconductor chip today; that's probably more similar to our current chip than to what we'll be on with Proteus. Again, though, they're doing fluorescence intensity; we're doing lifetime in the Platinum and Platinum Pro architecture. We're obviously aware of the IP in the space. A lot of optical technology is just in the public domain: high-NA (numerical aperture), high-resolution, broad-field-of-view optics. These things exist in our industry and are widely available, so that's not really unique. And in terms of the consumable we're making and how we're making it, we're really leveraging a lot of the materials and fabrication methods in our existing technology, where there's protection.
So we obviously care a lot about IP. We patent things. People know we have a large estate and we take it seriously. We also are very diligent to take other people's IP seriously. Right now, we don't have any reason to believe that what we're working on is going to be a problem with anybody else, but we would, of course, continue to monitor that and make sure we're doing everything we should to bring that product to market the right way.
And then in terms of, I think this was kind of touched on earlier, but the multi-product portfolio potential, I guess. So you would basically offer Platinum Pro and then Proteus 1 and 2 altogether, or is Proteus just the end game kind of for now?
No, I could see us offering multiple platforms, whether that be Platinum Pro and Proteus, or a future Proteus platform. There's no reason to believe we couldn't take a lot of the Proteus architecture and apply it to a desktop machine if we wanted to. So I think there are a lot of ways we could go in the future with the architecture. We don't have a burning desire to get to just one instrument if the market really needs to see multiple. I think the question longer term will be, if that desktop market is really an attractive one, do we stay with the Platinum Pro architecture another five years from now, or do we think about building a desktop version that leverages a bunch of the capabilities and advantages of the Proteus architecture? That's a decision for many years in the future, and it really just depends on how the market unfolds.
But we don't see that today. We think these instruments are addressing different segments of the market. Okay. Makes sense. So no cannibalization, no real overlap, right? Yeah, we don't see it right now. I think we'll monitor that and see how it unfolds over time. But today, I think the first person to want to buy a Proteus is probably thinking about more of that workhorse than about those dedicated applications that are complementary to their workhorse tool today. So I think we're probably talking about different people. All right. And just one technical question, too. On the anti-PTM antibodies and affinity reagents that were touched on in one of the slides: I think Platinum was supposed to sort of eliminate the need for affinity reagents and those approaches.
Maybe just talk about why you would need that, I guess, because this was thought of as an unbiased approach at one point. So can you still do de novo sequencing one day? What's that like?
Yeah. So I'll maybe start and then have Brian come in. I think there are two reasons we showed that data today. One is that when you get to de novo sequencing, you're getting every amino acid from every peptide, right? Today, as Brian showed, we don't necessarily do that: on a very long peptide, let's say a peptide 50 amino acids long, we might not get all the way to the very end. So the concept of marrying sequencing with an affinity reagent like a PTM recognizer is that someone can go in and see that PTM no matter where it is in the peptide.
So you could have instances where the affinity reagent saw it and we sequence it and see it, but in other instances where we're able to recognize a PTM that's, say, very deep in a peptide that we can't get to today. Yes, over time, as you get closer and closer to de novo, then you wouldn't necessarily need that type of combined capability. The other reason to show it was also to help people understand, given the focus in certain areas of our industry on ultra-sensitive protein detection, we want people to know that there are other technologies that can be applied to what we have to address those sort of markets. If you want single molecule and you want high ultra-sensitive, we're showing you examples of pulling antibodies off the shelf and applying it and doing that type of work.
That was really just to help people understand the art of the possible with our tech, and to stimulate interest, whether it's customers wanting us to help them develop that, or a partner who calls up and says, "Hey, we'd love to enable ultra-sensitive protein detection on a commercially available platform." We wanted to get it into the public domain, see how it unfolds, and take advantage of the fact that we can do it: this platform is in the market, we can manufacture it, it can be delivered. That could give our customers or a partner very quick access to a rapidly growing segment of the market. I don't know, Brian. Would you?
He agrees.
Write that down.
Yeah, that was great. Yeah, last maybe final one. Publications are important to drive adoption. There's been a few with Platinum. There are even new boxes being discussed today and everything. What's the roadmap in terms of getting evidence out there to show people, show everyone that's going to buy these things, what this can do and just drive confidence, really?
Yeah, great question. As I mentioned in my opening, the University of Virginia just submitted their paper for publication; it's available now as a preprint. We're expecting some other customers to follow a similar path over the next two or three months. And a little earlier this year, around the middle of the year, we increased our investment on the scientific affairs side to put together more structured collaborations, to make sure that customers don't just do their experiment and move on, but actually publish and put that data out.
So we have a much bigger emphasis on that here in the last half of the year. I think you're seeing some of that with this first paper, but I think you'll see more of that as we move into 2025 and throughout. And we'll continue to bring data to conferences via posters and customers presenting. So we agree with you. It's a key part of the growing awareness, and it's something we'll continue to invest in.
Thanks very much. This is Doug McPherson from H.C. Wainwright. A couple of things I wanted to confirm. So the move to color detection rather than lifetime detection, that won't occur with the Platinum Pro, right? That's not until Proteus?
That's correct.
Cool. Then for the workflow for the color sequencing, do you pre-label every amino acid prior to sequencing, or is it still the dynamic binding where one at a time on the N-terminal and then gets cleaved away? Or is it like, "Hey, they're all labeled. This one's green on the N-terminal, it gets cut. Then red on the N-terminal, it gets cut."
Yeah, I understand the question. No, we're not changing how the assay works fundamentally, right? So the assay's function is you put an unlabeled peptide onto the chip, and we observe the rapid kinetic binding and unbinding of our recognizer with the peptide. So the only thing that's changing here is that those recognizers are labeled with a dye so that we can detect them. In the Platinum instrument, we used fluorescence lifetime to be able to discriminate those dyes from each other.
So we have six binders in the kit today. We have six different dye labels, and we can see them independently in that lifetime-intensity space on the Platinum system. But we've probably built the best lifetime imager in the world into Platinum, and it gets thrown away with every experiment. So building a larger-scale version of that would be a significant R&D investment. In the Proteus program, we're moving away from lifetime and moving to color. All that means is we're replacing the label we have on the recognizer with a different dye label, and we discriminate the six from each other in color instead of lifetime. Does that make sense?
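To make the lifetime-versus-color distinction concrete: in lifetime discrimination, each dye is identified by how quickly its fluorescence decays after a laser pulse, and since that decay is exponential, the mean photon arrival delay is the maximum-likelihood estimate of the lifetime. A toy sketch, with six made-up lifetimes (the actual dyes and their lifetimes aren't disclosed here):

```python
import random

# Six hypothetical dye lifetimes in nanoseconds. These values are invented for
# illustration; the talk does not disclose the actual dyes or their lifetimes.
DYE_LIFETIMES_NS = [0.5, 1.0, 1.8, 2.9, 4.2, 6.0]

def estimate_lifetime(true_tau_ns, n_photons=2000, seed=0):
    """Fluorescence decay after a laser pulse is exponential, so the
    maximum-likelihood lifetime estimate is the mean photon arrival delay."""
    rng = random.Random(seed)
    delays = [rng.expovariate(1 / true_tau_ns) for _ in range(n_photons)]
    return sum(delays) / len(delays)

def classify_by_lifetime(measured_tau_ns):
    """Assign the dye whose reference lifetime is nearest the estimate."""
    return min(range(len(DYE_LIFETIMES_NS)),
               key=lambda i: abs(DYE_LIFETIMES_NS[i] - measured_tau_ns))

for dye, tau in enumerate(DYE_LIFETIMES_NS):
    est = estimate_lifetime(tau, seed=dye)
    print(f"dye {dye}: true {tau} ns, estimated {est:.2f} ns, "
          f"classified as dye {classify_by_lifetime(est)}")
```

Color discrimination replaces this timing measurement with a comparison of intensity across emission channels; either way, the kinetic signature of binding described below is unchanged.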
Yes, it does. So in that case, two parts to this now. Do the recognizers fluoresce upon binding, so that's when you have the signal? Or is the light-collecting aperture so perfectly positioned on the N-terminus that it just sees what color is there?
No, and this is, again, the same between the Platinum instrument and the Proteus instrument. When the recognizer moves and binds to the N-terminus of the peptide, it moves into a region where the laser is exciting the well. So it receives that fluorescence excitation, and it emits photons that we can detect.
Great. And does there still need to be an element of lifetime assessment? So for example, now, if there's one recognizer and it recognizes the three aromatic amino acids, you can still distinguish between those three based on fluorescence lifetime. But now, if it's based on color, do you still need some lifetime input, or are you just going to develop a new recognizer for every amino acid with its own distinct color?
No, it is a little confusing. We have two different timings in the system. One is fluorescence lifetime, which is at the nanosecond scale. So we have this pulse laser that's running at megahertz, and we're measuring the fluorescence decay of every molecule after every laser pulse. That's something that's happening very, very quickly. That's what the Platinum instrument does, and it uses that to measure the lifetime of the dye molecule, and that's how we discriminate the dyes from each other. At the same time, in the system, we're measuring signal in all of the wells in parallel, and we're seeing those recognizers diffuse in and bind to the peptide and then release and diffuse away.
And when that happens, you see the signal come and go. That happens on a seconds timescale, like half a second or a second. So it's a completely different timescale. That timing is just measured by the rate at which we take images from the chip. In the case of Platinum, we have an image sensor built in, and we're running it at around 15 frames per second, taking images at that speed so we can see those binding and unbinding events as they occur in real time. We'll do the same thing in the Proteus instrument, so none of that changes. In the Proteus instrument, that camera is going to be running, collecting video in real time, and we'll be processing it and extracting that kinetic information across all the wells.
Yeah, Doug, if I could add to that. So the concept really to focus in on is what we call a kinetic signature, right? The core part of that is that on-off binding rate that, I mean, Todd initially explained, and I think Brian went into it in a little more detail. We'll still have that attribute of the data. We're just now attaching a dye, we're going to discriminate via color versus discriminate via its lifetime. So the kinetic signature will still exist. We'll just have a different type of a label on the actual recognizer itself.
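The seconds-scale piece of that kinetic signature, binding events resolved at the camera frame rate, can be sketched in a few lines. Dwell and gap durations below are drawn from exponential distributions, the standard model for single-step kinetics; the 15 fps frame rate and roughly half-second dwells follow the talk, and everything else is illustrative:

```python
import random

FPS = 15  # frame rate cited for Platinum's built-in image sensor

def simulate_trace(mean_dwell_s, mean_gap_s, n_events, seed=0):
    """Simulate a binary bound/unbound trace, sampled at the camera frame rate."""
    rng = random.Random(seed)
    frames = []
    for _ in range(n_events):
        gap = rng.expovariate(1 / mean_gap_s)      # time with no recognizer bound
        dwell = rng.expovariate(1 / mean_dwell_s)  # time a recognizer stays bound
        frames += [0] * round(gap * FPS) + [1] * round(dwell * FPS)
    return frames

def mean_dwell(frames):
    """Recover the mean bound-state dwell time, in seconds, from the trace."""
    dwells, run = [], 0
    for f in frames:
        if f:
            run += 1
        elif run:
            dwells.append(run / FPS)
            run = 0
    if run:
        dwells.append(run / FPS)
    return sum(dwells) / len(dwells)

trace = simulate_trace(mean_dwell_s=0.5, mean_gap_s=1.0, n_events=2000)
print(f"estimated mean dwell: {mean_dwell(trace):.2f} s")  # close to the 0.5 s input
```

The point of the sketch is that this dwell-time statistic depends only on the frame-sampled on/off trace, not on whether the bound recognizer is identified by its dye's lifetime or its color.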
Cool. Thank you very much. All right, those are my technicals. Really appreciate it.
Thanks.
Thank you. Scott Henry with AGP. One of the things that jumped out at me today was the barcoding of proteins. As investors, we're always trying to look for things that are perhaps bigger levers out there. Can you talk about that, and could that be one of the bigger levers with regards to volume?
Yeah, it's a good question. Everyone's always looking for, as we like to say in the industry, the killer app; that's the phrase a lot of folks like to use with us. I think barcoding is probably a good example of one of the first ones we think we've uncovered. John talked about some of the different ways customers are using it. Obviously it has some research tilt to it, but a very strong pharma, biotech, and drug development feel to it, and that's certainly where we're seeing a lot of the pull. Even the idea to develop a dedicated barcoding kit really came from some of the technical interactions John's team had with some of those pharma and biotech customers.
So I think the good news is, if it performs the way we want and they continue to be happy, it could be a very attractive sort of revenue opportunity in terms of the amount of consumables they might use. The part that you have to wait a little bit for is those big customers don't move quite as fast as a KOL in academia. So you get onboarded, you get in there, you start to work with them, you scale up their work. So it's a bit slower sales cycle, but it is the draw for many pharma and biotech to evaluate this technology upfront. Multiplexing, especially these in vivo studies when you're in animal models, is a potentially huge time and cost savings for them. So we think it's a great fit.
And now we just have to really do that work and onboard them and get them working at volume. But if you're looking for the killer app, it's sort of our first thing that has that sort of draw and continue to look for other applications like that for the tech as we go forward.
Okay, great. Thank you for that color. As well, I hate to ask the question because it's almost like a dot-com question, which is dating myself. But you did put NVIDIA up there on the slide. Can you talk about your relationship with NVIDIA and whether it can be a competitive advantage? I know those chips are not easy to acquire. Any thoughts on that?
Yeah, maybe, John, you want to come up and speak a little to the interactions we're having. We definitely see that they're very active in the genomics space; they've worked with some of the sequencing companies, both on processing and algorithms and the speed of those things, so they certainly have some expertise in our space. Maybe I'll let John talk a little bit about how we see applying that to protein sequencing.
Yeah, I mean, excuse me. We've obviously been using NVIDIA GPUs in development, using them for the protein design aspects. What I've seen in our discussions with them and collaborations, I mean, they're very engaged from a software development perspective as well, so the model is we've presented the problem to them. They have expertise coming from genomics and very similar domains. They've been very active in collaborating on software development, partnering with us, delivering that back to us, helping us to do GPU selection.
So they've already been very, very highly engaged and already helped to advance our software development activities related to Proteus. So moving into the signal and image processing domain with them, putting GPUs onboard the device, that's the direction we're going. And they've been very motivated and been a great collaborator so far.
No second questions, okay? Sorry.
This is certainly the last. No, it's okay. So when you're talking about identification of proteoforms, does the technology allow for just trying to identify the position of the proteoform? And also, can you see all proteoforms out there or just the most common ones, just like the phosphorylation that you're talking about? What's the breadth of that process?
I'm going to phone a friend and let him help.
So for proteoform detection, it's based on affinity reagents. It comes down to what you have affinity reagents for. If you have one for phosphorylation, then you can look at phosphorylated proteoforms. If you have it for other PTMs, you can do those. You might also have affinity reagents that recognize a particular isoform, like a splice variant of a protein that recognizes the different sort of structure of those different proteoforms. It just comes down to your availability of affinity reagents.