Hello, my name is Yuko Oku, and I'm on the Life Science Tools and Diagnostics team here at Morgan Stanley. Before we begin, for important disclosures, please see the Morgan Stanley Research Disclosures website at www.morganstanley.com/researchdisclosures. If you have any questions, please reach out to your Morgan Stanley sales rep. It's my pleasure to host Seer here today, and speaking on behalf of the company, we have CEO Omid Farokhzad and CFO David Horn. Thanks for joining us today.
Thanks, Yuko.
Thank you, Yuko.
Maybe to start, could you provide an overview of the Proteograph and the benefits versus the existing proteomics platforms today?
Of course. So the Proteograph product suite is an enrichment platform that sits upstream of a detector, and the detector of choice for us is a mass spec. What the Proteograph does is allow you to start with a biological sample that is very complex, not just in terms of protein structure, but also protein quantity. By compressing that dynamic range, from the most abundant proteins to the least abundant, it puts the total package of proteins into a bite-size unit that the detector, i.e., the mass spec, can then digest.
And by doing that, it lets you rapidly get quantitative depth in a robust way, and for the first time, you can do proteomic studies in an untargeted, unbiased way using a mass spec at a scale and speed that was previously just not possible. How it's differentiated from targeted approaches is that targeted approaches inherently work with an analyte-specific reagent, a ligand. Platforms can use antibodies, whether monoclonal or polyclonal, or aptamers, i.e., nucleic acid ligands. Those approaches scale very easily to thousands or even tens of thousands of subjects, but they have one shortcoming: the human protein, on average, is about four hundred and seventy-two amino acids long, whereas the binding epitope of an analyte-specific reagent is typically five to eight amino acids long.
And the consequence is that if you have a variant of a protein, and genetically, we're all very different from each other, or if a protein exists as different isoforms or carries a different post-translational modification, those changes at the amino acid or post-translational level can change the protein's three-dimensional conformation. Once that happens, the binding epitope may change, and you lose the ability to detect protein variants because of the epitope effect. So untargeted proteomics is really desirable, and mass spec is the gold standard for it, but it could never scale.
Seer solved the problem the mass spec had, which was the lack of scalability, by allowing you to do unbiased proteomics at the same speed and scale as targeted proteomics, and so that's the difference, Yuko.
Great. Thank you for that overview. Not unlike a lot of the other companies we're hearing from today, you're also seeing cautious spending from customers, especially for new technologies. To help address that, you've introduced SIP so that customers who don't necessarily have the upfront CapEx can still have access to the Proteograph suite. How successful has the program been in converting to permanent instrument placements over time, and how has the mix between SIP placements and straight-up purchases evolved through the year?
David?
Sure. So SIP is our Strategic Instrument Placement program, and the concept behind it is really to lower barriers to adoption and get the equipment in without a CapEx purchase. Essentially, it's a loaner with an upfront consumables purchase: there's a minimum number of consumable kits that customers need to buy, and then we place the instrument as a loaner with the option to purchase. Generally, the timeframe is anywhere from nine months to a year. And it's been a great program for folks who do have the intention to buy, so it's not just anyone who wants an instrument, but people who've expressed an intent to bring the technology in-house.
And so of the placements we've done and anticipate doing this year, and over the last twelve months, I'd say just under 50% have been SIP-related, and we have had some good conversions of those. Generally, to be fair, customers like to use up the loaner period, because why wouldn't you? But at the end of that period, we've had good conversion as well. Obviously, some have just been placed recently, so we're still in that phase. But it's been a great way, on the commercial side, for customers to get the budget for it and do that.
And on the academic side, it allows them time to write it into a grant or the like. So it's been a good way to get people up and running, because it helps us, especially with the academics, to generate data and publish papers a lot faster than they would if they had to come out of pocket initially.
Got it. And then following up on that, are you seeing the macro also contributing to soft consumable spending, or is it largely restricted to instruments at this time?
I'd say it's definitely impacted instruments significantly. We certainly saw that through the first two quarters; capital budgets were essentially almost non-existent. It's impacting consumables, but less so. I think people are just maybe not doing as big projects as they would have done otherwise. But if we look at pull-through, it's been pretty consistent the last few quarters. Our pull-through is not necessarily increasing over the last couple of quarters, but it's holding pretty steady, and I think that speaks to it being a relatively soft environment for consumables as well.
And let me add that I see two vectors. One is the headwind, which is the macro picture, and that's been, frankly, the longest, most frigid winter, a lot longer than I had expected it to be. But there is also another vector, which is the tailwind, and the tailwind for us has been the increased validation of what the platform does in the hands of customers, and really an increased velocity of customer publications. Progressively, the tailwind is becoming a stronger vector for us than the headwind of the macro picture. And so the conversations are getting a lot easier for us, despite the macro picture.
And as David said, once a customer gets used to what the Proteograph does, the pull-throughs have actually been quite consistent for us despite the macro picture.
Okay. Before we get into some of those tailwinds you're talking about: in terms of who opted for SIP versus outright purchase, is it skewed toward academic or biopharma? Are you seeing softer spending in one segment versus another?
I would say that if I look at the pipeline and what the conversion has been, we've had stronger adoption from commercial entities versus academic. I think it's a reflection of the fact that academic sites often need grant funding, and the grant cycle takes time, from submitting a proposal onward. So there's been maybe a 60-40 split in terms of customer adoption being tilted toward commercial. If I look within that 60-40 at where the SIP has played, it's more 50-50, because the academics like the SIP for the reason that they don't have grant funding yet, and it's easy for them to pick it up, and the data can then support them in getting grant funding.
But Yuko, it's not a big difference: it's a 60-40 customer base and a 50-50 SIP uptake.
Okay. Then, you also implemented STAC to provide access to mass spec, thus lowering the barrier to access the Proteograph. Obviously, you've seen a lot of interest from current mass spec users, but since you've established STAC, are you also seeing greater interest in the Proteograph from non-mass spec users, such as those more involved in cell biology or genetics?
I actually think that as the proteomics market matures, there's gonna be a gravitational force pulling in the "non-mass spec" users to become consumers of the kind of content that comes from the Proteograph and the mass spec. And I think that's actually quite a strong force. We recently had a prototypical example of that: Karsten Suhre's lab, which is really a genomics lab, published an important paper looking at proteogenomics and pQTLs. That is not a mass spec user profile at all.
But what is happening with the kind of data that Karsten generates is that the likes of Karsten are now wondering: "Well, if we want to access this, how do we do it?" And so we try to lower the barrier for those customers by saying: "Look, number one, you can work with our STAC; or two, you can have a Proteograph in-house, run it yourself, send us your peptides, and we'll run the mass spec for you; and ultimately, we'll collaborate with your own core facility to really help you do it." I think this isn't gonna happen in the next six months or one or two years, but I do think that the consequence of these genomic customers seeing the value of the proteomic content is also gonna drive mass spec demand.
So there is going to be a significant adoption among these customers, and we're seeing that already.
Okay. So moving to publications: another area you've focused on growing is the number of scientific publications that demonstrate the utility of the platform. You've made tremendous progress on this front, with eleven peer-reviewed publications and ten in the preprint stage, and that's as of 2Q, so feel free to correct me if it's changed. Can you share the feedback you've heard from customers on the papers or presentations that impressed them the most? What features of the platform did those specific studies highlight?
By the way, the velocity of publications has really picked up. So let me update the numbers you just quoted: as of now, there are a total of 18 publications and 9 on preprint, and the lion's share of that is just from the last 8 months of 2024, which is wonderful to see. If I look at what the customers are saying about what the platform does for them: it's an easy, automated workflow on a fluid handler that a customer is very, very comfortable using. 30 minutes of hands-on time, seven and a half hours of an automated workflow, and what comes out are peptides that go into a mass spec.
It's robust, meaning if you run a Proteograph in New York, and somebody else runs the same samples in Copenhagen or China or San Francisco, and you put it into the same detector, you'll get very, very similar results. There is very little batch-to-batch variability in the consumables we produce, so if you run a longitudinal study, running samples today while your study continues over months, and you then buy another lot of product from me, the R-squared between those batches is very, very high. So you can run large-scale studies without batch-to-batch variability; it's very robust.
It's quantitative, it's reproducible, it's easy to use, but more importantly, and probably most importantly, the type of biological content that comes from it is very differentiated compared to anything else you can get that scales, which is a targeted approach. And if you then ask, "Well, what could I do with an untargeted approach other than the Seer platform?", the answer is: that just doesn't scale, so you can't run studies that are powered enough to do biomarker discovery or really see biological changes between health and disease. So the customer feedback, Yuko, has been very positive.
When customers are using it, they become adopters of the technology, but scientists are inherently skeptical, and so the publications are helping us tremendously.
Okay. Speaking of large-scale studies, PrognomIQ just completed a large 2,840-subject plasma biomarker study looking for novel markers of cancer, particularly for early detection of lung cancer. Using this study as an example, can you elaborate on why an unbiased approach, rather than an affinity-based method like Olink or SomaLogic, is better suited for this type of application?
Yeah. In fact, I would actually use two examples that are similar in their scale. One is the one you just highlighted, Yuko, the early detection of lung cancer. The other is Alzheimer's: looking at biomarkers in a longitudinal study of subjects followed over ten years to see whether there are markers of cognitive decline. So let's look at both of those studies. Consider what drives the classification of early-stage lung cancer versus healthy or comorbid, non-cancer controls. And remember, the PrognomIQ study was a multi-omics study, not just a proteomic study. If you look at the features that go into the classifier, there are RNA features as well as proteomic features.
If you look at the proteomic features in the top classifier, they span the entire dynamic range of concentration, but many of them you would never have known to go looking for in that application. The only way you find them is by looking in a hypothesis-free way, at scale and depth, and the only way such a study is possible is using the Seer Proteograph. Now, let's look at the Alzheimer's study. There, they looked at 1,800 samples over a ten-year span to track cognitive decline.
Out of the hundred and forty or so proteins that were modulated, either up- or down-regulated, with cognitive decline, about two-thirds are not on the Olink panel. So you could never have identified those using such a panel, because those proteins are simply not on it. And if you then look at the proteins that were identified, how many were known AD proteins? About a third were known to be associated with AD; the rest were novel targets that are putative biomarkers of AD. So again, the discovery power of looking at content in an untargeted, unbiased way, at the scale and depth discovery requires, is quite strong, and the Proteograph really enables that.
Great. You also highlighted a recent manuscript submission to bioRxiv. The study found that epitope effects may have compromised pQTLs identified in affinity-based GWAS studies, and it used the Proteograph coupled with mass spec to confirm previously identified pQTLs and to flag other previous pQTLs as potential artifacts induced by epitope effects. Maybe you can start by digesting that for us and telling us what it means, and we'll go from there.
First of all, for me, it's probably one of the most powerful studies that a customer has published, and it has enormous scientific implications. An average human protein, if my fist were a protein, is about four hundred and seventy amino acids long, four hundred and seventy-two to be exact. A typical ligand binds to an epitope that is five to eight amino acids long, meaning changes anywhere else in the protein would not be seen by that ligand. Similarly, if a change happens in the protein, like a post-translational modification, or a SNP at the genomic level that changes an amino acid and alters the conformation of the protein such that the ligand can no longer bind, that can lead you to false conclusions in terms of pQTLs.
That's the phenomenon the genomics folks call the epitope effect. What the study did was look at the previously published work, the UKBB study with Olink and the deCODE study with SomaScan, take the high-conviction pQTLs, meaning those with highly significant P values, and then ask: of those, which pQTLs does Seer see, and are they correct or incorrect? The observation was actually very consistent for both Olink and Soma: about a third of the time, their pQTL is exactly correct.
About a third of the time, the pQTL is equivocal, meaning we need a larger study to conclude whether it is correct or incorrect, and about a third of the time, those pQTLs are actually false, meaning a wrong conclusion was reached in calling them. Now, it's a perfectly reasonable point of view to say, "Wait a minute, maybe the mass spec is wrong. Why do we think Olink or Soma are wrong?" Well, if you take the pQTLs that the two platforms identified and ask which pQTLs they share, meaning the ones they both agree on, in those cases the mass spec is in 100% agreement: if both platforms agree on a pQTL, so does the mass spec.
If they don't agree, about a third of the time either of them is wrong, and one is no better than the other. So the implication is that we need to begin doing truly population-scale pQTL studies using an untargeted approach, something that just two years ago would have sounded insane to even suggest, but with the Proteograph, it is absolutely possible.
So maybe my follow-up question is: How widespread are these pQTL studies using affinity-based methods? And following this manuscript submission, are you seeing increased traction from people who basically want to confirm their previous results?
So, Yuko, we saw incoming emails from folks who were involved in the UKBB study saying, "Boy, guys, congratulations. This is a really meaningful conclusion with a lot of implications, and we need to do these in much, much larger studies." And then two additional investigators reached out, and one pharma said, "We wanna do large-scale studies now." So it actually generated meaningful incoming interest and facilitated dialogues that had been going on for months, because suddenly the evidence really supports it. Again, if you think of where Seer is in its lifespan, we're really in our infancy as a company. We've been around seven years. Commercially, the Proteograph has been around since the beginning of twenty twenty-one. Just a year ago, customer publications were almost non-existent.
Most of them happened this year. The evidence is coming, so I'm hoping, Yuko, that the tailwind I alluded to earlier becomes stronger and stronger, and as these scaled studies get done and published, they'll support customer adoption, not just among the pQTL crowd, but among those broadly interested in the biological insight, because these lighthouse accounts have a lot of gravitational force in driving adoption. The number of folks who publish pQTL studies is not very large, so if that were your only market segment, you wouldn't make a lot of money, but they are a very powerful group that a lot of other folks follow.
The UKBB study, that Nature paper, was an important paper in catalyzing adoption for Olink, as was the deCODE paper for Soma. So the implication of what comes with these studies is actually quite large for the companies developing these products.
Got it. And obviously, mass spec is gonna be an important component of this, but why is the Proteograph specifically well suited to address these questions?
Because if I look at the last approximately four years, when the Proteograph has been in the hands of customers (we shipped the first one at the end of twenty twenty), the mass specs have gotten better and better. I mean, the Orbitrap Astral is an incredible instrument. Four years ago, the best mass spec would have done half as much as what an Orbitrap Astral does, and when you put a Proteograph upstream of a mass spec back then, you would get to a depth of coverage of maybe twenty-five hundred or three thousand proteins, let's say, in plasma, and that seemed like a huge number. That's because the mass spec alone in plasma would have detected maybe three hundred or three hundred and fifty proteins, and the Proteograph would bring that to twenty-five hundred or three thousand.
Today, the Orbitrap Astral, on its own, does about 750 proteins in plasma. But put a Proteograph upstream of it and you get to about 8,000 proteins in plasma. So the reason the Proteograph matters is that the problem the Proteograph solves is different from the problem the mass spec solves: the Proteograph compresses the dynamic range. As the sensitivity of the mass spec gets better and better, the Proteograph drives much, much higher protein coverage. So our value add to any mass spec has remained fairly constant over the last four years, despite the improvements in the mass specs.
Okay, great. You also highlighted the oncology, cardiometabolic, and neurology fields as key areas of growth for the Proteograph. Please elaborate on why you highlighted these specific areas, where the Proteograph can offer a differentiated value proposition over-
David, you want to take that?
Sure, sure. I think where we've gotten a lot of traction is in the areas you mentioned, and specifically in the neurodegenerative area. It's really areas where there hasn't been a quote "easy genomic answer," right? People have looked, used the genomic tools available, and realized they just aren't able to get as much insight. But when you take the genomic information and pair it with the proteomic information, people have gotten some phenomenal results and some really interesting insights. As Omid mentioned, in the Alzheimer's study that was done with Mass General, which I think is the largest Alzheimer's cohort done in a deep proteomic study, they were able to see that classifier, to see that separation.
They're able to discover, quite frankly, new markers that are the basis for the disease. So in the more complex disease states we mentioned, neurodegenerative, oncology, and cardiometabolic, people are just looking for tools and technologies that are gonna allow them to understand the biology better, and proteomics is a key piece of that puzzle. So we've seen a lot of interest and uptake in those areas.
Okay. Could you also share the timing of key publications over the next twelve months? What are some of the things we should be watching for?
Look, I think the Karsten Suhre bioRxiv paper that got a lot of attention as a preprint should be published, hopefully sometime during the balance of twenty twenty-four. Keep an eye out for other customer publications coming in various different areas. The PrognomIQ study may actually be published over the course of the next few months; whether it's in the balance of twenty twenty-four or into twenty twenty-five, I can't predict. But what's great right now, and it's what I would have hoped for, is this: we used to know when customer publications were coming, because there were so few in the pipeline that our technical team, our FAS team, and our salespeople were interacting with the customers closely enough that we knew what was coming.
Over the course of the past few months, there have been at least two or three times where a customer publication came out and we didn't even know about it until we saw it published. That's really great, and frankly, that's the way it should be. So my expectation is that the velocity of customer papers will continue to build. Keep in mind that a customer needs to have a Proteograph in their hands for a couple of years in order to do a study of scale and go through a review process before that paper gets published. But that gap is now behind us, and we're building on a foundation that I suspect is gonna get stronger and stronger going forward.
And then thinking about your pipeline: with XT released more than a year ago now, you've significantly increased the throughput of the platform. How are you thinking about the cadence of product releases and other performance improvements on the Proteograph? What is the next area of focus for you?
David?
Yeah. The customer feedback to us has remained pretty constant, both with the launch of XT and now as we look at our current product pipeline, really along several vectors. One, they always want to go deeper into the proteome. Two, they want higher throughput: now that they can do these big studies, they wanna do even bigger studies. The third is lower sample volume; samples are precious, and they wanna use less of them. And the fourth is really around our Proteograph Analysis Suite, our software suite: especially with the geneticists and the biologists, it's "just give me the answer," right?
Right now, it's a tool you can use to go really deep, and a lot of people just want you to show them the insight, which you can do, but the easier, the better for some of these biologists. So if you think about those four vectors, those are the areas where we are focused, continuing to make step-function changes as we look at our product pipeline. And the beauty of our software is that it's cloud-based. We did a new update in June with a new user interface, much cleaner and easier, and we're improving functionality all the time.
And then you can expect that we're working in the pipeline on the other three vectors we talked about, to deliver a step-function change above where we are now in our next product. So you can expect continued efforts along those vectors.
At ASMS, Seer, PrognomIQ, and PreOmics introduced P2 plasma enrichment and Enrich Plus kits, which enrich proteins in a sample via nanoparticles to increase protein identifications via mass spec. Do you know how Seer's offering compares to these products?
Yeah. Again, as an example of a study we did not know was coming, a customer in Colombia published a paper actually comparing Seer to one of these offerings. In terms of depth of coverage, Seer was substantially higher; in terms of reproducibility, Seer had substantially lower CVs, so it was much more reproducible. But I think there is a reason for that, which is that we've invested north of $240 million developing this product over the last seven years, while some of these other companies are buying commercial particles off the shelf, putting them into an assay, and replicating our workflow, which we publish.
I've spent twenty years of my life building and engineering particles and doing this work, and it is very complicated; doing it robustly and reproducibly is very challenging. I think the differences are that those products are coming to market at a very inexpensive price point, and Seer is a premium product. But Seer delivers the kind of data that, to someone who values their samples, is actually quite important. Because if your samples are the most valuable commodity you have as a scientist, you don't wanna risk them on a product where, if you ran a study today, you would reach a different conclusion than if you ran it with a different lot six months later. We will see how that goes.
I guess the one conclusion I would draw is that, clearly, if I were standing on an island by myself, maybe I was wrong. But the fact that others are now copying a method that we pioneered, invented, and developed over the course of the last few years, imitation is the ultimate form of flattery, if you will. And that's great. But I think ultimately, from a customer perspective, what they need is to be able to rely on a product and a company that can stand behind that product and continue to innovate.
I think the Proteograph XT is an exceptional product, but my expectation is that if and when anybody else catches up to XT, which I don't see being in the realm of possibility anytime soon, my next product will smoke XT, and they'll have to catch up with my next product, which I'll launch when I launch it. But I think these are great for the field, Yuko, because they really validate this approach and validate the need for unbiased proteomics at scale, from a customer perspective, and that's where we're headed.
Okay. That's all the time we have. So thank you so much again.
Thank you so much.
Thanks, Yuko. Appreciate it.