All right. Good afternoon, ladies and gentlemen. Please take a seat. We're about to get started. I'm Julia Qin, Lead analyst covering life science tools and diagnostics at JPMorgan. It's my great pleasure to introduce you to our next company presentation by Seer. With that, let me turn it over to Omid. Thank you.
Hi, everyone. Julia, thanks so much to the JPMorgan team for the invitation to be here. I'm Omid Farokhzad, CEO of Seer, a company I founded in December of 2017 and took public in December of 2020. In January of 2021, we brought our flagship product, the Proteograph Product Suite, to customers in limited release.
A year ago this time, we brought the product into a broad commercial release. Today I'm gonna take you through some of the progress we've made over the past year, tell you about some of the commercial traction, what the customers are saying, and then share with you more about why I believe Seer is well-positioned to open up a new gateway to the proteome and really open up the next frontier in biology.
My presentation will be followed by a Q&A with David Horn and myself. David is our CFO. Note our safe harbor disclosure, which indicates that this presentation is gonna have some forward-looking statements. At Seer, our vision is to redefine what's possible by imagining and pioneering new ways to decode the secrets of the proteome to improve human health.
Proteomics is the next frontier in biology, and we're focused on developing transformative products to really unlock the power of the proteome, which until now has remained largely locked. This enormously powerful content is undiscovered, and Seer aspires to change that, and every day we're more excited about the opportunities that are in front of us. We believe Seer is in an excellent position to change the trajectory of proteomics and open up this frontier in biology and human health.
To start, we have a technology that's broadly accessible and enables the scientific and the clinical community to access unbiased proteomics with a combination of depth, speed, breadth, and scale that can potentially open up entire ecosystems. I believe the potential of unbiased proteomics at scale may actually be larger than that of other omics today. Our Proteograph Product Suite is highly differentiated, and it's also broadly accessible. I say this because not only is it easy to use, taking the black art out of unbiased proteomics, but it also fits within a researcher's budget. The entire workflow sits upstream of a large installed base of detectors.
That detector is the mass spectrometer, of which there are about 16,000 installed globally doing proteomics work, an installed base growing at about an 8% CAGR. We believe the combination of these attributes will accelerate adoption of our system, and importantly, not just by proteomics folks, but by the broader scientific community that is looking to add this molecular information to the types of studies that they're doing and the problems that they're looking to solve.
There has been a tremendous, very large mismatch between our access to genomic and proteomic information. This was really one of the key drivers to start Seer. Access to this detailed and complex information is really key to understanding biology.
Virtually every function in the body happens through the action of a single protein, or a combination of proteins coming together to form a machine and working together to achieve it. With large-scale access to deep and unbiased genomic information over the last 15 years, we've now sequenced over 1 million genomes and over 10 million exomes.
Across the population, that's resulted in identifying over 1.1 billion genetic variants. Today, we know only a tiny fraction of the functional context of this information at the protein level. Part of the reason is that there's a gap in biology. Biology is a dynamic, complex, purposeful matrix of interactions across molecules, and that complexity goes even deeper than we had previously imagined.
As you move from the left side of the slide toward the right, you go from 20,000 genes to a log increase in the number of transcripts: with all the differences in RNA processing and splicing, you end up with 200,000-plus transcripts. By the time you get to proteins, there's another log increase, with millions of protein variants. These variants originate from the same 20,000 genes, but they can have vastly different functions. Population-scale proteomics is needed to decode the complexity of the proteome and to really annotate the function of the genetic variants, meaningfully advancing our biological insight.
There was a recent paper published in Nature that looked at exome data from approximately 450,000 individuals in the U.K. Biobank database, and the findings are really intriguing. They found a remarkable number of potential protein variants within each sample. The table summarizes the data, and it underscores how little we know about the complexity of the proteome. On a per-individual basis, there were about 3,000 protein variants that are potentially deleterious. In approximately 200 of those cases, the variants can cause complete loss of protein function. If you look at the entire 455,000 subjects that were sequenced, then the number of potentially deleterious protein variants is more than 6 million. This paper unequivocally underscores the unmet need to understand protein variants at peptide- and amino acid-level resolution.
If we do this, I believe a massive impact will be made on the diagnosis, treatment, and monitoring of disease. Now, plasma is the most accessible biosample for population-scale studies. Prior to Seer, deep unbiased plasma proteomics at scale was impractical, with the largest unbiased deep study comprising only 48 samples. The deepest study, published by researchers at the Broad Institute, reached 5,300 proteins. Now, Seer entered limited release with the Proteograph Product Suite in December of 2020, and broad commercial release a year later, in January of 2022. In this short time, we've seen customers scale their studies by orders of magnitude, with multiple studies of over 1,000 samples completed to date, achieving unprecedented depth of coverage, something that was fundamentally, completely impractical just two years ago.
We're at a watershed moment in proteomics, the likes of which we saw in the mid-2000s in genomics, where access to novel content progressively became possible at larger and larger scale, and new markets were created or end markets were expanded. The dynamic range and complexity of proteins in biological samples necessitate cumbersome workflows that take a lot of equipment, manual labor, expertise, and time, and that are not readily accessible to most labs and most scientists. These conventional approaches are fundamentally limited in their scalability, which is why, prior to Seer, the largest deep unbiased proteomics study was 48 samples. Seer unequivocally solved this problem. Our technology removes the complexity and enables access to proteomic content at a scale, speed, depth, and breadth previously not possible. Seer uses proprietary engineered nanoparticles, bringing together key attributes: an unbiased approach, deep proteomic interrogation, a rapid automated protocol, and scale that was previously not possible.
With the dynamic range issue solved as a result of these four attributes working together, our customers are now able to obtain highly accurate, reproducible, quantitative measurements of the proteome across the entire dynamic range. Importantly, these measurements are always reported with a 1% false discovery rate. This is an important point.
When we report that we see something, 99% of the time what we report is accurate. By the way, as we relax the false discovery rate to a higher number, the number of protein IDs goes up. In addition, our technology is applicable to a wide range of sample types and will work with any species, including the model organisms typically used in medical research and drug development. I believe Seer is enabling unprecedented proteomic access and novel biological insight. We're well-positioned to become the definitive tools leader in proteomics.
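To make that false discovery rate point concrete, here is a minimal sketch of score-based target-decoy filtering, the standard way unbiased proteomics search engines estimate FDR. This is an illustration under simplified assumptions, not Seer's actual pipeline, and the scores and decoy labels shown are hypothetical inputs.

```python
# Minimal sketch of target-decoy FDR filtering (illustrative, not Seer's pipeline).
# Given search-engine scores and decoy labels, keep only the identifications whose
# q-value is at or below the chosen FDR threshold.
import numpy as np

def filter_at_fdr(scores, is_decoy, fdr_threshold=0.01):
    """Return a boolean mask of target identifications passing the FDR cutoff."""
    order = np.argsort(-scores)                       # best score first
    decoys = np.cumsum(is_decoy[order])               # decoy hits at or above each score
    targets = np.cumsum(~is_decoy[order])             # target hits at or above each score
    fdr = decoys / np.maximum(targets, 1)             # estimated FDR at each threshold
    qvals = np.minimum.accumulate(fdr[::-1])[::-1]    # q-value: best FDR achievable here
    keep = np.zeros(len(scores), dtype=bool)
    keep[order] = qvals <= fdr_threshold
    return keep & ~is_decoy                           # report passing target IDs only

scores = np.array([9.1, 8.7, 7.5, 7.2, 3.1])          # hypothetical identification scores
is_decoy = np.array([False, False, False, True, False])
print(filter_at_fdr(scores, is_decoy, 0.05))           # [ True  True  True False False]
# Raising fdr_threshold (say 0.01 -> 0.05) keeps more identifications -- more
# protein IDs -- at the cost of more false positives, the trade-off described above.
```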
Over the past year, we've more than doubled our installed base of the Proteograph Product Suite in the U.S., Europe, and Asia. We've enabled our customers to drive unique insights across a range of biological research and translational applications, in diseases such as cancer, diabetes, and cognitive impairment, while empowering them to do studies at a scale previously not possible.
Seer's market opportunity is large and growing. The Proteograph Product Suite can be used to accelerate our understanding of biology and human health across both the proteomics and the genomics markets, driving demand and expansion across both. We envision a future in which entire ecosystems and end markets could be created or expanded, with customers using the Proteograph Product Suite to do deep, unbiased, rapid, scalable proteomics for a myriad of applications. Now, as I described earlier, protein variants play a critical role in biological function.
They also impact protein structure and surface conformation. This inherent biology creates a key limitation for affinity-based or targeted proteomic methods that rely on affinity reagents for detection. Affinity-based approaches involve a ligand, such as an antibody or an aptamer, that binds to an epitope on a specific protein. A typical epitope is about five to eight amino acids long, and a typical human protein is about 472 amino acids long.
On the left side of this slide, two examples of antibodies binding to three variants of the same protein are shown. The antibody that binds to the left side of these three protein variants binds to an epitope that is conserved among the three different protein variants and does not distinguish the three from each other. Conversely, the antibody that binds to the right side of these three protein variants may have its binding epitope disrupted.
In the top example, where the epitope is intact, the antibody binds normally. In the middle protein variant, a post-translational modification or an amino acid substitution, schematically represented there with that red circle, alters the conformation of the epitope and disrupts the antibody binding. In the bottom example, the protein variant is missing an entire domain, from RNA splicing for example, so the epitope is not present and the antibody cannot bind at all.
In principle, targeted approaches are unable to distinguish protein variants and may inadvertently lead to false conclusions. In the middle panel of this slide, Pietzner et al. from the University of Cambridge published a paper about a year ago in Nature Communications and experimentally demonstrated what I've schematically shown on the left, the challenges to affinity-based approaches. This is shown in their figure.
The figure in the middle is from their paper. On average, the correlation between two commercially available affinity-based methods was 0.38. By the way, the distribution of correlations is bimodal: you can have near-complete correlation or a total lack of correlation, averaging out to 0.38.
That's largely because variants of the same protein interact differently with the different affinity-based methods. The paper really underscores the importance of looking at proteins at amino acid- and peptide-level resolution with a technology that is quantitatively robust to protein variants. Now, multiple variants of the same protein can arise from a single gene, as I said, through RNA processing, and these are called protein splice forms, which you can really think of as protein variants.
This is shown schematically in splice variant one and splice variant two on the slide. The majority of human genes produce more than one protein splice form. In fact, it's estimated that about 70,000 splice forms are created from our 20,000 genes. That number is a gross underestimate, because at a population level a much larger number of protein isoforms exists, owing to genetic variants that alter RNA processing. In short, the biological complexity is huge. Affinity-based approaches are inherently unable to distinguish the different splice forms from each other, shown on the upper part of the slide. Unbiased approaches, on the other hand, survey proteins at the peptide or amino acid level, and can distinguish the different splice forms from each other.
To illustrate why this is critically important, let's look at a cohort of cancer and healthy subjects where you have two different splice forms. It's possible that if you looked at the protein at the whole-protein level using these affinity-based approaches, you would actually miss that those two splice forms differ between healthy and cancer.
If you had the resolution to tell the different splice forms apart, you might notice that one of them is upregulated in cancer while the other is downregulated in cancer. I show this to you schematically here, but let me share with you actual data. The important point is illustrated here with data from our non-small cell lung cancer study that we published in Nature Communications. Here we show four examples of proteins.
Each has at least two isoforms, and in each of these four cases, one of the isoforms is more abundant in cancer and the other more abundant in healthy, providing potentially important biomarker information. If you had looked at these examples at the whole-protein level, you would not have been able to see the abundance of each protein variant separately, missing this important biological insight. As I showed in the previous slide, unbiased approaches at the peptide level allow you to uncover protein variants and discover potential biomarkers.
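As a toy illustration of that point, with entirely hypothetical numbers, here is one protein with two isoforms that move in opposite directions in cancer: rolled up to the whole-protein level the signal cancels out, while the peptide-level view preserves it.

```python
# Toy example (hypothetical numbers) of why peptide-level quantification matters:
# two isoforms of one protein shift in opposite directions in cancer, and the
# difference vanishes when peptides are rolled up to the whole-protein level.
import pandas as pd

peptides = pd.DataFrame({
    "peptide": ["PEP_A", "PEP_B", "PEP_C"],
    # PEP_A maps to exons shared by both isoforms; PEP_B is unique to isoform 1
    # and PEP_C is unique to isoform 2.
    "isoform": ["shared", "isoform_1", "isoform_2"],
    "healthy": [100.0, 80.0, 20.0],
    "cancer":  [100.0, 20.0, 80.0],
})

# Whole-protein roll-up: the summed abundance is identical in both groups.
print(peptides[["healthy", "cancer"]].sum())                 # 200 vs 200 -> no signal

# Peptide/isoform-level view: isoform_1 is down and isoform_2 is up in cancer.
print(peptides.set_index("isoform")[["healthy", "cancer"]])
```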
Each time you run a sample, you generate tens of thousands of data points at the peptide level. Rare protein variants, post-translational modifications, and protein-protein interactions can all be interrogated. The more samples you run, the more new content you discover, and the amount of insight that can be gained is enormous.
Analogous to that 455,000-subject study published in Nature from the UK Biobank: any one subject showed only about 3,000 protein variants, but across the 455,000 subjects you saw 6 million of them. Until recently, proteomics and genomics have been largely distinct fields that rarely intersected. In recent years, though, the field of proteogenomics has emerged with the desire to bring proteomic and genomic information together in large cohort studies, really enabling connecting genotype to phenotype. The Proteograph Product Suite is uniquely well-positioned to bridge the gap between proteomics and genomics, accelerate the impact of proteogenomics, and really contribute to connecting genotype to phenotype. One of the key goals of proteogenomics is to identify genomic variants that control protein abundance levels.
These variants are called protein quantitative trait loci, or pQTLs, because the protein abundance level is treated as a, quote, quantitative trait, unquote. These studies are powerful in understanding disease mechanisms and finding suitable targets for drug discovery. To conduct a pQTL analysis, genetic variants are identified, and then protein abundance levels are measured to identify those proteins whose abundance correlates with the presence or absence of a genetic variant. When affinity-based approaches are used for pQTL analysis, these ligands bind to a specific epitope of the protein, shown schematically on the left side. Assuming that gene is transcribed and translated into a protein, that antibody binds to the epitope encoded by the exon shown schematically there.
As shown in the middle panel, variants of the protein can alter ligand binding, because you can also have a genetic variant in the same exon that encodes the epitope the ligand binds. This altered binding is often falsely interpreted as a pQTL and can be a huge unrecognized problem in a study. With Seer technology, large-scale proteogenomic studies can be undertaken at scale in a highly accurate way.
This is shown schematically on the right side of the slide, with the variant peptides in red and the non-variant peptides in blue. Because we quantify protein abundance at peptide-level resolution, we can accurately identify pQTLs at the protein variant level, allowing a better understanding of disease mechanisms and more successful drug discovery efforts.
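For intuition, here is a minimal sketch of the statistical test at the heart of a pQTL analysis; the data are simulated and the effect size is an assumption, not a result from any real study.

```python
# Minimal pQTL sketch (simulated data): regress the abundance of one peptide on
# genotype dosage (0/1/2 copies of the variant allele) across subjects. A
# significant slope means that variant is associated with the abundance of that
# peptide -- a peptide-level pQTL.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 500
dosage = rng.integers(0, 3, size=n_subjects)               # 0, 1, or 2 variant alleles
# Simulate a true pQTL: abundance drops by 0.5 units per copy of the variant allele.
abundance = 10.0 - 0.5 * dosage + rng.normal(0.0, 1.0, size=n_subjects)

slope, intercept, r, p_value, stderr = stats.linregress(dosage, abundance)
print(f"effect per allele = {slope:.2f}, p = {p_value:.2e}")
# In a real study this test is repeated for every (variant, peptide) pair,
# with multiple-testing correction across the millions of tests.
```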
Affinity-based approaches, by contrast, essentially require a tool, a hook, a ligand for each protein. When you look at the complexity of the proteome, that is completely impractical. Unbiased approaches grab a basket of proteins in an unbiased way, letting you look at the totality of the content available in the biological system accurately, quantitatively, reproducibly, and at scale. Looking back over the last year, I'm extremely proud of the Seer team and the exceptional, incredible work they accomplished together.
It was our first year of broad commercial release, and it's great to see the Proteograph Product Suite performing exceptionally well in the hands of our customers. The quality of the data being produced is fantastic. We have demonstrated the power and versatility of our platform and its use across different sample types and model organisms. We launched our Proteograph Analysis Suite 2.0 to enable proteogenomic analysis.
Commercially, we more than doubled our revenue and installed base year over year and placed our next product in the hands of early-access customers late last year. We remain extremely well-capitalized, with approximately $440 million of cash on the balance sheet and no debt, which can fund our growth for years to come. Most important, we continue to attract world-class talent and drive continued growth of our business.
As we have delivered on these milestones, we've seen our customers expand their use of the Proteograph. We now have customers spanning academic research, translational, and commercial settings, including pharma and CROs, and even applied markets. We're seeing the pursuit of a broad range of applications, from cataloging protein variants to proteogenomics, multi-omics, disease detection, biomarker discovery, and clinical studies. We're excited to see what kind of studies they do next.
Another key benefit of our platform is that the technology is inherently extensible, in that it is species-agnostic and is able to analyze not only plasma and serum, but also other biofluids across humans and model organisms. We measured the performance of the Proteograph Product Suite workflow across model organisms, urine, cerebrospinal fluid, and conditioned media, and compared the performance to neat biological samples.
We observed gains in protein coverage across all the different sample types. Importantly, in each sample we measured tens of thousands of data points at the peptide level, providing information on thousands of proteins. We have continued to push on the next set of innovations in our software as well, expanding the capabilities of the Proteograph Product Suite.
In Q3 of last year, in August, we released the Proteograph Analysis Suite 2.0, or PAS 2.0, a first-of-its-kind proteogenomic workflow that maps peptide-level data to genomic data to identify personalized variant peptides not captured in canonical reference databases. Customers can more easily connect genomic data to proteomic data and assess peptide-level disease associations. Moving forward, we plan to continue to extend our software feature set, streamline data management across our workflow, and lay the roadmap for larger-scale population proteogenomic studies.
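To give a flavor of what a proteogenomic workflow like that does conceptually, here is a simplified sketch, not Seer's implementation: apply a subject's coding variant to a reference protein sequence, digest both in silico, and flag the peptides that only exist in the personalized sequence. The sequence, the variant, and the digestion rule are all simplified assumptions.

```python
# Conceptual sketch of personalized variant-peptide detection (illustrative only):
# build a personalized protein sequence from a coding variant, digest reference and
# personalized sequences in silico, and report peptides absent from the canonical
# reference digest. Sequence, variant, and trypsin rule are toy assumptions.
import re

def tryptic_peptides(protein: str) -> set:
    """Cleave after K or R, except when followed by P (simplified trypsin rule)."""
    return set(re.split(r"(?<=[KR])(?!P)", protein))

reference = "MKTAYIAKQRQISFVKSHFSRQLEER"                 # toy reference protein
variant_pos, variant_aa = 16, "N"                        # hypothetical S17N substitution
personalized = reference[:variant_pos] + variant_aa + reference[variant_pos + 1:]

variant_only = tryptic_peptides(personalized) - tryptic_peptides(reference)
print(variant_only)   # peptides a canonical reference database would never contain
```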
With the Proteograph Analysis Suite, we've increased the computational capability for unbiased proteomic analysis of large cohorts by one to two orders of magnitude. Streamlining data analysis is a focus as we pave the way for more and more labs to adopt unbiased deep proteomics at scale. The feedback that we've received from our customers has been tremendously positive.
In 2022, I should say, we published two seminal papers characterizing our proprietary engineered nanoparticle technology and demonstrating how our Proteograph Product Suite workflow has superior performance in terms of precision, depth, and throughput compared to conventional workflows. In addition to our seminal publications, we have seen 119 posters at conferences, 19 of which are from customers, and 20 oral presentations. We received recognition from the Human Proteome Organization, or HUPO, for science and technology innovation.
We were ranked number four in the Top 10 Innovations of 2022 by The Scientist. I'm also very excited about the traction we're gaining in the scientific community around our technology. Our customers and collaborators are able to redefine what's possible by leveraging an unbiased approach. Josh Goodburg at Stanford has been able to discover novel biomarkers for Batten disease using the pig as a model organism.
Josh Coon at the University of Wisconsin is onboarding the Proteograph Product Suite to redefine his mass spec methods for translational research. Neil Kelleher is collaborating with us on a new method to enable cataloging of human proteoforms. Jennifer Van Eyk has used the Proteograph to quantify clinically relevant markers in diabetes, recently presenting her work at the NIDDK TAMIR 2022 meeting.
There's much more to come. I have visibility into customer manuscripts being submitted for peer review, and some of the data that we have seen is just tremendously reassuring and gratifying as a scientist to see. As you may remember, a year ago this time at the JP Morgan Healthcare Conference, we announced the launch of the Proteogenomics Consortium in partnership with Discovery Life Sciences and SCIEX.
I'm thrilled to announce that Discovery is now up and running, having set up a new facility in Boston with multiple Proteographs and mass specs to support the Proteogenomics Consortium. They announced last month that they're ready to receive customer samples. We look forward to supporting them as they scale their business in 2023. Yesterday, our customer PrognomiQ, a private multi-omics liquid biopsy company in the Bay Area that was spun out of Seer two and a half years ago, right before our IPO, reported that they have launched a 15,000-subject prospective clinical study, a multi-omic program core to which is deep unbiased proteomics. This is for detection of early-stage lung cancer and follows the completion of their current study, which, at 1,031 subjects, is the largest deep multi-omic study undertaken to date by any organization.
That is deep unbiased proteomics together with metabolomics, lipidomics, fragmentomics, copy number variation, methylation, and the transcriptome. They will share their data themselves, but let me just give you a brief look at the level of differences that they see between healthy and cancer across the different omics. They're expecting to present their data at the upcoming AACR and ASCO conferences. We're very excited about the publication of this paper and the further presentation and details of the data, and I'm very optimistic that they may have a leading program for early-stage detection of lung cancer. Looking forward to 2023, we will continue to drive execution against our core strategies: enabling breakthrough discoveries with the Proteograph Product Suite, demonstrating its power, catalyzing new applications and markets, and continuing to build an industry-leading team.
We are very much at the onset of this journey, the very beginning, and while much work remains, we're excited and inspired by the opportunity that lies in front of us. I'm incredibly proud of our team for the progress we made in such a short amount of time. It's been five short years since we started this company. I'm humbled to lead this organization, this amazing team. I'm inspired by their passion, their hard work, and their dedication, which have allowed us to commercialize such a transformative product. In summary, I believe we have the technology, the team, and the strategy to bring the next phase in omics to labs all around the globe. Thank you, I'll turn it back to Julia.
Great. Thank you, Omid, for the great overview. Let's welcome David Horn to join the Q&A. I can get us started, but for the audience, if you have a question, feel free to raise your hand.
No, no.
You showed a lot of great examples of some of the early discoveries that are enabled by the Proteograph platform. Maybe, you know, just starting from the big picture, where do they stand in terms of being progressed to the next step, right? Over what kind of timeframe can we see unbiased or high-plex proteomics really moving on to clinical applications?
I think the perfect poster child is looking at an organization like PrognomiQ. PrognomiQ was spun out of Seer. Seer founder Philip Ma, who obviously knew the platform well, is the CEO of PrognomiQ. In his case, there was not a need to test and become a believer; he understood the platform well, so he hit the ground running from day one. They're a large-scale organization pursuing multi-omics, core to which is deep unbiased proteomics. Very quickly, they completed multiple studies at scale; the study of 1,030 samples is a good example of that. They're now starting a 15,000-subject deep unbiased proteomic, multi-omic study as well. If you look at organizations other than PrognomiQ and ask, what is the slope or the velocity of them scaling up?
A typical customer takes about nine months to go from when the Proteograph comes into the lab until they're ready to do studies of any scale. They typically do a small-scale study in the tens of samples, and the data from that then supports doing studies of multiple hundreds of samples. We've now had multiple customers complete studies of 1,000-plus samples. PrognomiQ is the first that has announced a study of 15,000 samples. I suspect, for example, that the Proteogenomics Consortium, which is a service provider, will likely this year do studies that in aggregate will far exceed what PrognomiQ has done.
As we think about your customers continuing to scale up their studies to those 1,000-sample or, you know, even greater studies, are there any bottlenecks in terms of workflow, cost, informatics, or?
Data analysis was a challenge. If you look at the workflow for unbiased proteomics, the bottleneck was always the upstream workflow, and Seer unequivocally crushed through that. The next bottleneck is then the massive amount of data that you generate and how you process all of it. PAS 2.0 took a big step forward in addressing that. Mass specs themselves are getting faster and faster, together with continued innovation on the Proteograph as well. We should be able to do unbiased deep proteomics studies in the tens of thousands, or potentially even hundreds of thousands, of samples leveraging a platform like the Proteograph.
Right. As we think about Proteograph adoption, you mentioned some upcoming publications that should really, you know, serve as external validation for the platform. Curious, what kind of performance metrics will these publications validate? Can you just talk about how meaningful an impact those publications will generate?
I have visibility into some of this work. One particular customer shared data with us at a recent scientific advisory meeting that we had with some KOLs. This is a customer that would be considered a thought leader in the proteomics space, a user of the Proteograph. Previously, that individual had published the deepest unbiased study, in a paper in Nature Protocols. That was a small study of about 16 samples that reached a depth of about 5,300 proteins across the 16-sample study, or 4,300 in any one subject. He reported to us that with the most recent study that he ran, a 300-sample study with the Proteograph, he's reaching a depth of 6,200 proteins.
In his words, this is unprecedented, both in terms of the scale he's able to achieve and the depth of protein coverage. That, to me, is an extreme validation of the scalability of the platform, in terms of depth, speed, and size of the studies. We also have investigators that are looking at the precision, accuracy, and quantitative nature of what the Proteograph produces. Again, the Proteograph puts out data with a level of precision and accuracy such that studies as small as 200 samples are powered adequately to see a 50% change in a biomarker's concentration, and a typical biological change in a biomarker is actually much larger than that.
Studies like that can be used to support clinical utility, and we're gonna see publications like that emerge from customers as well.
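As a rough illustration of the power math behind a claim like that, here is a minimal sketch; the coefficient of variation, the significance level, and the log-normal assumption are illustrative assumptions, not Seer's figures.

```python
# Rough power sketch (illustrative assumptions, not Seer's actual figures):
# a two-group comparison of 100 vs 100 subjects testing for a 50% change in a
# biomarker's concentration, assuming log-normal abundance with a 50% total CV
# and a two-sided t-test at alpha = 0.001 (a stand-in for multiple-testing control).
import numpy as np
from statsmodels.stats.power import TTestIndPower

fold_change = 1.5          # a "50% change" in concentration
cv = 0.5                   # assumed total (biological + technical) CV
alpha = 0.001              # assumed per-test significance level
n_per_group = 100          # 200 samples total

sd_log = np.sqrt(np.log(1 + cv**2))            # SD on the natural-log scale
effect_size = np.log(fold_change) / sd_log     # Cohen's d on the log scale

power = TTestIndPower().power(effect_size=effect_size,
                              nobs1=n_per_group, alpha=alpha, ratio=1.0)
print(f"d = {effect_size:.2f}, power = {power:.2f}")   # well-powered under these assumptions
```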
Great. Now shifting to some more near-term dynamics. You noted on your recent earnings calls that, you know, obviously in light of the macro, there's some, you know, prolonged decision-making or sales cycle with your customers. Can you maybe, you know, give us more color on, you know, what the current sales cycle is like? How does it compare to historical? In light of that, are you having any customer conversations regarding, you know, maybe alternative models instead of capital placements or alternative pricing arrangements?
David, do you wanna take that?
Sure. So in terms of the macro and the elongated sales cycle, I think what we're seeing is an interesting dynamic. Certainly the macro headwinds have caused people to just be a little more cautious. There's a bit of a barbell approach here: I think pharma is certainly still spending, but for proven technologies and things. For newer technologies such as Seer's, there is, again, a little more willingness to want to test and see the data, right? See that third-party validation that you've been talking about, Julia. We end up running what we call proof of principle studies, our POP studies.
They'll send us tens of samples, 30, 40 samples, for a fee, and we'll run the POP study and then present them with that data. They're able to use that to understand the performance of the Proteograph. That just adds a little time, right, to the dynamic. If you layer the macro on top of that, it just creates a somewhat more conservative approach from folks, which we've seen. Again, with the publication of these papers this year, other presentations, and the continued growing body of evidence around the Proteograph, that'll help with that. That's certainly something we've seen in the near term.
The second part of your question around.
Any other like pricing discounts or?
Pricing discounts. Yeah. Really, what we've tried to do is take down the accessibility barrier. That's really why, again, we announced the Proteogenomics Consortium a year ago. We do have some other centers of excellence that provide this as a service model, folks that won't necessarily wanna bring it in-house but do wanna access the technology. We've seen that. We've actually done some service projects internally at Seer. We don't wanna be a service business, but for some strategic projects and strategic customers, we will do a service project every now and then to help them understand what the platform can do and the power of it.
It's really just trying to knock down the barriers around some of those accessibility issues in the near term.
Gotcha. More as a near-term bridge or lead generation.
Exactly.
Not a long-term kind of business model.
Yeah. Exactly. It's a near-term bridge for folks.
Gotcha. In terms of the customer scale-up you mentioned earlier, what types of customers are more ready to do larger-scale studies, academic or commercial? In terms of the sales cycle, are you seeing a different pattern between the two groups?
Commercial entities are more positioned to pursue larger-scale studies quicker than academic entities are. If I look at our pipeline, it's about a 50/50 split between academic and commercial entities. If I look at our closings, deals getting closed are closer to 60/40 tilted toward commercial, maybe two-thirds/one-third toward commercial. For academic labs, the grant funding cycle just takes longer, and it's also harder for them to get the kind of funding needed for very large-scale studies. The studies of 1,000-plus or multiple thousands of samples being planned are tilted toward commercial entities. We are seeing academic entities pursue large-scale studies, though. In fact, the first 1,000-sample study that was done was actually done by Oregon Health, an academic center.
That is an unusual case; we're seeing more commercial entities doing larger studies. That particular top-tier academic lab is doing a 300-sample study, and we're seeing multi-hundred-sample studies by many labs now. The thousands and many-thousands are tilted toward commercial.
Gotcha. Within your commercial mix, you know, is it more, you know, biopharma or is it more, you know, clinical labs? What kind of applications are they using these large scale studies for?
I think what surprises me most is this: remember, this is a very early-stage technology and an early commercial launch, yet the lion's share of our customers are working with patient samples, looking at clinically relevant research using the Proteograph. If I look at where the lion's share is occurring, certainly service providers like DLS are gonna grow to be very large customers; they're catering to a large number of pharma customers, biotechs, and others. Diagnostic players, liquid biopsy companies, we now have several of those as customers at Seer.
Again, the kinds of studies they do augment the work they're already doing, adding proteomics to bring additional power to the classification of the different samples they're looking at. We're also seeing drug developers looking at proteomics, both in terms of identifying the types of patients that may be suitable for a particular therapeutic approach, and in terms of potential response to a drug and the changes that may happen in the proteome as a result. I would say, Julia, the breadth of customers' interest in looking at proteins is actually quite broad.
That's great to hear. This is somewhat of a side question, but I'm curious to hear your thoughts. Guardant Health, one of the liquid biopsy companies, noted in their recent ECLIPSE trial readout that protein markers did not add to the performance of that assay, which is contrary to a lot of people's expectations. Do you think that has any implication for how people perceive the value of protein markers or multi-omic approaches?
I think if anything, it's a double-down on why unbiased approaches are needed. In other words, if you just look at the literature and pick a certain number of markers that you think are biologically relevant to your disease, I find that to be an exercise with low probability of success. If you look at the majority of the liquid biopsy companies who've used nucleic acid approaches, methylation and others, to drive classification for their test, it's always been about looking at the totality of the information in an unbiased way. It has never been to go to the literature, dig up a number of genes they find to be important, and then develop a genomic test based on that. They look at everything in large-scale studies, and then they home in on the signals that drive classification.
I think if Guardant and others who do multi-omic studies did the proteomic part in an unbiased way, without a hypothesis, looking at variants of proteins, I actually think the probability of success would go way up.
That's a great point. Back to that commercial versus academic mix. I know.
Time.
Okay, maybe just one last question. I know the current funnel is skewed towards commercial because of the macro. In the long term, what kind of mix do you think positions you best for long-term success?
David?
Sure. Look, as Omid said, we're 60/40 commercial, but the pipeline's 50/50. I think we'll continue to see that evolve over time and that we will get to that 50/50 and potentially more academics over time as well. I think kind of in that 60/40 band, you know, either way, but around 50/50 is gonna be probably where we end up.
We're out of time. Thank you so much to the Seer team.
Thank you, Julia.
Thank you.
I really appreciate it.