Hello, everyone. Good afternoon. This is Rachel Vatnsdal with the Life Science Tools and Diagnostics team here at J.P. Morgan. On stage, I've got the Seer management team with me. This will be a 40-minute session, as we typically do. It's going to be roughly 20 minutes of a presentation, followed by 20 minutes of Q&A. If any of you in the room do have questions, please feel free to either submit them online or you can send them to me directly. With that, I will pass it off to Omid.
Okay, thank you so much. Good afternoon, everyone, and thanks so much to Rachel and the J.P. Morgan team for the opportunity to present today. I'm Omid Farokhzad, CEO of Seer, a company I founded back in 2017 with the mission to bring the power of unbiased deep proteomics to researchers around the globe. We launched our flagship product, the Proteograph Product Suite, for limited release customers in January of 2021, and entered broad commercial release in January of 2022, and since then, we've made remarkable progress against our mission.
Today, I'm excited to share some of that incredible data that our customers are producing using the Proteograph Product Suite that validates why we believe Seer is well-positioned to open up a new gateway to the proteome and to enable the next frontier in biology to be explored. Just note our safe harbor disclosure, which indicates that this presentation may include forward-looking statements. At Seer, our vision is to redefine what's possible by pioneering new ways to decode the biology of the proteome to improve human health. Similar to how next-generation sequencing transformed the world of biology, we believe proteomics is the next frontier in understanding biology that will have an even more profound impact. Seer's technology is uniquely positioned to overcome the technological limitations that previously prevented the scientific community from discovering the biological insight embedded in the proteome.
Over the last 20 years, there have been significant advances with large-scale access to deep genomic information, resulting in the identification of over 1.3 billion genetic variants across these studies. Yet, we still don't have functional context for the majority of this genomic information at the protein level. Biology is complex, and as you move from the left side, from approximately 20,000 genes, toward the proteome on the right, it's estimated that we'll encounter millions of different protein variants with distinct functions that originate from the same 20,000 genes. Variants of proteins originating from the same gene can have vastly different functions and, in some cases, even opposite functions. These have been reported in a large and growing body of peer-reviewed publications, exemplified by some of the papers I have listed for you on the right.
Protein variants range from single amino acid differences, to alterations in entire protein domains, to post-translational modifications, and we believe the best way to discover the biology embedded in the proteome is through deep, unbiased proteomics at scale. Large-scale access to deep, unbiased proteomics will help decode the complexity of the proteome and annotate the function of the genetic variants that have been discovered to date, meaningfully advancing our biological insight. Now, Seer is uniquely positioned to power deep, unbiased proteomics. Plasma is the most accessible biosample for population-scale studies. Prior to Seer, deep, unbiased plasma proteomics at population scale was simply impractical. In fact, the largest study done on plasma in a deep way was 48 samples, and the deepest study, published by researchers at the Broad Institute, went to a depth of 5,300 proteins.
That was just before the launch of the Proteograph. Since the launch of the Proteograph, we've seen exponential progress in the size and depth of the studies our customers are running. We shipped our first Proteograph to a customer in late 2020. By 2022, just two years later, we were seeing multiple studies of over 1,000 samples completed, and the deepest customer study run was over 6,000 proteins, and we were not done yet. Now, in 2023, our customer studies delivered an average of 7,400 proteins per study, with over 10,600 proteins observed across studies. The scale of studies was also beginning to rise sharply, with one of our customers, PrognomiQ, initiating a study of 15,000 samples.
Importantly, the number of proteins that our customers are seeing largely represents the number of protein groups in their studies (the technical mass spec term), meaning that, in large part, they are not accounting for protein variants arising from single amino acid changes, small insertions or deletions, or PTM variants. If you accounted for these, the number would be much, much larger than what I just stated, exactly as we saw with genomic variants. We believe we're well-positioned to become a definitive tools leader in the proteomics space. Our Proteograph Product Suite, including the recently launched Proteograph XT Assay, enables nearly any lab to take on large-scale proteomics through a fully automated solution for deep, unbiased proteomic studies at scale.
The Proteograph XT assay increases sample throughput by 2.5 times over our previous assay without compromising performance, all with less than 1 hour of mass spec time, further enabling at-scale studies. A single technician can now run hundreds of samples per week with minimal hands-on time. Researchers can analyze 10,000 samples per year with 1 Proteograph XT and 1 leading mass spec, enabling large-scale, deep, unbiased proteomics. Seer solves some of the key challenges that conventional deep, unbiased proteomic platforms face in addressing the wide dynamic range and complexity of proteins in biological samples. These include limitations in scalability, cumbersome workflows, the need for a lot of equipment, manual labor, and, of course, time. Seer has uniquely solved these problems through our proprietary engineered nanoparticles by removing complexity and enabling access to proteomic content at a scale, speed, and depth that was previously not possible.
As a result of these four attributes working together, our customers can obtain highly accurate, reproducible, quantitative measurements of the proteome across a broad dynamic range. Importantly, these measurements are made at a 1% FDR. Our technology is applicable to a wide range of sample types. Importantly, it works with samples from any species, including the model organisms typically used in medical research and drug development. The Proteograph enables differentiated biological insight, including protein isoforms, protein variants, and more robust pQTLs, and, frankly, a range of applications such as biomarker and drug target discovery, model organism research, and QC in volume manufacturing. We're at the beginning of large-scale proteomics, and these types of studies are just the tip of the iceberg of what our technology will enable. Seer's market is large and growing.
The Proteograph Product Suite can be used to accelerate our understanding of biology and human health across the proteomic and genomic markets, driving demand and expansion across both. With discovery of novel proteomic content uniquely enabled by the Proteograph, and a demonstration of the biological value of this content by our customers, we expect entire ecosystems and end markets could be created or expanded. Looking back over the last year, I'm extremely proud of the Seer team and the incredible work we've accomplished as we navigated a challenging macro environment. In just 12 short months, we have enhanced our technology by launching the Proteograph XT assay and making updates to the Proteograph Analysis Suite. We removed barriers to enhance access by launching the Seer Technology Access Center, launched our Seer Strategic Instrument Placement program, and expanded our Centers of Excellence program.
Finally, we've expanded our commercial reach and validation by onboarding 4 new distributors, adding publications and preprints, and receiving 2 important ISO certifications ahead of schedule. I believe that these accomplishments are setting a solid foundation for growth in 2024, and I sincerely want to thank my team for their hard work. In June of 2023, we launched the Proteograph XT Assay Kit at the American Society for Mass Spectrometry. The feedback for this product has been fantastic, and we've already upgraded approximately half of our installed base to run the new assay kit. Proteograph XT improves throughput by 2.5x compared to our first product, while concurrently reducing the mass spec time per sample by an equivalent factor.
A portion of the data on the right was presented by Thermo Fisher Scientific at the ASMS conference, where they showed impressive performance with their newly launched Orbitrap Astral, detecting approximately 760 proteins in neat plasma. The same plasma sample, however, when processed in conjunction with the Proteograph XT, results in the identification of over 6,000 proteins, representing an 8x improvement, with equivalent protein abundance CVs with and without the Proteograph, confirming a high degree of reproducibility for our workflow. Notably, those approximately 6,000 protein identifications are on a per-sample basis. When you look at proteins observed across, for example, a study of 2,500 samples, well over 10,000 protein groups can be detected in plasma using the Proteograph XT working with the Orbitrap Astral.
It's exciting to see how the Proteograph consistently improves mass spec performance. Over the years, the performance of mass spec platforms in terms of depth of protein coverage has been improving, and I expect it will continue to do so over time. The dark blue bars in this chart represent the performance improvement across generations of mass specs from Bruker and Thermo Fisher Scientific. The teal color represents the performance improvement of these mass specs with the addition of the Proteograph Product Suite. First, you see that the fold improvement in performance that the addition of the Proteograph Product Suite provides is preserved across mass spec generations. And second, you see that the lion's share of the overall performance in terms of depth of protein coverage can be attributed to the Proteograph Product Suite.
With our own improvements to the Proteograph Product Suite, the additional gains in protein identification and quantification attributed to the Proteograph Product Suite are also coming at significantly higher throughput and lower sample volume requirements. These elements combine synergistically to provide even more value for our customers and enable large-scale, deep, unbiased proteomics. Here you can see a visual representation of the potential for biological insight when we compare the coverage of the Proteograph XT to a commercially available high-plex, affinity-based panel. As I mentioned earlier, over 10,000 proteins are detectable in plasma within about 2,500 samples. These proteins comprise over 150,000 peptide measurements, covering over 1,900 Reactome biological pathways. These 1,900 pathways map to the 29 categories shown on this slide.
When we look at the coverage of these categories by the Proteograph, in teal, and compare it to the coverage of the high-plex, affinity-based panel, shown in dark blue, we see that the Proteograph covers these pathways far more completely. I'm also pleased to announce that this week we placed our Protein Discovery Catalog on our website as a fully accessible and searchable catalog. Our commercial strategy for the Protein Discovery Catalog is simple yet powerful: remove barriers to access, empower researchers, and drive new lead generation. With over 10,000 proteins across 1,900 pathways, this catalog provides our first published index of the unprecedented depth of empirically observed proteins captured by our nanoparticle technology to date. This web tool offers versatile searches by disease associations, Reactome pathways, protein names, UniProt keywords, and more.
The catalog includes proteins linked to 150,000+ peptides that can serve as data points for potential biomarkers, including proteins not associated with diseases as of today, making it a rich source of biological insight, powered by data from the Seer Proteograph XT and the Thermo Fisher Orbitrap Astral mass spec. Our catalog of more than 10,000 proteins is just the beginning. It will continue to expand over time as more samples are interrogated by the Proteograph platform and more mass spec-based proteomic data becomes available. In a 2021 Nature publication, scientists looked at exome data from approximately 455,000 individuals in the UK Biobank database and found a remarkable number of protein variants within each sample. This table summarizes their data.
They found that in these 455,000 individuals, the number of potentially deleterious protein variants was more than 6 million, with over 3 million variants potentially causing changes to the structure and the binding of an affinity reagent. All of these protein variants speak to the complexity of the proteome at a population level. Affinity-based methods that rely on specific structures or binding motifs may be disrupted by these variants, making it challenging to extract biological insight. This paper dramatically demonstrates the scale of protein variants in a population and underscores the unmet need to measure protein variants at peptide- and amino acid-level resolution. If we do this, it will have a massive impact on the way we diagnose, treat, and monitor disease.
As I showed in the previous slide, it is important to understand proteins at peptide-level resolution, allowing us to measure protein variants and discover biomarkers, something that today is only served by mass spec-based methods and only enabled at scale with the Proteograph Product Suite. Each time you run a sample, you generate tens of thousands of data points at the peptide level: rare protein variants, post-translational modifications, and protein-protein interactions can all be interrogated. The more samples you study, the more content you can discover. In 2023, we announced the launch of our STAC program, where we provide access to the newly launched Proteograph XT and the Thermo Fisher Scientific Astral. We've been seeing strong demand for the STAC, which, by the way, stands for Seer Technology Access Center, further exemplifying the power of the Proteograph XT and accelerating adoption.
We've now served 48 organizations, including 6 large pharma, with a strong pipeline of opportunity going into 2024. Additionally, we have an average of 7,410 protein groups per plasma study and a 6.8x average fold improvement over neat plasma. STAC was an important step to enhance the accessibility of the Proteograph Product Suite. Since the beginning of 2022, Seer has published several high-impact peer-reviewed papers elucidating the mechanism by which our proprietary engineered nanoparticle technology enables the Proteograph Product Suite. These publications demonstrate how the Proteograph Product Suite workflow delivers superior performance in terms of precision, depth, and throughput compared to conventional workflows, and identify previously undiscovered links between protein variants and lung cancer progression, for example. In addition to our publications, we're aware of 180 public presentations to date and 48 poster presentations by customers.
There are now 8 manuscripts on bioRxiv and 4 peer-reviewed publications. I look forward to seeing that number grow throughout the year. On this slide, I'm focusing specifically on customer publications. This has been an exciting year, with 6 third-party manuscripts submitted to preprint servers to date, and a handful of additional papers under peer review that were not submitted to preprint. These publications cover a range of applications, including analyzing pQTLs, understanding the body's response to spaceflight, finding new biomarker signatures of Batten disease, and validating the precise, quantitative data enabled by the Proteograph. In October, 2 new papers were added to bioRxiv, and most recently, in January, another paper from PrognomiQ was added to medRxiv, which I will share in more detail shortly.
Shifting now to our financials, we ended the first nine months of 2023 with $12.2 million in revenue, growing 12% over the prior year. We ended the third quarter of 2023 with $381 million in cash, cash equivalents, and investments, and no debt. We have a strong balance sheet and are well-capitalized to capture the large opportunity ahead. Changing the status quo is an enormous undertaking, and when we started, there was a huge mountain to climb in front of us. Over the last few years, we've been enabling the discovery of content and scaling our studies and throughput, making important progress climbing this mountain.
Now, we believe we're on the cusp of an inflection point for widespread adoption and revenue growth as our customers around the globe begin to demonstrate and exemplify the powerful biological insight that is uniquely possible by leveraging the Proteograph Product Suite. Looking ahead this year, we will continue to drive execution against our core strategies of driving evidence and publications with the Proteograph Product Suite, continuing to enhance access, innovating with our products, and expanding our applications. While much work remains, we're excited and inspired by the opportunity that lies in front of us. Now, I'd like to spend some time highlighting four distinct Seer and customer studies that were uniquely made possible by deep, unbiased proteomics at scale using the Proteograph Product Suite. We expect these studies to be published in 2024.
This first slide highlights a study that Seer performed with Alzheimer's disease luminaries Dr. Steven Arnold, who leads the AD Clinical and Translational Unit at Mass General, and Dr. Bradley Hyman, who leads the AD Research Unit at the MassGeneral Institute for Neurodegenerative Diseases. Using a $2 million NIA SBIR grant to Seer and MGH, this study looked at 1,800 plasma samples, comprising both controls and individuals diagnosed with cognitive decline, including AD. The samples were collected over a 10-year period, allowing us to investigate proteins associated with progression of, or protection against, cognitive decline. Comparing the AD-affected individuals with controls, we identified 138 proteins that were up- or down-regulated in AD individuals. Of these 138 proteins, only 44 have been previously associated with AD; the remaining 94 represent putative biomarkers of AD, potentially highlighting new biological insight.
We used the clinical information to identify the point of significant cognitive decline and determine proteins that separated the population into fast or slow decliners. We identified 8 such significant proteins and are now investigating how these results may be advanced to develop a score indicating the likelihood of cognitive decline in a particular timeframe. The reason I say such studies are uniquely made possible by deep, unbiased proteomics at scale, using the Proteograph Product Suite, is that 55% of these 138 proteins are not present on a commercially available high-plex affinity panel. So researchers would not know to look for these proteins, and there is no other practical way to do a deep, unbiased proteomics study on 1,800 plasma samples other than leveraging the Proteograph Product Suite.
The manuscript describing these findings was made available on bioRxiv last week and will be submitted for peer-reviewed publication. It represents the largest deep, unbiased AD proteomic study completed to date, with differentiated, novel biological insight. This slide shows a study performed by scientists at PrognomiQ using deep, unbiased proteomics as part of a multi-omic-based blood test for early-stage lung cancer detection. This was a large case-control study that included those at high risk for lung cancer. The compliance rate for the current low-dose CT cancer screening is only 5%-10%. PrognomiQ is developing a blood test to help close this compliance gap by identifying patients for subsequent lung cancer evaluation.
With samples from 2,513 individuals, they used the Proteograph to measure over 8,300 proteins, which were then combined with over 2,000 RNA transcripts and 1,000 metabolite measurements to drive a multi-omics classifier. As you can see on the left, the proteomic data performs extremely well, with an AUC of 0.91. Adding other omics data increases the AUC to 0.96. The classifier represents potentially best-in-class performance, achieving a sensitivity of 80% for stage one and 89% for all stages of lung cancer at 89% specificity. The middle of the slide shows the top features contributing to the classifier, ranked by their importance. The top features are heavily enriched for those derived from the Proteograph assay, unbiased proteomics.
When we look at where these proteins fall on a standard plasma concentration curve, the right-hand panel shows that they span the entire dynamic range. Therefore, the only practical way to do this study, deep, unbiased proteomics at scale to drive this classifier, is with the Proteograph Product Suite. This slide highlights a study from Dr. Nate Basisty of the National Institute on Aging, using the Proteograph to uncover molecular signatures of aging in mouse models. He conducted an experiment on 896 serum samples, with analysis currently underway. What's represented here is 20 microliters of serum per sample, using Seer's low-volume protocol for the Proteograph platform. His team observed over 4,300 protein groups, which, on average, is a 4.5-fold improvement in depth of coverage over neat serum.
In a pilot program of 30 mice, he identified 64 proteins that were significantly differentially abundant between old and young mice. The pathways involved, shown on the right, include those associated with lipid and triglyceride transport and metabolism. A key feature of the Proteograph platform is that it is species-agnostic; considering the differences in protein epitopes across species, currently available affinity-based panels for mice cover only a very small set of proteins. In this case, none of the 64 proteins identified in this study were on the commercially available 96-plex affinity-based mouse panel. This final example comes from Dr. Brendan Keating and colleagues at NYU, who are leveraging the Proteograph in their pioneering research on the xenotransplantation of pig hearts into 2 brain-dead human recipients. The investigators collected a wide range of data from the recipients, including deep, unbiased proteomics using the Proteograph platform.
Using the Proteograph XT and Orbitrap Astral mass spec workflow, they were able to measure over 8,700 proteins. This includes over 6,850 human proteins originating from the recipient and, from the same mass spec data file and the same samples, the simultaneous measurement of an additional 1,850 pig proteins originating from the transplanted organ. Here, we see in the top panels that the human ortholog of lactate dehydrogenase is increasing, a sign of increased anaerobic respiration, suggesting impaired oxygen perfusion and heart failure. The decrease in the pig ortholog of lactate dehydrogenase, and the decrease in both human and pig cardiac troponins, is consistent with the observed growing damage to the pig organ. The pig proteins first appear at time point 0, after the transplant.
These data, part of a pending publication, are providing a high-resolution view of changes at the molecular and cellular level, and will contribute to furthering our understanding of the intricacies of xenotransplantation. This is a truly amazing study demonstrating the power of the Proteograph and deep, unbiased proteomics at scale. To demonstrate the breadth of our technology, we're continuing to make progress with our applications lab, which has expanded our demonstrated protocols, sample types, and applications. We've seen success across a wide range of different sample types, including tissue homogenate, conditioned media, organ perfusate, biologics, CSF, and model organisms. We're also exploring a range of applications, including host cell proteins, which is important in the QC of biomanufacturing. We're constantly working with our customers to see how the Proteograph Product Suite can deliver biological insights never before anticipated.
I'm incredibly excited by the year ahead as we continue to drive execution against our core objectives. I'm proud to lead such a talented team that has taken us so far in just a few short years. We have launched a transformative product that is truly unique, and our customers are producing even greater data than I could ever have imagined. I believe we have the technology, the team, and the strategy to bring the next phase in omics to labs all around the globe. And with that, thank you, and now I'll turn it back to Rachel for Q&A.
Perfect. Thank you so much for the presentation. So one of the first areas that I really wanted to ask on was this topic of the macroeconomic backdrop and how that's kind of impacted Seer and many of the other players in the industry's businesses this year. So can you comment on, you know, in terms of the macro-driven headwinds that you've encountered, would you say that the situation has improved or deteriorated as we've moved into 2024? And if the macro impact did play into Q4, how much of a magnitude was that as well?
David, do you want to take that?
Sure. So certainly in 2023, I think we saw the macro backdrop impacting the business, really on two different fronts, the way we saw it. First, on the academic side, there was just the uncertainty around the budget process and what's going on in Congress in terms of agreeing to budgets. We had the potential shutdown in September. Just anecdotally, in the fourth quarter, we had an NIH customer who was going to purchase and who said, "I've got to push the pause button on it." So we're seeing that kind of choppiness, not a lack of desire to acquire the technology, but just the timing of it given the uncertainty of budgets.
On the commercial side, and primarily I'm speaking about large pharma, we have continued to see that they are spending, but for new technologies, I think the hurdles are just a little higher. So whereas someone could have brought a new technology in-house at, say, the director level, that decision is now getting kicked up a level, and they're just being a lot more thoughtful and circumspect about bringing in a new technology like Seer. So again, it's temporal, kind of elongating the process, whereas, prior to the macro environment turning, it was a little more expedited.
Got it. That's helpful. Then maybe just now that we're moving into 2024, could you kind of give us the range of outcomes on how we should be thinking about top line for 2024, given the number of moving pieces we've seen across customer budgets, China, the lengthening product cycle? You know, where could we land on the top line?
I think in terms of looking at this year, we're continuing to be relatively conservative in terms of the outlook. We feel like there are still some macro uncertainties out there, certainly. But also, as Omid mentioned, I think we're just in that phase of really demonstrating the biological insight. We're super excited about the 10+ papers we expect to be published this year. Those will be the first papers to really demonstrate the power of the technology we have. But until that body of evidence really grows, I think we're going to continue to be relatively modest in terms of growth. Certainly, we feel like once that body grows, that growth will accelerate.
But I think for 2024, we expect it to continue to be relatively modest.
Got it. That's helpful. Maybe can you just spend a minute talking about your visibility into that order book as you head into 2024? Can you spend a minute also talking about the pipeline? What does that customer composition look like within your funnel right now?
Okay, I'll take that. If I look at the pipeline, it's a mixture of academic and commercial, and I think that mixture is about 50/50. If I look at the way we have gotten traction in terms of going from an opportunity to a close, that ratio is closer to 60/40, or maybe even 2/3 to 1/3, tilting toward commercial being the predominant type of customer. Pharma customers have been a tough nut to crack, in part because of the macro, as David highlighted, and because we are a relatively new technology. Frankly, it was 2023, April of 2023 specifically, when we saw the first customer publication come online on bioRxiv. Since then, about 10 have followed, between bioRxiv and other papers.
So we're very early in terms of adoption, but the interesting part is, the pharma customers that have brought in the Proteograph are in fact the most predictable, recurrent-revenue customers, because their use is actually quite consistent from quarter to quarter. So once a customer experiences the power of the platform and what it enables, we're actually seeing consistent use among those customers, with pharma being a prototypical one in terms of where you can have recurrent revenue and good visibility into what the pull-through may be for a customer. Academic customers can be more lumpy. They'll buy some consumables, they'll run some studies, and then they may go into hibernation as they're analyzing data.
A little bit harder for us to predict those types of customers, but the business is a recurrent revenue model business. The reason our visibility is reasonably poor is because, A, the numbers are small, and B, we're just beginning to see traction in terms of customer validation, publicly, that's gonna enhance our adoption. Which is why I think 2024, we should still be a bit conservative. I mean, we are a growth story. The opportunity is massive. I, for one, wanna see massive quarter-over-quarter or year-over-year growth, but I think that will only come on the back of validation, which we're just beginning to see come from our customers.
Got it. That's really helpful context. Maybe just looking at the cash burn here. So Seer ended the quarter for 3Q at $380 million of cash. So how should we think about cash burn as we head into 2024 and overall cash runway as well?
Yeah. So, we're obviously very prudent with the cash we have. We're very thankful to our investors for bestowing us with this amount of money, and we're very careful about how we use it. I think about cash burn as free cash flow. We ended 2022 with about $71 million in negative free cash flow. For 2023, we'll be in the $60 million area, and for 2024, we'll likely be even lower than that. We'll give more detail on that on our year-end call.
But we continue to see our burn declining because we feel like it's just the prudent thing to make sure that we're deploying capital appropriately.
Perfect. That's helpful. Maybe shifting over to some of these various alternative programs that you guys have used for customers to enable them to get access to your technology, just given the macro backdrop. Could you kind of highlight some of those? You've mentioned the Seer Technology Access Center that was launched during 2Q. So how much traction have these various programs really driven so far, and then how much more runway do they have to go?
I think the STAC was a fantastic thing to do. And frankly, in hindsight, I wish we had done it earlier. We launched the company saying, "We are not going to be in the services business. This is not what we're interested in. It's a distributed sales model," and we avoided doing services. We did some proof-of-principle studies. Right out of the get-go, we went and created Centers of Excellence, and we said, "If somebody wants a service, they should work with our COEs." The reason I think that may have been a miscalculation is that we hadn't done the heavy lifting to develop the market in order for a COE to be able to sell through it.
So in the early days, the market development weight should be on our shoulders, and I think the STAC does a fantastic job with that. We launched the STAC with the Proteograph XT as the assay kit, together with the Orbitrap Astral as the detector, with Thermo Fisher. And the demand has been very robust. I have to say, for the first time as CEO of this company, we actually have a backlog, which is a nice thing to see. But the thing is, we are not going to expand capacity, because I don't want to be in the services business. So we want to make sure that customers are being cared for; if a customer's demand is high, we will shift that to our COEs.
We want to serve the COEs, but we just want the STAC to be a mechanism, a pathway, for customers to access and see the power of the Proteograph Product Suite and the kinds of data they can get from it. So I think the STAC was fantastic. Now, of course, for reasons that you can imagine with data privacy and others, the STAC is largely servicing a US customer base. But demand for those services is global, and of course, we're not able to address the rest of the globe in terms of the demand for a STAC. We also launched the SIP program, which was also meant to, again, lower the bar, the friction, to access the Proteograph and see what is possible.
SIP stands for Strategic Instrument Placement, and what that means is that not just anyone who raises their hand and says, "I want one of these SIPs," gets an instrument. It's meant to go to a customer that is likely to be a high-volume customer, ideally one that may publish or talk about it in the community and be a representative, if you will, of what the performance looks like, and it's always tied to buying consumables upfront. Don't think of it as a reagent rental model, because the idea isn't for the instrument to sit there while they buy consumables. Really, the SIP program is designed so that they take an instrument, and then there is a purchase expectation that comes a bit later.
We're too early in the SIP for me to be able to guide you in terms of what that uptake looks like, but so far, as we had predicted, a number of those customers have gone from taking the loaner to then opting in and purchasing the instrument. Because once the loaner ends, we would take it back, but if they like what they see, then the only way to keep it is to buy the instrument. And so the loaner program has been a great program. The SIP program also decreases friction to access. So both programs have served us well in that context.
Perfect. Thank you. And with that, unfortunately, we are out of time. So thank you, Omid and David, for joining us up here today, and thank you to everyone in the room.
Thanks, Rachel.
Thank you so much, Rachel.