Welcome to the Canaccord Genuity MedTech, Diagnostics and Digital Health and Services Forum. My name is Alex Vukasin. I'm a member of the Life Sciences Tools and Diagnostics team here at Canaccord. Pleased to have Jeff Hawkins, CEO of Quantum-Si, here with us today. Jeff, thanks for joining us.
Thanks for having us.
Really briefly on QSI: QSI utilizes a semiconductor chip technology to enable high-visibility, single-molecule, next-generation protein sequencing. The offering consists of a sequencing instrument, Platinum, and analytical software, QSI Cloud. Jeff, really briefly, could you walk us through QSI's current product portfolio? How does your near-term focus on Platinum and the upcoming Proteus platform reflect your broader vision for single-molecule protein sequencing?
Sure, yeah. At a high level, the first-generation technology is an instrument called Platinum, and there's a sort of sister instrument called Platinum Pro, with slightly different capabilities between the two. As you said, it's based on a semiconductor technology. It's a pretty classic razor-razor blade business model. People buy the machine. They buy consumables that allow them to prepare the sample and sequence it. And then they do the analysis in our cloud. That technology has been in the market for about three years now. We're really using that to develop the market, work with key customers across academia, biopharma, defense, agriculture, a lot of different end markets. You know, we're starting to see the fruits of that work with publications and presentations at key industry conferences. Now, turning towards Proteus, this is a big architecture change we've made with the platform.
We announced that originally in November of 2024, where we now move to a passive consumable, essentially a patterned set of wells on a fused silica backbone. The instrument has the optics in the machine. Moving the optics from the consumable into the machine gives us significantly more sequencing output, gives us a much lower-cost consumable. When we intersect it with what we're doing with the chemistry, you know, we'll really allow users to deeply interrogate proteins, you know, whether for single amino acid variants or post-translational modifications, you know, things that are really important in sort of the study of disease and development of new drugs.
You recently had third-quarter earnings. Could you provide a quick summary of some of the puts and takes from the past quarter?
Yeah, we just went through the earnings. On the key highlights: one of the things we have done this year, with the NIH headwinds and some of the capital constraints in the academic market, was to begin to offer some alternate ways for customers to purchase the machine. Historically, customers would purchase the instrument as capital, then buy the consumables. We've opened that up over the last couple of months to allow users to get the platform without the upfront capital cost in exchange for purchasing reagents and using those over time. Since we opened that up a couple of months ago, we've seen 12 machines go out under that program, more than half of those going into academia, which was really positive for us. That's a key area to really drive those publications. We shared a bit on the pipeline. You know, we've had five publications this year.
We have a rich pipeline sitting behind that. On the Proteus front, I think the big milestone we shared, and then built upon yesterday at our Investor and Analyst Day, was that we've completed sequencing on more than 50 runs on the Proteus prototype systems we have in development. That was a milestone we set out to achieve in 2025. You know, we were able to achieve it at this stage in the year, and in a really robust way. We're quite pleased with that.
On the data that you produced utilizing the Proteus instrument, you provided some snapshots at your Investor Day yesterday. How would you characterize the quality of that early data? What risks does this milestone allay for the launch? Are there any other key engineering or chemistry hurdles remaining between now and the launch?
Yeah, the data was presented yesterday by Todd Rearick, our Chief Technology Officer. I think when you develop a new technology like this, you always have certain expectations for the improvements you think you're going to see. You're hoping to get more than just more sequencing output or more automation; you're really aiming to make a big leap in performance. Normally you don't see it till the end of a program. You have your prototype phase, then you get into your integrated systems, and you do all the integration work to achieve that.
I think what we were able to show yesterday is even at this early stage in the program, we're seeing across all of the metrics we look at for sequencing, everything from the number of reads we're getting, the length of those reads, the quality of that, sort of some of the raw performance, like signal to noise, all of these things on Proteus are exceeding what we see on our current first-generation Platinum machine. That is a great outcome because we're nowhere near fully optimized in terms of our chemistry on the Proteus platform. We expect that will just improve over the course of the next year as we move towards launch. You know, I think Todd said it best yesterday. You know, the data we showed yesterday validates this change.
It proves to us that this architecture works, that this architecture is going to give us the types of leaps and improvement we wanted. Now it's really about going and doing what some of the R&D scientists might say is the more boring part. You're not doing the raw innovation and ideation; you've got to do more of the development. You've got to replicate the work. You've got to build that fleet of integrated systems and do system integration, the transfer to manufacturing.
These are all super important steps in the path to commercialization, to make sure on the other end you've got something you can manufacture and produce at high quality and sell to your customers, but maybe a little less innovative, which, if you're a scientist working on it, is maybe a negative. If you're an investor or an analyst, I think you view that as really having de-risked the program; it's really more about just going and doing that work and staying focused on it.
This leads to my next question. Proteus leverages a whole new architecture compared to the original Platinum. Can you compare and contrast the differences between the original and the new architecture and why you made that change?
Sure. Yeah, so the single biggest difference is that Platinum is based on a CMOS chip. Think of that as having sort of the brains of the operation; the optics are all in the consumable. That lends itself to a really small benchtop device, but there are some limitations, right? That's a complex consumable to make, with a fairly high cost, and there are going to be limits to how much you can scale that consumable. If you think about proteomics at the highest level, the real challenge of sequencing many proteins, doing this really deep sequencing for PTMs, is that you need a lot of sequencing output. That was going to be very hard to achieve on that platform.
When you move over to Proteus, what we did is take the optics and, rather than have them in the consumable, put them in the machine. That way we can make a very simple consumable that's very low cost, but also very scalable. As an example, we have about 2 million of these wells on the current device; the first version of Proteus will be 80 million on a same-size chip. You're seeing about a 40x change in sequencing output just in the first generation. In addition, with Proteus, you can run multiple samples at a time, and you get automation of all the preparatory steps that people do on the bench today; that will all be done in the machine, which really makes it a lot easier for users.
Very importantly, now you're on an architecture that can scale into hundreds of millions of reads and ultimately to billions of features on that consumable and billions of sequencing reads. If you want to get to the Holy Grail, de novo sequencing, that's the scale you're going to need. Now we're on that architecture and have a realistic and actionable path to get there.
Taking a look at your R&D engine for a second, is this new architecture something that you had been developing since well prior to the Platinum launch? Or is it something you developed through your R&D engine just before, or perhaps even after, the Platinum launch? Where did this idea originally come from?
Yeah, I think this is probably why people are a little surprised. I think most people think we must have been working on this for a long time to have made so much progress. I joined the company in October of 2022. We obviously had the CMOS-based approach with Platinum, and there was actually a robust CMOS-based consumable roadmap in place. There was not a Proteus architecture; it was really a CMOS-based technology roadmap. In concert with my R&D leaders, really looking at the markets and different things, we determined we needed a big architecture leap. We needed to do something. This is really something we started to work on in earnest in 2024.
I think to the credit again of our R&D team, they were able to take an understanding of the market and what we were trying to achieve, come up with an approach to do it, and execute that innovation cycle well. That allowed us to talk about it with some early feasibility data in November of 2024. Importantly, the depth of talent in R&D across all the different functions you need to build a protein sequencing device, we have that. Once we activated all those folks on this, once that feasibility had been established, that's how we were able to say we can do this in two years; we can get this out by the end of 2026. I think a lot of people thought that's a big ask.
I think a lot of people were a little worried that might be a difficult timeline to achieve. We have stayed on track with that the whole way to this point.
The current plan is to launch Proteus by the end of 2026.
That's correct.
You spoke to a $300,000-$500,000 price point yesterday at the Investor Day. I'm just curious if you could unpack the puts and takes on why that pricing might be logical for the offering at this juncture, recognizing that things could change between now and the actual launch of the instrument.
Yeah, we gave out a range yesterday, and we thought about a few things when we did that. Our current device has a list price of $125,000. When we think about the capabilities of Proteus, everything from what types of analysis it can do to the sample throughput, all those different factors, we know it should have a premium over the current platform. You can look out and say, okay, let's look at the highest-end mass spec machines that have a lot of capabilities. Those run $1 million or $1.2 million. We think that also puts a constraint on the number of labs globally that can adopt these tools and implement them and really drive research forward.
We're looking to capture the value we think we're delivering, but not artificially throttle the level of distribution this platform could get to. We feel like a number somewhere in that range captures the value we've created, but enables us to get the installations and get the installed base going. Because ultimately, in these business models, it's about getting the consumable pull-through long term, right? That's what you want to see. We're just trying to balance that value capture with retaining accessibility. You get up to a million or more, and we think that really starts to constrain who can afford that platform.
You get to 2026, and you expect to have the first integrated systems completed. By the summer of 2026 is when you anticipate having the early access program. For that program, what types of customers are you targeting to include?
Yeah, I think what we want to do with the program is go through a series of milestones. You named off a couple; we've tried to give out several for next year to really help investors understand how to keep track of our progress. First the integrated systems will be working and operational in our lab, then there'll be sequencing on those, and we'll be able to deliver data like we just did to really demonstrate that.
That key activity in the summer places those machines at a point in the program where we think they're ready for customers to test and compare to some of the other technologies they have in their lab, but still early enough that if we were to find something, we could incorporate those learnings into the remaining months of the product development program before we move into internal validation testing and launch. As for the types of people we'll be looking to work with, obviously it won't be a huge number; we'll keep it pretty constrained. Everybody who's been in the industry for any period of time can sort of think of the types of places we'd want to be.
We're going to want to be at big centers that have core labs that have multiple different platforms that we could compare our performance to, have access to highly characterized samples across a lot of different application spaces. Those are the types of facilities we'd want to work with. Ultimately, we hope with quality data, those folks will also get out and talk about it and publish their results. That's how we're sort of thinking about the profile of that customer.
Switching gears a bit here, your recognizer development program has now screened millions of candidates. Could you elaborate on how your proprietary binding kinetics and AI tools are accelerating full proteome coverage? What would be possible in terms of applications and otherwise if full proteome coverage is achieved? It's something you really focused on yesterday.
Yeah, yeah, we spent a lot of time on it. If you think about sequencing, everybody immediately goes to DNA. DNA sequencers are looking at four bases, and all four of those bases have the same charge and are fairly uniform in performance. You get into protein, and the alphabet's now 20, and they're very different in terms of their chemical properties. Developing these engineered protein recognizers is a very complex thing. They're not something you can just pull off the shelf; I can't call up a tools provider or a reagent provider and buy these things from them. This is something we've had to develop through protein engineering and directed evolution. In the early days of the company, when they were doing this work, it was a fairly slow process.
You have to have scientists who come up with designs. They have to screen those designs, take those learnings, and try to think through the next design. There are computational tools available that help you. What has really happened over the last few years is two things. One is that the open-source artificial intelligence tools available to help you model proteins and protein structures have just absolutely changed by orders of magnitude in terms of their capabilities. Equally important is not just the tools, but the underlying data we've collected from all that screening.
When you're screening this many candidates, and we have all this data at all these different testing points, all the way through to sequencing, what we've been able to demonstrate recently is that if you take those models and try to just design a new recognizer, the success rate's still pretty low. But if you take that same model and train it on this proprietary data, you can design a binder that our scientists, looking at it, wouldn't naturally design; it would take them a long time to cycle to that. AI helps you get there quicker, and then we can implement it through our screening process. We think that intersection of our data with the most modern AI tools is what allows us to accelerate. We'll go from 14 amino acids today.
We think we can raise that up to 15 by the end of this year. As we talked about yesterday, we'll demonstrate all 20 in 2026 and have that in a kit in 2027. What does it open up? I think amino acid coverage opens up lots of things. You can start to talk about a full range of single amino acid variants. You can talk about richer and richer coverage of PTMs, multi-PTM profiling. I think coverage of amino acids fits in with the depth we can sequence. All these attributes come together and eventually you end at de novo sequencing. This is one of the stepping stones on that path to de novo. I think it really unlocks that PTM space when you have really robust coverage of the proteome.
Leading into my next question, one thing we must talk about as well is PTMs and the various analysis methods that you employ, specifically kinetics, pre-recognition, and direct NAA detection. Can you speak to the specific situations in which each is optimal? Is developing an effective, universal, streamlined layered approach the ultimate goal? Or do you believe a plug-and-play approach across these different analysis methodologies will ultimately be more effective in the end?
Yeah, I wish I had a crystal ball for that second question, Alex. Proteomics is so much more challenging even than genomics. It's hard to say that and have people necessarily believe you, but I think those of us in the proteomics field really understand how complex this is. Many of us came from genomics and have an appreciation for how hard that is. As for the three methods we talked about: kinetic detection uses information in the sequencing data to identify that PTM. That can include what happens at the specific modified amino acid, but also what's happening in the data coming off the amino acids around it. Then you have pre-recognition, which you talked about and around which we showed some data yesterday on phosphorylation, and then direct detection. We see this as sort of a toolkit.
I would tell you, if you work from the end state back, the ideal state is that you have maximum coverage of the proteome, sequencing depth, and amino acids, and you can do as much as possible through kinetics. That's the foundational thing: wherever possible, we want to do it through kinetics, because it requires no additional reagents; it's just in line with sequencing. There can be instances, with phosphorylation as an example, where we've seen the phosphorylation event sit really deep in a really long peptide. Rather than wait for sequencing to be perfect, we can combine sequencing with pre-recognition and get to that PTM. We see that as a complementary tool. It's also really exquisitely sensitive for stoichiometry.
There could be applications where that is an important attribute of the technology, something we want to offer because of a specific application. Finally, there's direct detection; that is just another tool in the kit. We have not really deployed that to date outside of some of the data that Brian showed. If you think about how we would deploy it, we would use the same capability we talked about with amino acid detection to engineer binders that could do that. We are confident we could act on that. Again, I think priority one is to do as much as you can through kinetics. Priority two is the interplay with pre-recognition if you need it to get the combinatorial effect. The last step would be engineering specifically for direct detection.
Looking at commercialization of Platinum, and particularly of Proteus: for Platinum, you've been leveraging this multi-pronged strategy as of late, not just direct sales but also reagent rental and other methodologies. For Proteus, you've spoken about a trade-in program as well as, obviously, direct placements. Could you also leverage one of these multi-pronged strategies for Proteus to get the wheels on the ground, and then decide from there? Is that the strategy?
I think all options are being considered. We will, of course, look to do as much as we can through capital. We haven't really decided what all the options would be, other than, for sure, there would be no reason to have a Platinum user who wants to move to Proteus and not make it advantageous for them to do so. I think the whole value of having had Platinum in the market was to get people working with the technology, excited about it, finding applications for it. For those folks who want to move to Proteus, we're of course going to have a program to make that advantageous for them over someone who's never used the Platinum device.
I think we shared yesterday that we'd roll out a formal program around that in sort of the Q3 timeframe of next year. In terms of will it be all capital sales like it was in the beginning of Platinum or will we have some other models? We haven't decided that yet. I think we'll probably bias ourselves towards the capital sales model. We're staying open to that. We'll see sort of how maybe some of the broader macro trends change and then make a decision on that as we get closer to the launch.
Looking at the medium to long-term technological roadmap, you discussed high-speed scanning technology as well as controlled cleavage, which could be enabled on the Proteus 2.0 version and help you scale to billions of reads. Can you just briefly elaborate on the benefits enabled by controlled versus random cleavage?
Sure. Yeah, so we shared that data. What we've always tried to do over the last two years with these Investor Days is really paint the picture that we understand the end need for the technology, what the market is asking the technology to do in its most evolved and complete state. What we tried to do is plot a technology path that is actionable, can be intersected at key points, and can deliver that. The first move was that we had to get to the Proteus architecture. As I said earlier, there was just no way we were going to scale to a consumable with billions of wells on a CMOS backbone. Doing that on these passive consumables is feasible. And then there's the controlled cleavage, which we shared yesterday.
In our current technology, the cutting and the binding are happening in the same reaction; some people call it a one-pot reaction. That means we are monitoring the reaction the whole time. If you want to get to billions, you want to be able to scan. It's very similar to what people have done in the field of DNA sequencing: they have a larger-area consumable, they scan it, and they're washing reagents in and out, so they're able to control the reaction and how it works. Very similarly, we thought about how we might do something like that in protein. Obviously, scanning systems and microfluidics were well understood.
It really became a question of whether we could make a version of protein sequencing where the cutting is controlled: we put the reagent in to create the cutting event, then remove it and do the detection event, very similar to how a DNA sequencing instrument would work. Now we have control over each cycle, the detection cycle versus the cutting cycle. That is what we showed yesterday, a proof of concept, some technology feasibility around controlling that cleavage. That is what you need to intersect in the longer term with that scanning system. You take the consumable that we are making today and maybe make a different shape or a slightly different form factor, but with the same manufacturing approach, and you can scale up into the billions of reads.
Switching gears again here. On the financials, you ended 3Q 2025 with over $230 million in cash. You filed an ATM as well. How do you view your current cash runway relative to the cost of scaling Proteus? What types of strategic opportunities might the balance sheet position afford you? How do you plan to utilize the capital saved from ending the New Haven lease as well?
Yeah, so maybe on the New Haven lease first: that was a facility that existed from even prior to my arrival at the company. Our Connecticut facility is in Branford, and we have another facility in San Diego. With New Haven, we were really just looking to find a way to work with that landlord, exit that facility, and take that operating cost off our books. By settling with them and paying that upfront fee, we're saving over $24 million. At the end of the day, I'd rather be investing in my scientists than paying for facilities. We don't see ourselves needing to take on another facility in Connecticut. We have a beautiful facility in Branford that's built out, with space for our team and for growth as we need it.
In terms of investment, the cash we have will carry us out into Q2 of 2028. We are very fortunate in that regard; that gives us roughly 18 months post the Proteus launch. I think our mindset in general is to be very capital efficient. We will invest in targeted ways, but we are going to continue to be very controlled in that regard. We have a small commercial team today that we think, plus the channel partners, will be enough to get us off the ground into the Proteus launch. There may be some targeted hiring for that, but we really do not see scaling that until we see the traction with Proteus. In terms of inorganic opportunities, listen, we have a strong balance sheet. We have access to capital.
We've said before on a call, we're always receptive to hearing about opportunities, about technologies or products that could fit well with what we're doing. We're really open to it; I personally take the time to hear those ideas and pitches. Nothing has crossed our desk yet that we thought made sense, and we're going to have a very tight filter on that. We don't want to be distracted. It has to make a ton of sense on multiple attributes for us to do something, but we're definitely open to it and staying engaged in the market. If it was there, we would act on it. Absent that really great fit, we're going to stay heads-down and focused on getting Proteus launched and then driving that commercialization in 2027 and beyond.
That's a good place to stop. Thank you very much for your time, Jeff. Appreciate it.
Thank you, Alex.