All right. We can start a minute or two later.
Sure.
Are you guys doing a Q&A style thing, or what do you usually do?
I have some questions prepared. If you guys have questions, absolutely hop in. Do you have any?
I mean, I've got some stuff I'd like to know. I don't know. It's up to you. I'm just kind of looking at the room here.
No, for sure. I mean, I think I'd much rather talk about football, but I'll throw it out to you guys. My Broncos are finally in it, so.
I'm an Arkansas fan, so.
All right. All right.
I think if you guys prefer to do the discussion-style stuff, we can go to you.
The scripted stuff is really just to fill the time and give you guys the things you need. If you have questions, I highly recommend you butt in and ask whatever you need. Honestly, I don't know where the AV guy is; we should have started by now, so you can ask away if you have something.
I'd love to understand the long read versus short read question. I don't spend as much time on this space as everyone else, but there's always been this idea that we're going to transition more towards long read over time. I get the sense maybe that's happened a little slower than expected, or maybe the short read guys have gotten a little better at what they do; I'm not sure. Long-term, do we really move more towards long read? What drives that, and how should we think about it?
Sure. It's a great question. Maybe I'll step back and talk a little bit about the industry overall, short read and long read, and then about why we think long read is so differentiated in the market, and some of the data coming out now that proves it. High level, the way we look at it, genetic sequencing is a $6 billion-plus market. Illumina is the big gorilla; they have 80%-plus of it. We have 2% of it, because long read has historically been more expensive, slower, and there hasn't been as much data and software out there to support its advantages. When Christian joined about four years ago, he set about doing a number of things.
The first was to create a complete end-to-end product portfolio, from sample all the way to software. The company really focused on that, because it was one of the barriers to entry: PacBio did not have a complete solution set for long read. Over the last three or four years, we have released Revio and a number of solutions, Spark, new chemistries, and others, to help from sample prep all the way to the software with the variant calling. That was one of the big impediments to long read taking off, and we believe we have addressed a number of those things. Spark, the chemistry we released recently, enabled a lot more sample types to be used and a four-fold decrease in the amount of sample you need to actually run a sequence.
We also created software that allowed variant calling. With Revio, we increased throughput, and that was a big differentiator; we got into a lot more studies when we did that. Another barrier to entry was that people wanted to understand the true value of long read: what can you actually see with it? We always believed that if you can see more, you're going to be able to discover more. That was the premise, but we've been saying the data wasn't out there yet to prove it. Most recently, even in the last two or three months, some really important data was released.
The first was the All of Us study, which produced data showing that with long read they were able to see everything you could see with short read, plus 50% more of the structural variants that are important and medically relevant. That's huge when you think about it: a big study that ran only 8X coverage, not even the 20X coverage we provide today, was able to find 50% more of what we believe matters for discovery and diagnosis. The second was a HiFi consortium in Europe that recently published data showing they were able to find effectively 100% of the structural variants that mattered, of which 25% couldn't even be seen with short read. The data is starting to come out now.
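As an editorial aside, the "8X" and "20X" coverage figures quoted above have a simple meaning: Nx coverage means each base of the genome is sequenced N times on average. A minimal sketch of the raw data volumes implied, assuming a roughly 3.1 gigabase human genome (the genome size is an illustrative assumption, not a figure from the discussion):

```python
# Sketch: what "8X" vs "20X" coverage means in raw data terms.
# Assumes a ~3.1 Gb human genome; the figure is illustrative.
GENOME_SIZE_GB = 3.1  # gigabases, approximate human genome size

def data_needed_gb(coverage: float, genome_size_gb: float = GENOME_SIZE_GB) -> float:
    """Total sequenced bases (in Gb) required to hit an average coverage depth."""
    return coverage * genome_size_gb

# The All of Us long-read data discussed above was at 8X coverage;
# the standard whole-genome offering described is 20X.
print(data_needed_gb(8))   # ~24.8 Gb of sequence for 8X
print(data_needed_gb(20))  # ~62 Gb of sequence for 20X
```

The point of the comparison is that the 50% gain in detected structural variants came even at well below the standard coverage depth.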
Because of that, we've had some clinics converting over to going all in on long read. Radboud, which is in the Netherlands, found that of the variants you could not see with short read, they could find 93% with long read. They are fully converting their rare disease diagnostics over to long read. Customers are now seeing the value of long read. The other big impediment is cost: long read has simply been historically more expensive than short read. That is why we think the Spark Next announcement we made at ASHG is so important, because for the first time ever, we are getting close to cost parity with short read.
Cost has been a clear barrier to entry for us for a long time; even at $500-$600 a genome, that was still too much for certain labs to convert over. Now, with Spark Next, we are talking about potentially $300 a genome. Not only can you see things you cannot see with short read, and diagnose things you cannot diagnose with short read without going to reflex or other tests, we are now also getting close to cost parity. That is why we think it is going to drive a big switch, and why we feel like we have the wind at our back as we move into truly differentiating. We have 1%-2% market share now, but we have a full product portfolio.
We've just released a solution that allows you to use almost all sample types, and with Spark Next we're going to be getting close to cost parity with short read. The barriers to entry are being removed. We're now talking to customers that wouldn't even talk to us before because of cost. Some of the big PopGen studies that wouldn't consider us are now talking to us about their entire studies; that wasn't happening before the Spark Next announcement. We think we're really at a big inflection point for adoption, growing from the 1%-2% we have now to carving away a meaningful percentage of the market. It doesn't even have to be a huge share; a couple of percentage points taken from Illumina would be significant. That's, at a high level, why we think we're at an inflection point for really establishing long read.
When you say you're at cost parity, is that on a per-base-pair basis? Because when we talk about Illumina doing a whole genome, you talk about it at some X coverage. I may be mixing up some of the numbers, and I think you just talked about an 8X. Is that sort of the comparison?
Sorry, I'm throwing a bunch of numbers out. The 8X coverage was in reference to the All of Us study: they found you could see 50% more while only covering the genome effectively eight times. The cost parity point is a slightly different comparison. I'm going to move up close so people can hear me. Right now, before Spark Next, we tell people it's about $600 a genome at 20X coverage. To get to what we believe is cost parity, and you hear different prices from different people, $200-$300, Spark Next allows us, for certain customers, to potentially price close to what they can get with short read.
We're not going to price it at complete equivalence yet, because we believe we offer a lot more value. We give the whole genome, and we give not only the gene sequence but also epigenomic information, methylation, on the same read; you get that for free. We believe that's a huge differentiator. We never want to be at true cost parity, because we don't believe we should be at parity with short read when ours is so much higher value. Ultimately, what this gives us is the ability to compete in a way we couldn't before and win accounts we couldn't win before. That is why we're so excited about Spark Next.
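The cost-parity argument above can be laid out as simple arithmetic. A sketch using only the approximate per-genome prices quoted in the discussion (the short-read range of $200-$300 reflects the "different prices from different people" comment):

```python
# Sketch of the per-genome cost-parity comparison described above.
# All figures are the approximate numbers quoted in the discussion.
LONG_READ_PRE_SPARK_NEXT = 600   # $/genome at 20X coverage, before Spark Next
LONG_READ_SPARK_NEXT = 300       # $/genome potentially achievable with Spark Next
SHORT_READ_RANGE = (200, 300)    # $/genome; quoted prices vary by customer

def premium_vs_short_read(long_read_price: float, short_read_price: float) -> float:
    """Long-read price expressed as a multiple of the short-read price."""
    return long_read_price / short_read_price

# Before Spark Next: 2x-3x the short-read price depending on the quote.
print(premium_vs_short_read(LONG_READ_PRE_SPARK_NEXT, SHORT_READ_RANGE[1]))  # 2.0
# With Spark Next: at or near parity with the top of the short-read range.
print(premium_vs_short_read(LONG_READ_SPARK_NEXT, SHORT_READ_RANGE[1]))      # 1.0
```

The gap closing from roughly 2x-3x to roughly 1x is the "barrier being removed" referenced throughout the answer.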
When you talk about, for example, rare disease and that extra information, whether it's the methylation or otherwise: if I were to do that today on an Illumina machine and then switched over to your machine, can you help me understand what other insight we're getting from that test?
You're talking about the methylation?
I just mean sort of the other things you're getting by running long read on, say, a rare disease where today you're using short read. If I switched to long read, what is the other information? Just help me understand.
I'm not going to pretend to be a scientist here, so I won't try to answer that directly, because I'll do it wrong. Let me instead tell you what people run long read to diagnose that they can't diagnose with short read. For instance, things like the ataxias, or HLA testing in China: Berry Genomics just got an approval for one of our Sequel IIe machines to do that, because you can't effectively see it with short read. Some of these things you simply can't see with short read; with long read, you can actually see them and diagnose them. Labs doing this today, because they can't see it with short read, use lots of different legacy tests to try to figure out the answer.
What we're effectively saying is that because you can see huge structural variants or large tandem repeats using our technology, you can diagnose or identify these conditions, and you straight up can't do that using short read; you have to use another type of test. That's why our premise is that people will ultimately end up with the whole genome as the test backbone. We believe it's much better for clinicians and others to get the whole genome, see all pieces of it, see if they can answer a question there, and go back and look at it again if they need to. With short read today, you can't even answer some of those questions, or you have to use different legacy tests to do it.
That's what we believe is the big differentiator for us going forward, especially with centralized labs. That's the premise as we talk about moving more into the clinic, especially in areas like the US where a lot of the labs are centralized. Along those lines, we released Pure Target this year, which is really about targeted tests for things that short read can't see, for instance the ataxia assay that Athena Diagnostics is using. We're doing carrier testing because there are certain things, Fragile X and others, that you can't identify using short read. That's the long read differentiation there. What we're ultimately excited about is that as more customers use it, they're going to find even more things you can't see with short read. That's our fundamental belief.
As people start doing more and more studies and start attaching the methylation, they're going to discover a lot more. That is the exciting piece: all the things we're not seeing today that, as more people use long read, they're going to see tomorrow and start identifying and diagnosing. That is why we feel we have the chance to get well above the 2% market share we have now.
What are the major things that you have to get right to succeed, to reach that end state you talked about, where maybe you start with long read to get all that information and it's in the clinic? What are the big buckets of things you have to get right today to get there?
A few things. One, we have to just get more people using long read. One of the metrics that matters to us is new-to-PacBio customers: people who have been sequencing before but have never used long read. Right now, about 60% of the people buying our boxes, whether Revio or Vega, are new to PacBio, which means more people are using the tools. More discoveries and more clinical proof are starting to happen, and that's something we need to continue to encourage. Another big piece of it is software. Quite honestly, most of the software and the solutions were created on short read, and if you think about it, those are incomplete data sets.
Most of the software and the whole genomes, the reference genomes, are incomplete right now. We need more of the bigger PopGen studies, groups like All of Us and the Long Life longevity study we just won at WashU, to come out and publish, so you can start to see more of what these variants are and people can start to identify them. As more of that comes out, we believe we'll see much more of a transition to long read, and we have to encourage that. We'll be investing more over time in the software solutions, in the chemistry to help people run more types of samples, and in developing the databases that allow people to figure things out.
Before PacBio, I spent a lot of time at Apple and some other places running some of their big data teams. One of the things I'm really excited about, one of the big unlocks for us quite honestly, is what people are going to find when they start putting the new AI models on big data sets like ours. I think something really big and important for us over the next two to three years is that people will start to realize they need to go back and enrich their short read data sets with more complete data sets. That's our data; that's what they need to enrich them with. The discoveries, I think, are going to be profound once that starts happening.
That's another thing that needs to happen: folks are going to look at the data sets they have today, conclude they're insufficient to answer everything, and go back and start enriching them. I think they're going to enrich them with long read information like our tools provide, and they're going to come up with some pretty profound conclusions. People need to apply a lot of this compute power to our data sets.
How does Oxford fit into that? Are their data sets helping enrich your data sets to create this bigger data set, or is their data slightly different than yours? I'm just trying to understand: is it synergistic for the two of you?
I would say 100%. We have slightly different strengths and weaknesses, but I was actually having this conversation with their CEO at ASHG: we think it's good for both of us. I honestly don't care if it's their data or our data enriching these data sets, because ultimately, the more data out there, the more solutions can come to bear and the more people will see the value of long read. I believe it's synergistic right now. Now, I would much rather sell to a customer than have them sell to that customer, but ultimately, whoever is publishing the data, whatever people are finding is good for both of us.
They have a slightly different approach than you. How do you think about that? When do you win and when do you lose against Oxford?
It depends on the use case. They seem to be much better in infectious disease and some of those areas where reportability matters, while we think we are winning and seeing a difference in rare disease and carrier testing, where reproducibility and high accuracy matter. We're each carving our own path: they're carving it much more in infectious disease, and we're carving it much more in rare disease and carrier testing. Both data sets, as I said, are going to be enriching, and where we cross over, the discoveries are going to be propelled by both of us. I believe we have a distinct advantage in rare disease, carrier testing, and genetic oncologies, some of the things that Children's Mercy in Kansas City is doing right now and showing how advantageous we are.
I do believe we will be, hopefully, a big player on that over the years to come.
What are some of the really big applications that are just in the early phase today, whether it's like the MRD stuff or the multi-cancer detection? Do you see yourself ever playing a role in those areas? I know those tend to be pretty small fragments, I think, but is there a place for you, or?
I think ultimately the data will show it. There are already some studies, Christian has them memorized and I do not, some recent cancer studies showing we're much better at identifying certain things that exist in the MRD tests. Right now, though, we're not focusing on that. That's clearly an area short read is addressing right now, and they seem to be addressing it well. We're focused on the areas where we have a distinct advantage: like I said, some of the genetic oncologies, the leukemias, the other things Children's Mercy is working on. That's where we're really going to focus our efforts right now.
Now, I would love to see, over time, people find the value of long read in these other areas where short read is winning right now, but we do not need that to happen for us to be successful. Like I said, there are some good solutions out there that already exist for that.
Jim, maybe give us a sense: when you're talking to US academic customers, what are they telling you about their expectations for 2026 funding? Are institutions starting to plan new Revio or Vega purchases, or are they in wait-and-see mode until there's clarity around 2026 budgets?
Good question. One of the things we talked about on our Q3 earnings call is that one of the reasons we're excited about Q4 is a strong Revio pipeline. One of the things we're not banking on right now is a release of academic funds; we have lots of academics talking to us and wanting to purchase Revios, but we're not baking any of that into how we're thinking about Q4 or potentially even next year. The reason is simple: we would sell 10 Revios a quarter into academics in 2024, and we've been selling about one a quarter this year. That's why we're so heavily focused on the clinic and hospital segments right now. We're not counting on academics; it would just be advantageous to us if that spending turns back on.
We still are absolutely talking to academic centers about their interest in Revio, but it's not something we're baking into our forecasts at all.
Got it. Maybe just extending on that a bit: when you look at the Revio pipeline, how would you characterize the near-term opportunities? Is it largely clinically focused right now? Are these mainly new-to-PacBio customers, or existing users looking to expand their fleets? How does it shake out?
It is both, and the reason I can say that is that's what the data has been showing. About 60% of our Revio and Vega shipments over the last couple of quarters have been to new-to-PacBio customers: they have sequenced before but are new to long read. We expect that trend to continue; we do not see it necessarily changing. However, we do have customers, for instance some of the testing labs buying things like Pure Target, talking to us about potentially expanding their fleets as well. We have a mix of both. As we start thinking about our 2026 guide, we will figure out how to factor that in. Right now, it feels like we are going to be 50%-60% new-to-PacBio customers.
As I've talked about, that's very important to us for establishing long read.
In Q3, a few of the instruments you sold were at a lower ASP, trading price for potentially higher volumes with those specific customers. I think you talked about ASPs maybe snapping back and remaining stable, but, at a higher level, is strategic discounting something you're willing to offer more regularly to expand that footprint? How do you think about it internally?
Yeah, there were some incredibly important anchor accounts that the team had been working on for a year-plus, and those deals came in in Q3. These are deals where we anticipate higher utilization over time. They're in rare disease and diagnostics, places where we believe demand is much more predictable and much more apt to run at a higher utilization percentage than our historical base, especially academics. These are areas we've targeted, and, once again, these were accounts we'd been trying to get into for a while.
Most of the accounts we're talking to now are not like that, which is why we anticipate the ASP snapping back, especially with some of the accounts in Europe and elsewhere where ASPs are much more normalized. We made a push in certain of these accounts to do that. Once again, we're not guiding on 2026; I don't know if another set of those will pop up in 2026 or not. Ultimately, this is a consumables business. My desire is to get consumables above 50% of product revenue and keep it there. That's how we ultimately expand our gross margins and get ourselves cash flow positive.
I'm always willing to listen to some of those strategic placements when they come with, I can't say a guarantee, but with increased certainty that the customer is going to be consuming higher volumes of our chips and reagents.
That makes sense. Maybe on the Vega funnel, clearly that's resonated with academic customers just given the backdrop and kind of the lower ASP there. When you kind of think about the opportunity for Vega, is the real opportunity still smaller academic labs that you're funneling into the long read ecosystem, or are you also seeing demand from hospital customers or clinical customers? I guess, what does that mix look like?
Once again, it's all of the above. I'll give some examples. Vega has been the right instrument at the right time, especially in the Americas, because of the capital constraints we've been dealing with. We had our best Vega quarter yet in the Americas in Q3, and a lot of those customers were new to PacBio. They were a whole range: some academics, some biopharma, some hospitals. Let me give you an example of where we think Vega could get a lot of traction outside of where we normally sell. We announced the NMPA approval of the Sequel IIe in China for Berry Genomics. They're using Sequel IIe for thalassemia, but they've already expressed interest in Vega being the perfect box for that going forward: higher coverage, higher throughput.
That's an area where you may find Vega moving straight into the clinic, which is different from how people typically use it today, where it's more for targeted panels, infectious disease, or microbiome tests. We're excited about all the different use cases for Vega. We see it as the entrance into the Revio environment: a lot of these customers are either buying a Vega to supplement a Revio or may, over time, upgrade to a Revio as they build in more use cases. It really is our entry box and, like I said, it's bringing a lot of people into the long read ecosystem.
Got it. On that point of it being, for lack of a better term, a feeder instrument into Revio over time: how do you think about the timeline of that conversion playing out? I know it's still relatively early in the launch, but how do you think about customers scaling from Vega to eventually Revio?
Great question. We have not had a lot of discussion about what the conversion from Vega to Revio would look like. Vega is still new, so we do not even understand the pull-through on Vega yet. I think the question will be: when do their throughput needs exceed about 200 samples a year? Beyond that, they cannot do it without a Revio, because Revio is about 2,500 samples a year. We do not have enough examples yet to predict that, but we absolutely anticipate it happening, probably in the next year or two, where people will start upgrading into the Revio line or the Revio ecosystem.
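The two throughput ceilings quoted above (roughly 200 samples a year on Vega, roughly 2,500 on Revio) imply a simple sizing rule for when a lab outgrows its Vega. A minimal sketch that just encodes those two quoted numbers; the decision function and its name are illustrative, not a PacBio tool:

```python
# Sketch: when does a lab's annual sample volume outgrow a Vega?
# Capacity figures are the approximate numbers quoted in the discussion.
VEGA_SAMPLES_PER_YEAR = 200
REVIO_SAMPLES_PER_YEAR = 2500

def suggested_instrument(annual_samples: int) -> str:
    """Naive sizing rule based only on the quoted throughput ceilings."""
    if annual_samples <= VEGA_SAMPLES_PER_YEAR:
        return "Vega"
    elif annual_samples <= REVIO_SAMPLES_PER_YEAR:
        return "Revio"
    return "multiple Revios"

print(suggested_instrument(150))   # Vega
print(suggested_instrument(1000))  # Revio
```

The wide gap between the two ceilings (more than 10x) is why the speaker frames the upgrade as a question of when throughput needs cross the Vega ceiling, rather than a gradual trade-off.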
Got it. So it most likely makes sense to go from a Vega to a Revio rather than buy another Vega? Or are multi-placements an opportunity there too?
Multi-placements are an opportunity, and I'll use China as the example again, where the medical system is distributed. They're currently buying Sequel IIes, but they're also expanding with Vega boxes distributed all over China, where each site isn't doing a ton of samples but they want localized solutions present. That's the use case we'd see more of. Now, you may have certain labs with different use cases where they want two Vega desktop boxes available, but we haven't talked to a lot of customers yet about buying multiple Vegas within the same lab.
Okay. You may not answer this, but I figured I'd give it a shot. As we kind of think about the outlook from here and what you've done this year, let's say, looking into 2026, throughout the year, you've placed, let's call it between 10-15 Revios. Vegas, in general, have been growing. You guys have talked about the funnel being healthy there. Assuming the backdrop stays the same, is 10-15 Revios a quarter a realistic expectation? Can Vega continue to grow?
I'll preface by saying we will not guide for 2026 yet. Now, let's talk about some of the things that we're excited about, starting with Revio. The first is the Spark Next launch. We've had a lot of customers come into our pipeline now, because the fact that you can do a genome for about $300 makes the Revio a much more attractive box for them. All of a sudden, customers that weren't talking to us before about buying Revios are talking to us because of the Spark Next launch. That's a reason to believe we can continue our Revio velocity.
The other thing is, like I said, as more data gets published on the value of long read, we can convert more folks from short read to long read. That 2,500 samples a year is a nice sweet spot, especially in areas like Europe with single-payer systems. You've got groups like Radboud announcing, "We're going to bring all of our rare disease testing over to this," and that's happening more and more in Europe. Those are things that encourage us that Revio still has a good life ahead of it. Now, with Vega, Christian will say he sees it as a 40-plus-per-quarter box. We just have to get it out there and get the long read acceptance to happen.
Once again, we don't know what we're going to say about '26 yet, but this will be our first full year since the Vega launch, and it feels like 35-40 a quarter is about where it's running right now. I'll be interested to see what the field thinks it can do next year when we start to think about the guide. Christian feels this should easily go above that in the future, especially as long read acceptance grows.
On Revio utilization, what are you seeing in terms of pull-through from the early adopters? Is that still growing? I know that as you layer in new systems, it dilutes the utilization metric and makes it hard to see what early customers are doing. Is that continuing to scale higher, or has it plateaued?
We've been bouncing in the mid-to-low 200s for the entire year, and we had a really nice pull-through this quarter at 236. One of the interesting things for us is that we believe the Spark chemistry release will continue to increase utilization. The reason is more sample availability: you can run different types of samples. We've talked about the fact that you basically just need one more run a month on the Revio to get pull-through to $300,000, so we're trying to do everything possible to make it easier for our customers to do one more run a month. That's what Spark and Spark Next are about. The Spark chemistry, which we're continuing to improve, increases output by 30% over the previous chemistry.
We're showing people you can get more data while needing 4x less sample volume: not only more samples, but less sample required per run. In addition, once we release Spark Next and actually get costs lower, we believe that will be another reason utilization increases. Now, we have to balance that with pricing, which is what the early access program is about: how do we balance pricing with sample usage? We believe that's another thing that can really propel utilization higher for our Revio customers. Fourth is just more data: the more data out there and the more things people can figure out from it, the more we believe utilization will continue to increase.
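The "one more run a month" comment above is a small piece of arithmetic worth unpacking. Assuming the quoted pull-through figures are annualized dollars per instrument in thousands (236 today, 300 as the target), the implied revenue of that one incremental monthly run can be derived; the per-run figure below is a derived assumption, not a stated price:

```python
# Sketch of the "one more run a month" arithmetic described above.
# Pull-through figures are the quoted numbers, read as $/instrument/year;
# the implied revenue-per-run is a derived assumption, not a stated price.
CURRENT_PULL_THROUGH = 236_000   # $/instrument/year, the "236" quoted this quarter
TARGET_PULL_THROUGH = 300_000    # $/instrument/year goal

gap_per_year = TARGET_PULL_THROUGH - CURRENT_PULL_THROUGH
# If one extra run per month closes the gap, each extra run implies roughly:
implied_revenue_per_run = gap_per_year / 12

print(gap_per_year)                    # 64000
print(round(implied_revenue_per_run))  # 5333
```

Under those assumptions, each additional monthly run only needs to carry on the order of $5,000 of consumables revenue to close the gap, which is why the company frames the target as "one more run a month" rather than a large utilization jump.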
Yep. Maybe on the pricing elasticity dynamic there and the potential impact on revenue. There are some differences here, right? For you it's more about converting samples away from an existing market than growing the market, like we're used to in these situations. How are you expecting the impact of the new chemistry to play out? Do you expect volumes to offset price near launch? Should we be anticipating some sort of price-driven headwind to growth? How should we think about it?
Yeah, that's something we're thinking very deeply about right now, and it's part of the reason we're doing the early access. Let me talk a little bit about the early access and beta program we're running now. We've opened it up; we had about 100 customers interested after we announced it at ASHG, but we're only doing it for a handful of customers. They're all paying for it; they all see the value. One of the big things we're trying to figure out is not only whether the data quality is consistent and our customers are happy with it, but whether it changes sample volume and utilization. Can we encourage them? Because it is time-bound: you have to use the chips within a certain amount of time. Do we actually increase utilization?
Part of the reason we're trying to figure that out is that the solution was really created for high-volume customers; it is volume-driven. This is the solution for our highest volume customers. We are talking to PopGen studies we have never talked to before, because they now see a path to doing this in a much more cost-effective fashion. We could, and Christian talks about this, be in early access all through 2026 while we try to understand the impact on utilization of the Spark Next chemistry and the multi-use model, because we will be offering both multi-use and single-use chips. We have to be very careful, and that is the nice thing about having both Christian and Mark on the team: they lived through this at Illumina numerous times, so they understand how careful we have to be.
We want to make sure that our customers are happy with the data quality, that the clinics are comfortable with the multiplexing and the barcoding, and that we are comfortable utilization is increasing as we release it. Ultimately, this is going to be the solution we use for everybody, but we have to make sure we've converted enough people before we fully release it to all of our customers.
Got it. On the PopGen opportunity, I guess, how has the funnel of future programs tracked over the last 12 months? Has the announcement of this chemistry accelerated any of those conversations to some degree?
Yes. I'll talk about three of the PopGen studies we won. We just announced the Long Life study with Washington University in St. Louis, which is over 7,800 whole genomes. They'll be doing both genome sequencing and methylation, and potentially transcriptomics as well. Huge, huge win for us. There's the Korean Pan Genome Study, which will be kicking off in 2026, and the Thailand carrier screening, or the newborn screening initiative, as well. Those were all kicked off with just the Spark chemistry. Now, once we launch Spark Next, a number of the larger PopGen studies we've been talking to, which historically would talk about running 5%-10% of their study with us, are talking about running 50% or more of their studies with us, with the pricing we can give them with Spark Next.
It was a handful of those before. Now, Christian and Mark are talking to a number of those initiatives about what we can do and how we compete. As they've talked about it, the pipeline for PopGen studies has never been stronger since we announced Spark Next.
Got it. Maybe a bit early to ask this, but what are your expectations around how fast customers transition? If the data is good and they can work through the multiplexing, what are your expectations about converting their volumes?
It is interesting. With the Spark chemistry, it took about a quarter to convert our customers fully. You could see high-volume customers convert in a quarter or so. Now, what we have to figure out, once again, in order to control that, is making sure the volume commits are in place and the chip volume is available for them, because these new multi-use chips are hard-to-produce chips. It could happen as quickly as a quarter, like with Spark. Once again, as I said, we are going to be very thoughtful about how we release it to all the customers over the course of the year. Who we release the new chips to will probably be based more on deals and deal size.
That is how we're going to be sort of controlling access to that.
Got it. How are you thinking about the path to, I guess, extending reuse beyond two runs?
I think ultimately we'd love to land somewhere above that. We think that's an important place for us to be. That's really part of what the early access and beta program is about. Right now it's one use and one reuse, so two uses total. I think a lot of our customers, and we ourselves, have indicated that three or more would be where we'd want to land. We haven't decided where we're going to land on that pricing. Once again, you do eventually run out of, I think it's the biotin in the wells. There is also a limit on how long you can use the chip once you start using it. We have to figure all that out through the early access program.
Hopefully by the tail end of next year, we'll know where we're going to land on that. Right now it's two. We're not committing to anything more than that with our customers. They're still really excited about it even at just two uses.
Got it. Yeah. When it comes to Vega pull-through, is that something we could maybe get with 2026 guidance?
That'll be interesting. It's something I would like to provide. I'll be curious if we have enough data to provide that in a way that's consistent. Once again, we're going to start regrouping before then. I'm not sure if we're going to guide at JPMorgan or not, but we'll have a full four quarters. One of the things we struggle with is that it takes most of our customers about three months to get up to speed using a new tool. We won't even have a year of full usage data for the majority of the Vega owners by the time 2026 begins. I'm curious if we will or not. I'm not sure.
Okay.
Yeah.
Got it. I will pause here to see if there are any other questions before I start wrapping this up.
Can I ask another question? I know it's a basic one. You talked about getting the whole genome down to $300. If you do it at $300, what's the difference in terms of the output, the data, the insights? If we're going to do a whole genome, I'm just trying to understand that. If it goes to $300, what do you actually get that's different?
You're getting beyond my expertise, but based on my simple understanding: one, you don't actually get a whole genome with short read. I think that's the first big difference; you just don't. There are whole areas of the genome they can't look at based on the technology. We're talking $350, going below $300, for a whole genome, and we believe that's 99.9% of the genome. That's not something short read provides today. I think that's the first basic difference. When I talked about those studies, there are just things you can't see using short read. That's the biggest differentiator. I'm not proficient enough to comment on the difference between 20x coverage and 30x coverage and the different types of results. Mark or Christian would be better equipped to answer that.
What I can say with complete certainty is you just can't do the things with short read that you can do with long read. That's why different groups are trying to come up with different solutions to bridge the gap, to figure out a way to get to long read. Illumina has a whole product line trying to figure out how to get there, because they realize and understand the limitations. You need a scientist, or Christian, to distill that down.
Yeah. Maybe as we look at gross margins from here and we think about some of the improvements, when you rank order the drivers, is it largely chemistry? Are there unit economics on the instruments that you can still capture? Is it mix, higher consumable mix? How do you think about the margin opportunity from here?
Yeah. I think the easiest thing for us to do first is just more consumable mix. That's part of the reason we had such a great Q3: we had a higher consumable mix than we initially forecasted. That's the biggest driver. Over time, the more stable our consumable utilization gets, and the larger the percentage it makes up, the better off we're going to be. We're pushing hard into that area, which is why clinical accounts are so attractive to us. Two is we've basically fully ramped the Revio now. We're just now experiencing the most cost-effective production of the Revio, and the more units we put through the factory, the better we get with Revio cost. Three is we've also just gotten the Vega fully onto the production line.
We were paying a much higher price to build it on the R&D line; now we're on the full production line, which is much more efficient coming into Q4 and going into next year. The fourth piece is the SMRT Cell: right now we're experiencing some of the best yields we've ever had historically on the 25M chip. If we can continue that, that's also very helpful for us moving into the future. Lastly, anything we can do through chemistry or otherwise to get more output or enhance the value for customers allows us to hold ASPs higher. We're going to continue to push on advancing chemistry and also software. That's part of the reason I think, right now, if you look at short read, it feels like there's a race to the bottom there.
We feel like as long as we continue to produce higher-value software and higher-value chemistry, we're not going to be pushed into that race to the bottom. The reason we're pushing our costs lower is that not only will it increase our gross margin through SMRT Cells, it allows us to gather more market share and establish our value as a premium product over short read. As long as we advance chemistry and SMRT Cell technology, we can still command that premium over short read.
Got it. You've made a lot of progress reducing your OpEx run rate. I'm not sure if it was formally reaffirmed on the Q3 call, but I think at least on the Q2 call, you guided $235 million to $240 million in OpEx for the year and said that you continue to expect 2026 adjusted OpEx to be lower than in 2025. Is that still the case?
Correct. Yeah. That's an area we're extremely focused on. I think we made some important and good decisions to just focus on long read in 2025. Having reduced our OpEx, we're still at that spend rate. We're still able to invest in the future of long read while controlling our OpEx and being more efficient with it. That's something I believe we're going to carry into our long-term guidance: the expectation that operating expenses are not going to increase.
Got it. The goal of becoming cash flow positive by the end of 2027, what do you view as the main levers to get there?
It's what we've been talking about here today. We've got to continue to penetrate clinical and get growth started again on our top line. We've got to be successful with the Spark Next launch so that we continue to get robust gross margins. We've got to maintain ASP discipline on our instrument sales; though we'll make certain decisions like we did on some of the strategic accounts, we've got to maintain discipline on our ASPs and pricing. We've got to control OpEx. If we can start to see PureTarget take off, Spark Next bringing over new samples, and more Revio use cases built in, and we control our OpEx, we still believe cash flow positive at the end of 2027 is very realistic for us.
Perfect. All right. We can stop there.
All right.
Appreciate it, Jim.
Thank you. Thank you, gentlemen.