Good morning. You're at the first annual Third Conference. Welcome to my adopted hometown here in Miami. We're glad to have you guys from CSP News. I am Mani Foroohar, Senior Analyst, Genetic Medicines, here at Leerink Partners. I'm joined, after some technical difficulty, by Najat Khan and Ben Taylor from Recursion. Welcome, guys. How are you doing?
Great. Thanks for having us.
Yeah. Nice to be in the warm weather.
Yes.
You are always welcome down here. Let's start the conversation with a bit of a strategy question, one that came up on the last earnings call. How do you guys think about the continued rationalization and narrowing and tightening of the portfolio? Should we expect future updates to look a little bit like what we saw at the earnings call, with fairly rapid iteration on what the pipeline looks like on a quarter-over-quarter basis?
No, great question. Again, great to be here. Look, I think from a strategic perspective, we're really focusing on three areas. One is doubling down on those proof points. Second is surgically investing in the platform where it can really lead to that proof point. Third is pairing that ambition with discipline. Just to answer that question, you know, in terms of the pipeline, we have rapid go-no-go decisions across each of the programs. There's a reason for that. We have a portfolio, which means that we want to make data-driven bets where we see the differentiation play out versus not. I think that's number one. Number two, you also see progress in terms of our partner portfolio. We just shared for the first time our joint portfolio with Sanofi.
It's about five programs, where we are designing the molecules for challenging targets in I&I and in oncology as well. The third thing you'll also see in earnings is that we track how the portfolio is doing, the velocity that we're starting to see in the portfolio and the platform specifically. For instance, we are synthesizing about 90% fewer compounds on the way to an advanced candidate and the clinic, in about half the time versus industry standards. I think it's really important for us to be objective and data-driven on the programs, partner programs, and platform. The last thing I'll say, and you heard us talk about the outcomes-based budget: this was a lot of work. We basically zeroed out the budget, and Ben can speak more to it.
We said, in order to meet the milestones and the catalysts we have, what is the fully loaded cost of each area? It really helps to have a more objective investor-like mindset to say, "Okay, if I don't see the data here, I know exactly how much capital allocation I can extract either to extend the runway and/or apply to another program where you have more conviction." That's the approach that we're taking.
Najat hit on a really important point there, because what we're trying to do is use our technology to enable us to do portfolio management in a more classic way. Because we're able to advance programs and get better data earlier, we're able to make better decisions earlier and move programs forward. You look at currently, we've got about seven programs internally that are advancing, and we've got the partnership programs. You can expect us to be making not only quick decisions, but also a lot of transparency as we go through that program and advance it forward. It'll look more like something that an investor would be used to, but all of these programs have something that our platform has contributed that we believe creates a better probability of success.
Let's talk a little bit about how that translates, how that flows through your financial statements from the pipeline. Obviously, you guys are actively engaged with the FDA on your current most advanced asset. Is it reasonable to assume that as individual assets move forward in pivotal studies, under Reg FD, et cetera, we're gonna see tweaks around the margins on how you guys talk about OpEx on a quarterly basis? Or is it more reasonable to expect tinkering with how you talk about OpEx assumptions on an annual basis? How frequently should we expect you guys to be tweaking our expectations and giving us feedback to make sure that we model right?
Yeah. I think what we'll do is we've provided annual guidance. When we're making more significant updates to the strategic plan, we'll give you more significant updates to the annual guidance as well. Otherwise, you know, you can expect that we're working within that guidance to try and execute on all of the different programs that we put forward. For 2026, we said we expect our gross burn to be less than $390 million. That doesn't include any of the inflows from the partnerships. We do expect to get meaningful milestones coming in from those partners. We will treat that, like I was talking about before, like a portfolio management area. That we may take money out of some areas and put it into other areas, but if there's a major update, we'll give it to you.
I think as you said, you're excluding partner inflows.
Mm-hmm.
Can potentially be meaningful, but hard to predict with certainty. Let's talk a little bit about the opportunity and target landscape on the partnership BD side. Obviously, you're trying to inflect those numbers near term and get some leverage off of other people's infrastructure into clinical development. There's been a lot of discussion on whether or not the tech-enabled drug discovery field is, quote-unquote, crowded. I'm not sure what that means. How do you think about the dynamics in terms of the level of pricing power you can demand for your platform, how that evolves over time, and the competitive dynamic with some other tech-enabled drug discovery companies, which are quite capital hungry and so clearly looking to compete for partnership volume with pharma as an attempt to fund themselves in what I think we can agree are fairly choppy macro markets?
Very choppy. Look, big picture, I'll start on that; Ben, please feel free to chime in. I think the most important thing is not partnership announcements, but partnership value realization. When I think about the landscape from that perspective, it's actually not that competitive. There aren't a lot of companies that have actually shown they can deliver, where their pharma partner is actually paying them. Money talks, right? At Recursion, we just crossed over half a billion in upfronts and milestones. Just to give you an example, one is with Roche. I like to call our platform a vertically integrated trifecta: biological AI for novel targets, chemistry AI for small molecule design — we're not in biologics yet — and clinical development AI.
The Roche one was for novel targets in the biological space, and we just received $60 million in milestones back to back for two novel datasets that actually generate novel targets, which we're working on. That's one. One thing that I think gets pretty misunderstood, that makes us different, is not just the integrated vertical AI tech stack, but also the fact that we do all of the wet lab and dry lab work. To make these maps, we made one trillion iPSC-derived neuronal cells and the functional models and the knockouts, and now we're working with Roche Genentech on the functional validation to say which insights are actually causal targets that we will collectively start programs on. Very different. The next thing I'll say is, you mentioned Sanofi.
We just received our fifth milestone from them, where we have five different targets we're working on, around lead series. This is Sanofi's TPP and us delivering on that, and we have more milestones coming up, including a development candidate milestone, which is the trigger to actually onboard it into their pipeline. That's sort of what I tend to track: there's a lot of activity, but where is the impact? Having been on the pharma side for a long time prior to Recursion, that really matters. Can you hold value? The other thing, to your point, is that we also learn from some of the best partners that we have, and these are in areas of neuroscience, oncology, and I&I. Pretty big areas.
Last but not the least, it is also another dual track to validate our platform. We test, learn, and then we scale.
Yeah. Just a couple of points to add onto that. If you think about how we structure our partnerships, we get paid upfront or in early milestones to cover all of our direct costs. That building aspect that Najat was talking about is both a platform and an NPV value that we're building off of, basically without having to use our own capital off the balance sheet to make it. This is really important because about two-thirds of our spend is actually applied to our pipeline and partnership programs. That's including applied development on the tech side of the platform and our experimental work. We really are gaining a lot from the scale that comes with that, but the financials are terrific too.
I mean, those Sanofi programs that Najat was talking about, per program we can get $343 million in potential milestones. $193 million of that is pre-commercial, so it's not a back-loaded biobucks deal. If you think about the royalties, average royalties will be in the low double digits. Really nice, strong financial relationship there. We've advanced those five programs through the first discovery milestone. The second milestone that's coming up is actually a development candidate. That's significant for a couple of reasons. One, it's a larger milestone. Two, it ends our operational obligations, so every milestone that comes in after that has no offsetting expenses to it. It's all gonna be profit that drops down to us. We have room for up to 15 programs on the Sanofi partnership.
The Roche partnership, I almost hate to say it, technically it can go up to 40. I doubt we'll get to 40, but we've got a lot of room in both of those to dive in.
Let's talk a little bit about the underlying infrastructure, that you guys are building these partnerships as well as wholly owned assets on. Obviously, the company had done a little bit of rationalization around number of sites, et cetera. Is there still room to run there, or do you guys feel like you've established like a baseline level of sort of maintenance OpEx, platform CapEx, et cetera? Should we still expect to see, you know, narrowing of the geographic footprint, et cetera?
Yeah, I mean, I'll start, but I know this is close to Ben's heart. Just as context, we reduced our pro forma expenses by 35%, to less than $390 million, and we shared that at a conference earlier this year. Number one, I think OpEx, yes. From a G&A and so forth perspective, we have brought that down significantly. We wanna make sure every dollar is actually going to value creation. I mean, this is operational excellence 101. I'm not saying anything that exciting. We are gonna continue watching that, number one. Expect that sort of discipline to continue. That has to be the case. The second thing that we're also doing is, look, you know, the platform is never static.
In this era of AI, where there's constant innovation happening, the way Recursion has stayed ahead is by actually investing in the frontier areas. You also have to balance that with where it really matters. You don't want a thousand flowers blooming. You wanna back the ones that address the bottlenecks in R&D. We've taken a strategic look at that, and we're gonna be very surgical in terms of where we invest in our platform. At the same time, we're also starting to see some of the velocity and the efficiency coming in from the investments we've already made. Look at some of the stats. Go back to the trifecta. In the biological part of the platform, once you've generated that data — and in biology, one of the biggest issues is that the datasets don't exist — this is proprietary. 40 PB of proprietary data.
That's a lot of data. It becomes a search issue, right? You're just starting to search. You're not doing CapEx investment. You're actually just reusing that to better understand relationships and validate them. Second thing, on the chemistry AI platform, I wanna underscore that again: making only 330 or so compounds to get to a development candidate, an advanced candidate, what goes into the clinic, in 17 months, versus 2,500+ over 42 months, which is the industry standard, and I'm being generous. I don't think it's a trend yet. These are green shoots. People ask me, "How do you do that?" You simulate more, you make less. That's how you also get efficiencies. It goes both ways. We're gonna invest, but we also expect to see efficiencies from what we've already built. Not just efficiencies; that can lead to effectiveness in the clinic.
Yeah. I mean, if you think about it, we're a technology company. We should be getting more and more efficient every single time, in every single new project we take on. As we look forward, we think about how we can make more tomorrow with less cost. To some of those points, our data has become more and more valuable because we're able to mine and build and actually look at orthogonal datasets and orthogonal testing systems to be able to do more in a simulated environment rather than running to experiment. I think another important part is our CapEx spend. I mean, if you look at legacy Recursion or Exscientia, both of them were in the tens of millions of dollars every single year on CapEx.
Last year, as a combined company, we had $6.5 million. We're only gonna have a few million this year, and that's because we've made the investments.
Yep.
We know what is valuable, and we're driving that forward to push programs ahead in the pipeline.
You know, if I can just build on that, you also wanna make sure your investment, especially CapEx investment, is future-facing.
Yep.
We invested a lot in this wet/dry lab loop. Not everyone's talking about it; that's an investment we made years ago. The more important question is how you use it effectively to make the right datasets that are actually useful to generate these novel targets. We're doing that internally, and we're also learning how to do that with the likes of Roche Genentech. Sometimes I get asked the question, "What's the differentiation of Recursion?" You can talk about many things, but ultimately it's not just the data. Data is a huge moat. Most models are based on public data. Having 40 PB of private proprietary data is important; it's also fit for purpose, high quality. Models, yes, that's important. People, very important. Finding people who understand both tech and science is harder than it looks.
It's actually the integration of that vertical tech stack, right? The fact that you can go from biology to chemistry to clinical and back and forth and learn, that's where the effectiveness comes from. That's how you become and produce a more repeatable engine and not just a one-off.
Let's talk about the talent piece of that.
Mm-hmm.
Now you brought it up. I know obviously the debate about the struggle for talent in AI land is eternal.
Yes.
Sadly, no one's throwing $100 million at me.
Yet.
For those who are listening, that would be okay. I think that's cooled down a little bit. As you mentioned, that overlap of technical and analytical skill with an understanding of science, especially with an understanding of drug discovery, which is its own unique art and science, how should we think about the pool of talent, and managing and investing in talent as an asset?
Mm-hmm.
Where are we in terms of the competitiveness of recruiting for that piece of the
Mm-hmm.
technology stack, the labor capital?
Yep.
How are we gonna think about it?
Yeah, you know, previously I'd built an AI team at J&J, which I scaled to about 300 people. One of the hardest things, Mani, was finding what I used to call, and still call, bilingual talent: proficient in both science and AI. Scientists who understand AI. They don't really need to code anymore; they need to understand the interpretation of AI-generated data. That's really important. If you think about it, statisticians and scientists talk similar languages, but it's still not the same case with AI. In reverse, AI scientists who have the humility to understand drug discovery and development and how much of it you can't really engineer yet. Let's just be fair here, right?
To answer your question, there are a few very rare people who have been working in this space over the last decade. A few of us happen to be here by accident. I remember doing my PhD, doing both computer science coding and organic chemistry, and I was consistently made fun of: like, pick one lane. No, actually, innovation comes from the intersection of the two. There are not a lot of people like that. I think what's more rational and pragmatic is to hire folks that have the openness: a drug hunter that has the openness to understand AI and doesn't sit there and get threatened by it, and, let's be fair, AI scientists that actually want to learn about drug discovery and development and the timelines it takes.
It's so much easier if you're optimizing ad revenues and so on and so forth, right? The reward cycle is so much faster. I think the core of how you get those people to join you and to find you has to be the mission, has to be the purpose. There are a lot of people, especially post-COVID, where, let's face it, it touched so many of our lives. Patients and improving patient lives: everybody's got somebody that's a patient, or they themselves are a patient. Third is some of the innovation, Mani, like things like the folds. I like to call them the folds: AlphaFold, dragon fold, whichever fold, right? The fact that you're actually starting to see these green shoots of, hey, I can simulate more and make less. This is something we talked about.
I mean, I get asked the question, "Oh, now you're good at efficiency. What about effectiveness?" I'm like, "Thank you for noticing, because six months ago nobody was saying there was even efficiency." I think these green shoots are as important to talk about with an investor or an analyst as with an employee, a potential employee, 'cause they're looking for the one company that's gonna have the best shot of success, because they have all of the pieces together, the scaffolding is right, and they also have the right purpose and mission. I will say, you bring them in, but the journey just starts there. Getting the teams to come together, not having silos, not having organizational constructs where they compete. It is not a versus.
This is where big pharma — I mean, I can tell you this: even though we had 300 data scientists at my prior company, that was 2% of the R&D org. You can do the math on how big the R&D org was. 2%. How do you win? How do you have that impact when you're 2%? The cultural adoption and the inertia is one of the reasons people don't stay. You gotta recruit them, but you gotta retain them by actually taking both disciplines and saying they're both equal. That's one of the hardest things and one of the things that I spend a lot of time on.
I think recruiting 300 AI data scientists who are characterized by humility sounds like an interesting task. Snark aside, I'm gonna pivot over to the financial side of questions. How do you think about accessing capital? Obviously, the least dilutive capital to ownership is partnership inflows.
Mm.
How do you think about accessing different parts of the capital stack in current markets? How do you think about use of the ATM in the future, equity, debt instruments, partnerships, et cetera? How do you rank them on the path between now and cash flow breakeven?
Sure. Well, obviously we can't comment on future financings, but I'll give you a few parameters on how we're thinking about it, generally speaking. Partnerships, we always hope, will be a good flow of non-dilutive capital in. We obviously have our existing partnerships that I talked about earlier. We're always evaluating new BD and different opportunities. I think part of that also depends on what the pipeline looks like going forward. We are going to be very disciplined, and I think you get to a very different set of options if you have seven successful programs versus if, you know, you focus on the first one, the FAP program, where we had proof of concept. All of that needs to factor in. That's why we've got a very dynamic business model.
We can actually pivot very quickly based on the results coming in from that pipeline and move behind them. Now, you brought up the ATM from last year. We did dip in opportunistically. It looks pretty good right now, based on where everything has gone, and we've got a nice runway that actually goes out into early 2028, which I think puts us in a good spot to hit a lot of the upcoming milestones. ATMs are never meant to be a primary-
Yeah
... financing source, and we're really focused on how you build out the shareholder register with lots of great investors. I think we're also getting to the point where a lot of the biotech investors that traditionally wanted to see data first now have data from the FAP program they can dig into, some early data from CDK7, and lots of interesting green shoots, as Najat would say. We've been getting a lot more attention from that side of the universe, not just the innovation and tech investors that were sort of our crowd five years ago and really drove us on.
I think one of the other topics I wanna talk about here, you've talked about the value of data as an asset.
Mm-hmm.
Talk to us a little bit about where you are in your relationship with Tempus. Opportunistically, how do you think about the role of other, like, transactions to acquire assets, access to data, expand your pool of other proprietary data sets that aren't necessarily available otherwise? How should we think about that, both in terms of that existing relationship and its financial implications, but also that as part of your strategy for accumulating your pool of data assets?
Yeah, I mean, it's a great question. Look, big picture data strategy, whether you're on the biology side, chemistry, or clinical development: there's no one provider that has it all. It's a little bit of patchwork, smart patchwork, having partnerships with the right people to really stitch together the dataset one needs for the programs they focus on. Just as an example, if you're going into a program in ovarian cancer or non-small cell or prostate, there is a variety of different providers that are complementary. We're always gonna be opportunistic in terms of which dataset. Tempus is one, but we also have partnerships with at least seven or eight other providers that don't get talked about, and we're constantly doing that. The other thing is that the space of data providers is also evolving, right?
It's not static. There's a lot of multimodal integration that we're starting to see, because, look, we do a lot of the phenomics, transcriptomics, a lot of the omics data generation. Coupling that with genetics from others, and connecting that to clinical data, where they're also generating transcriptomics, really helps us with that signal to noise. We're incredibly opportunistic. We're gonna stay flexible, and we're gonna stay smart. Another thing I'll say is, look, there's always a question as to how much money you spend with each partner, breadth versus depth. As we have more programs coming in, we're gonna do not just breadth, we're also gonna do depth. That means we have to be smart about how we allocate our dollars. Everybody should be on their tippy-toes. We want the best data.
When you think about depth of access to a partner that's providing a data asset, is that something that we should think about in terms of the length of the relationship as they continue to accumulate the data, or is that a function of just transaction size? Like, what does that mean?
Yeah. When I say depth in terms of the dataset, I'll give you an example. You can either partner with somebody and say, "I'm in oncology," or you can say, "I'm in oncology, in ovarian cancer patients, platinum-resistant; how many patients do you actually have?" This is really important: doing due diligence with data partners the right way. The top of the funnel always looks good. 10 million patients. You start to apply the inclusion and exclusion criteria, and you end up with 20. Right? A lot of the value actually comes from that 20. The top of the funnel is good for sort of broader causal AI networks, but when we're applying it to a specific patient population, you wanna get very specific, where the datasets have high quality and high depth. That's what I mean by that.
Not the length of the partnership per se, but the richness of the data, 'cause there's a lot of data messiness that people are still working through. That's where, for me at least, I judge the quality of the data and what they're doing to actually close out the messiness.
Let's talk a little bit about that dynamic. We've talked about acquiring data assets. Partnerships are, in a way, monetizing your own asset. Let's talk about moat.
Mm-hmm.
I think there's a lot of discussion, and I'm sure it's gonna come up in the panel later, about the extent to which there's investment in building internal infrastructure to enable drug discovery tools at your pharma counterparts, either your partners or those who are not your partners, et cetera. Other than the cultural dynamic you mentioned, which is obvious, how do you think about internal efforts at large pharma as competitors or as complements to what you guys offer as a partner?
Yeah, it's a great question. Look, I will say, I expect the world to be where pharma partners are gonna continue to build, and they should. That actually shows conviction in the fact that leveraging AI, leveraging larger datasets is gonna make a difference, number one. Number two, you know, pharma's always done that. Like, think about any modality, ADCs, siRNA, any other platform. They build their own, they also partner, right? I think it's gonna be a little bit of both. Some probably will be competitive, some probably will be complementary. Again, at the end of the day, the value proposition comes from the integration of the different layers. In large companies, they sit in different organizations. I mean, when I was at J&J, we were one of the few companies that had it all together under one organization.
Organizational construct matters, cultural adoption matters. Then the third thing I'll also say is, you know, the speed with which you can innovate. The reason why you end up partnering with a specific company, not just in AI, but on any other platform, is the depth that they have in that area, right? I mean, the 40 PB of proprietary data we have, that wasn't done in six months. It took time. The design of actually building a wet and dry lab is non-trivial. In some ways, Recursion has one of the longest-standing platforms in this space. You can look at it in many different ways, but one thing I think about: we have made a lot of mistakes-slash-learnings across the board.
You really want somebody who's gone through those reps, who has a lot of reps, and that is also, I think, important and complementary for any organization. I think it's always gonna be a bit of both, and the proof is gonna be in the pudding in terms of whether you actually have better data, whether it's in the clinic or discovery, both the effectiveness and also the efficiency and velocity.
One other thing I wanna add on. I love the question of whether there's enough space in drug discovery for everyone to be competing.
Yeah.
I mean, about 3% of the genome has an approved drug. Around 10% has something in development. That doesn't even factor in the diversity of proteins that come off of that genome. We are just scratching the surface. The reason we have a 95% failure rate in the industry is because we don't have enough data. We don't have enough ability to make predictive models and really search and understand biology and understand chemistry. We're actually just starting to step into the much, much bigger part of the industry that has been primarily untouched.
That's also part of the problem with the public datasets: not only do they have a lot of different ways of annotating the data, which makes it really hard to use in machine learning, but they're also focused on that 3%. You're gonna keep going down that same hole unless you come up with some new ways to explore the rest of the space.
Most of the models that exist — 'cause, you know, anything that's open source, we can bring it into our platform, leverage our data, refine it, and use it in a matter of a week. Once you have that infrastructure, you can do it rapidly. What we have found with most of the models is that they don't work well in out-of-domain areas. They might work really well with kinases, but you try to go into other target classes and it doesn't work as well. Somebody's gotta do the work to actually generate that data and be hyper-focused on it. Once you have it — I mean, think about some of the other AI companies that have grown rapidly, OpenAI, Anthropic, et cetera; that's based on the corpus of data from the internet.
We don't have a corpus of data in biology, chemistry, or even in clinical development. Somebody's gotta build that road before you can actually build a good car to drive on it. You have to do both at the same time. That's why we have both the portfolio and the platform strategy, but you have to be very smart about capital allocation.
Awesome. With that, we're now over time, and I'm being given the get-off-the-stage signal. Thank you so much.
Thank you.
I look forward to continuing this conversation.
Thank you.
Thank you.