Absci Corporation (ABSI)

R&D Day 2023

Oct 4, 2023

Alex Khan
VP of Finance and Investor Relations, Absci

Good morning, everyone. On behalf of our team at Absci, I'm thrilled to welcome you all to Absci's first R&D Day. We have an exciting program of presentations, speakers, and updates that we are pleased to share with you, and I'd like to thank everyone, both here in the room in New York and on the webcast, for your time and attention today. Before we begin the presentations, please take a moment to review these disclaimers: during the program today, we'll be making some forward-looking statements which involve risks and uncertainties, and you may refer to our filings with the SEC to learn more. Looking at the agenda for today, we are pleased to have a lineup spanning Absci's leadership team across the board, and we are honored to welcome some guest speakers to present today as well.

First, Sandi Peterson, Operating Partner at Clayton, Dubilier & Rice, and the Lead Independent Director for Microsoft's board of directors, will deliver opening remarks. Previously, Sandi had been the Chair and CEO of Bayer CropScience and CEO of Bayer Medical Care. She is also the former group worldwide chair for Johnson & Johnson and is a renowned global business leader who we are honored to have with us today. Next, we'll hear from Sean McClain, Absci's founder and CEO, Zach Jonasson, Absci's Chief Financial Officer and Chief Business Officer, and Andreas Busch, Absci's Chief Innovation Officer. Then, after a short break, we are very excited to have a guest presentation from Jonathan Cohen, VP of Applied Research at NVIDIA, to discuss the evolution of AI and how recent advances can be applied to life sciences and biology.

Afterward, we'll hear from Amaro Taylor-Weiner, Absci's new Chief AI Officer, and Christian Stegmann, Absci's SVP of Drug Creation, who will discuss new details about our internal pipeline of drug creation programs before closing remarks from Sean, and we'll open the floor to Q&A. Thank you again, everyone, for your time and attention today. I'm excited to be handing it off to Sandi Peterson to kick off our presentation.

Sandi Peterson
Operating Partner, Clayton, Dubilier & Rice

First of all, thanks, Alex. It's great to be here today and to welcome you to today's discussion. I'm very excited to talk to you about what Absci is doing, but I want to put it in a broader context. As Alex mentioned, I've been working in the area of technology and biology for a fairly long time, and I just wanted to bring you back to a little bit of history. One of the interesting things that started happening 15-20 years ago on the campus of the Institute for Advanced Study is that Arnie Levine, who, as all of you know, is one of the godfathers of oncology, started trying to convince and recruit the physicists at the institute to come start a new program with him.

His pitch was, "Take your brain, take the understanding of physics, and apply it to biology, because biology is the next frontier." And quite honestly, he was quite successful in doing that. He ended up convincing a handful of physicists that the future was in understanding the use of technology broadly, AI specifically, of course, but technology broadly, and applying those rules to biology. And from that work, a handful of companies were spun out, some of which you all know. But I think the interesting thing about this is that this is a 15-20-year journey. And if you had asked somebody like Freeman Dyson, who unfortunately passed away recently, whether he would do it all over again, I've heard him say that if he were born today, he would focus on biology, not physics, because that's where the frontier is.

Now that we have computational capabilities that can actually enable us to learn things about biology that we couldn't even approach 10 or 15 years ago, it is a very exciting place. Why am I here, quite honestly, and why do I feel very bullish about where we are and the specific things that are unique about this company? You will hear from John a little later today, and I'm sure all of you have read about this already, but there are a number of things that have happened in the last decade, decade and a half, that have enabled biology to be thought of in a very, very different way using these tools.

Obviously, image processing has developed in such a way that we can now do things that we couldn't even have thought of doing five years ago. Everybody knows what's happened in the world of genomic sequencing: there's now another Moore's Law being applied to it, and it's being taken down to very, very deep molecular levels that we couldn't reach a while ago, and if we could, it cost way too much money. And we all know about the data and computational horsepower, from which NVIDIA has been a great beneficiary, that was originally created, and made efficient, to enable cloud computing and big data centers operating at much higher speeds and capabilities at much lower cost.

We now can use all of that capability that's been built, and NVIDIA, as you'll hear more about, has been at the forefront in enabling some of that. And then last but not least, large language models. Microsoft, years and years ago, spent a ton of time and energy, and holds a huge number of patents, on translation, and that's where some of the early thinking came from: how do you translate from one language to another? That thinking then started to be applied in a broader context, moving away from very mechanistic ways of thinking about AI, and that has been a very significant breakthrough. So all of that has happened in the last 10 years.

I would say there's the classic, no offense to anybody, Silicon Valley AI hype right now. Everybody's relabeled themselves as an AI company because they think they can raise funds that way, but most of them really aren't AI companies, as we all know. But I think one of the interesting things about Absci, in particular, is that they've been at this for a long time. They've been at it for a decade, and the work that's been done in the last 2-3 years is fundamentally different, quite honestly, from the work they did in their early years. And there are a few things that are quite unique about this organization and this company.

One of them is that they have a team of people, and you'll hear from Andreas, or as I like to call him, Andy. He and I used to work together at Bayer years ago. He understands drug discovery. He's a brilliant drug hunter, and he understands the process better than most people do. And somebody who understands that can apply these technologies in a much more thoughtful way than somebody who's just a technologist. So I think that's a very important distinction that this company has. Then, obviously, there's the combination of the wet lab and the silicon labs, as I jokingly call them. That combination is also quite unique, and you'll hear more about it today.

And then, just more broadly, our capabilities as a technology company and our ability to recruit unbelievably talented people, whom you'll hear from today, make me pretty excited about what this company is doing. Most people underestimate how long it takes to get these technologies to the point where they are actually commercializable and what that really means, and my sense is we're on the cusp here. You'll see this company now has a pipeline. It's not just a dream, and I think that's a huge indication of why we hope this will work, and it is quite different from most others. We have real targets, we have a real pipeline, and we have a lot of exciting things going on here. So for all of those reasons, hopefully, you'll enjoy this morning.

I'm very excited about what Absci is working on and the possibilities of the future: doing some very unique and different things in the world of drug development and drug discovery, completely changing the timelines of those things and our ability to drug targets that we literally couldn't access in prior eras with prior technology. So with that, I am going to stop talking, because it's much more interesting to listen to Sean and the rest of the team. I just wanna thank you all again for being here. Thanks, Sean.

Sean McClain
Founder and CEO, Absci

Thank you, Sandi. Imagine standing at the cusp of a technological revolution, much like the dawn of the internet or the inception of the iPhone. With the rise of generative AI in drug discovery, we are at such a threshold. The buzz, the projections, the promise about AI's potential can often sound like a story of the future, many years out. But here's the twist: the future is here today. At Absci, we don't just dream of the future, we build it. And while the world has been caught up in the rapture of what AI might do, Absci has been showcasing what it can do. Before the world caught on to the transformative power of AI, before the buzzword resonated in every boardroom and lab, we at Absci were already harnessing its potential. We were doing AI before it was cool.

We've been integrating AI with our robust, scalable wet lab technology for over three years, long before the rise of the AI hype. But our story doesn't start there. A decade prior, we began building our ultra-high-throughput, scalable wet lab technology for measuring protein-protein interactions, with a clear vision of scaling this technology to inform antibody design. Why is this important? Because this fundamental wet lab technology is the cornerstone of our present and our future. It enables us to generate training data for our AI models. But our wet lab technology not only generates data for training, it also validates our AI models. This integrative dance between AI and wet lab occurs in an impressive six-week cycle time and drives our mission to create better biologics for patients, faster.

Imagine a world where drug discovery isn't a marathon but a sprint, a world where timelines aren't stretched over years but compressed into months. Historically, achieving IND for a drug asset took, on average, five and a half years. Now, picture this: with Absci's pioneering approach, we're on a trajectory to reach that milestone in a mere 18-24 months. That's not just faster, it's transformative. Our internal portfolio tells a tale of innovation through AI: four wholly owned assets, three of which have the potential to be best-in-class, and one that holds the promise of being first of its kind. Each asset is derived from a rigorous scientific process anchored in cytokine biology. What's even more exciting is the rapid progress we're making towards significant value inflection points.

We've strategically developed our assets to demonstrate their potential early on, aiming for proof of mechanism in phase I clinical trials. I'm thrilled to share that all four assets have the potential to reach IND filing in 2025, one of them early that year, and each has blockbuster potential. As we share this journey with you today, it is with a sense of profound humility and gratitude, because the heart of Absci isn't just an innovation story; it's built on belief: belief that we can, belief that we will change drug discovery. It's about making a difference, making a difference in patients' lives. Thank you for believing in our vision at Absci, and thank you to the incredible Unlimiters for your hard work and your dedication, day in and day out, to make this a reality.

Welcome to our present, where AI isn't a dream, it's a reality. Welcome to Absci's R&D Day. What if the next transformative drug was designed at the click of a button? Well, that's exactly what we're doing here at Absci. We're going from the paradigm of drug discovery, where you're searching for a needle in the haystack, to drug creation, where you're actually creating the needle, in our case, a biologic. Now, how do we go about that? How do we use cutting-edge AI with our wet lab data to make that a reality? Let's dive into a little bit of the history of AI. AI started out with image recognition, or classification. You use AI to identify an image: is this a dog, a cat, or a parrot? And the AI is able to correctly identify a dog, a cat, and a parrot.

This has actually been used in AI drug discovery in the past. Andreas at Bayer was using similar technologies, where you take a large 1-million-member small molecule library and figure out which of these compounds you should progress, all done in silico. But now, we've shifted to a new era, generative AI, where you're able to look outside of the training set. The AI can make predictions that weren't previously within the data set, and this is really the future. This is how you start to design these foundational models, which John and Amaro will be talking about today, where you can create new proteins from scratch that have all the design elements you want from the get-go. So why do we want to apply generative AI? Why do we want design control over drugs?

Well, the drug discovery paradigm is ripe for disruption. If we look at how antibodies are traditionally designed, they're designed through immunization. You take, let's say, an oncology target and you inject it into a mouse. The mouse uses its immune system to generate antibodies, but you have no control over what the mouse gives you. The mouse will give you an antibody that binds to a particular epitope, which may or may not give you the biology you want, with an affinity that, again, may or may not be what you need, and you have no control over the immunogenicity, developability, and manufacturability. So you have to go through this iterative process, substituting out one change for another.

This is why it takes over five and a half years to get new therapies into the clinic, and it's one of the reasons why success rates are so low. We need to be able to take control of the biology, turn biology into engineering principles, to increase that overall success rate and to decrease the amount of time it takes to get into the clinic. As you all know, and as Sandi talked about, there's a ton of buzz going around AI, not only AI in general, but AI within drug discovery. And if you look at the companies that have emerged, there are a lot of exciting companies out there, but most of them are focused on small molecules, not biologics. So why is this? Well, it all comes back to the data.

Anybody can take a million-member small molecule library from a CRO, as at Bayer, run it, and use it as training data to create a generative AI model. But that's not the case with biologics. With biologics, you don't have million-member libraries sitting on the shelf. You have to have a living organism make every single antibody you want to test and screen. So instead of a chemist making a small molecule, you have a living organism making an antibody. So how are these antibodies made? They're made in mammalian cells, or CHO cells. And how much can you scale that? You can scale it to maybe thousands or tens of thousands of antibodies being produced in a given week.

Well, that's just not enough data to actually start to train these generative AI models, and that's where Absci fits in. This is the problem that we solved. We solved the scalability problem within biologics to be able to train and validate these generative AI models. So how did we go about solving the scalability of biological antibodies? Well, we did it through the hero of our story, our E. coli cell line. We went with something that was super simple, unlike a mammalian cell. We were the first to engineer an antibody to be produced in a bacterial cell. Now, what's the significance of this? Well, it means you can do a pooled approach. I can take a test tube of my engineered E. coli, which has billions of cells in it, and I can take a million- or billion-member DNA library that encodes the unique antibody sequences I want to test and screen. I take that DNA library, I throw it in my test tube, and now, in that test tube, I have every single E. coli making a different antibody. So I've gone from being able to produce thousands or tens of thousands of antibodies to producing millions to billions in that single test tube in a single week. Now, we're starting to get somewhere with the data. So now that we have those antibodies produced, how do we actually test them for their functionality? How do we look at their protein-protein interactions? All of us know that in designing an antibody, there are two key things that are important.

It's where the antibody binds, the epitope, and what the affinity is; in essence, that's the functionality. It's the protein-protein interaction data. And we developed our ACE technology, a proprietary assay, where we're able to interrogate every single E. coli that's making a different antibody and look at its interaction with the target of interest. And so we're able to get millions to billions of protein-protein interaction data points that we use to train our models, as well as to validate them. So how does this work in practice? It's through our Integrated Drug Creation platform. It's data to train, AI to create, and wet lab to validate. This cycle happens in a mere six weeks.
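The throughput gap described above can be illustrated with back-of-the-envelope arithmetic. The numbers below are the figures cited in the talk (tens of thousands of antibodies per week via mammalian expression versus millions to billions in one pooled E. coli test tube); the function name is purely illustrative and is not Absci's tooling:

```python
# Illustrative arithmetic only, using the figures cited in the talk.

MAMMALIAN_PER_WEEK = 10_000          # "thousands or tens of thousands" per week
POOLED_LIBRARY_SIZE = 1_000_000_000  # "millions to billions" per pooled test tube

def weeks_to_screen(n_designs: int, per_week: int) -> float:
    """Weeks needed to express and assay n_designs antibody variants."""
    return n_designs / per_week

# One-by-one mammalian expression vs. one pooled E. coli library (one week):
traditional_weeks = weeks_to_screen(POOLED_LIBRARY_SIZE, MAMMALIAN_PER_WEEK)
pooled_weeks = 1

print(f"{traditional_weeks:,.0f} weeks vs. {pooled_weeks} week")
```

At these assumed figures, screening a billion designs one by one would take on the order of a hundred thousand weeks, which is why pooled expression is the enabling step for generating training data at model scale.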

We're able to generate millions to billions of protein-protein interaction data points to train our model, and we're then able to use that exact same technology to validate the models. Now, this is what gets AI scientists excited. Right now, AI scientists can go work anywhere. They can go work for NVIDIA, they can go work for Microsoft, they can go work in any other industry they want. What's exciting here is that not only can you use your skill set as an AI scientist to help create better drugs, but you can actually make substantial progress on models, because you're able to operate essentially like you're at a tech company, where you can rapidly iterate on the model designs and architectures. You're able to get the right data, figure out what the right models are, and rapidly iterate.

That's what's exciting for these AI scientists, and that's why we have some of the most cutting-edge AI scientists who have come to work here at Absci. And you're gonna hear a lot about that from Amaro Taylor, our new Chief AI Officer. This platform and this ability to rapidly iterate and generate the data have allowed us to make huge breakthroughs within generative AI in antibody design. Earlier this year, we came out with a manuscript that showed we were able to design an antibody from scratch. So what do I mean by that?

We were able to take the structure of a target, feed it into our model, specify where we wanted the antibody to bind, and then design the critical CDRs that bind to that particular target, all from scratch. This is a huge breakthrough: being able to actually use AI to hit the epitope you want, and then use lead optimization to hone in on the affinities. This is what's gonna start unlocking new biology. This is what's going to allow us to start to increase that overall success in the clinic and shorten these timelines. I've talked a lot about how we design antibodies. One of the areas that we're focused on as well is target discovery.

We have a reverse immunology platform, which Christian's gonna be talking about today, where we're able to take tertiary lymphoid structures in tumor samples, take the antibody sequences that come out of those samples, do a proteome panel screen, and find out what novel targets exist within these patients that have exceptional immune responses. And what we've actually found is a brand-new immuno-oncology target that came out of this, which we're gonna be talking about today. And then we use our AI from there to design the right antibody that achieves the biology for this new target, and from there, we do AI lead optimization. This is our drug creation platform. So what is this unlocking for Absci, the industry, and partners? First, it's allowing us to access novel biology.

One of the areas the industry has struggled with is generating antibodies to GPCRs or ion channels, exciting new targets in oncology and other indications. And why are these targets difficult to drug? Well, there's very little surface exposure of these GPCRs on the cell surface, which means the immune system has a hard time generating antibodies towards them. Well, our AI doesn't. Even if only a very small region is exposed on the cell surface, we can still target antibodies to these particular targets. This is unlocking novel biology and enabling first-in-class assets. We're also able to create superior drug attributes. We're able to create multivalent antibodies: we have a case study today on COVID, where we were able to use AI to engineer one antibody that binds three different COVID variants.

We're able to increase overall half-life by engineering the Fc domain. And we're also able to make pH-dependent antibodies, antibodies that bind in the acidic pH of the tumor microenvironment but not in healthy tissue. We're able to dramatically reduce the time to clinic. You're gonna see today how quickly the assets that we've stood up can get into the clinic. We just started building out our pipeline this year, and in 2025, we have the ability to file our first INDs on all four assets that we've started. That will all be done within an 18-24-month time period.

We're able to increase the overall probability of success, again, by honing in on all the right attributes you want from the get-go: getting the right epitope, the right affinity, the right developability and manufacturability profiles. And one of the other areas that I'm really excited about, which you'll see today, is how AI can actually, what I'll call, patent bust, as well as establish very broad IP claims. What we're seeing from our AI is the ability to search a much larger search space, getting sequences out of the model that you traditionally wouldn't see in immunization campaigns. And since we're able to validate at such a high throughput, we can actually enable broad claims.

If we can go and test over three million unique AI-generated designs with very high sequence diversity in our wet lab, that's gonna enable us to file very broad claims. If you all remember the Amgen v. Sanofi case that was just decided, the court said that you need enablement in order to get broader claims, and that's exactly what our AI model, along with the wet lab, is allowing us to do. The AI model lets us search a bigger search space, and our wet lab technology then actually enables those broader claims. So this is really unlocking a new era within drug discovery. Sandi hit on this in her opening remarks: the team. We have a team here that is bilingual, that understands both the AI and the wet lab.

We're technologists, but we also know what it means to develop drug assets. We have heavy hitters like Andreas Busch, who has joined our team and who, at Bayer and Shire, had more than 10 drugs approved under his leadership. And this powerhouse team is really enabling us to usher in this new technology to create first-in-class, best-in-class assets faster than we've ever seen before. And we have a world-class board of directors that supports our vision. Recently, I'll point out, Frans van Houten joined our board. He was the former CEO of Philips. We also have Joseph Sirosh, the head of AI at Amazon, on our board. A really nice mix...

Again, that mix between tech and biotech: this is the future, and you have to have teams that can be bilingual in order to see these audacious visions through. Over the two years since we went public, we have had extraordinary momentum. We closed a $610 million deal with Merck early in 2022, and we subsequently closed an exciting partnership with NVIDIA. We came out with our manuscript earlier this year showing that we could design antibodies from scratch with a de novo model, and we've built out an extraordinary team, recently adding Andreas Busch, Amaro Taylor, and Zach Jonasson, whom you'll all be hearing from today. Additionally, the partnerships that we have are really helping us push the platform forward.

Not only are we developing our own assets, but we have partnerships that help us become better at what we do, 'cause at the end of the day, we can't do everything. We need partners that really understand how to scale what we have and move it forward. This is just a quick glimpse of the overall team. Again, you'll see that we have domain expertise on the AI side from leading institutes, as well as on the drug discovery side. As you saw in the video today, we have a campus of over 77,000 sq ft that allows us to generate the data for our AI, as well as to validate it. We've raised over $450 million to date, and this has allowed us to make, again, huge transformations within AI drug discovery for biologics.

With that, before I hand it over to Zach, I just wanna say that I couldn't be more thrilled to announce the wholly owned pipeline that we're rolling out today. We're really embracing a hybrid business model, where not only are we partnering with companies like Merck to develop assets, but we're developing our own assets as well. And we're doing this ultimately to show that our technology can deliver: that we can develop assets in 18-24 months, and that we can develop best-in-class and first-in-class assets. But we don't plan to take them all the way through commercialization. We plan to be opportunistic, selling or out-licensing them in the preclinical phase or in phase I or phase II. Again, we're gonna be opportunistic on that front.

Then you'll hear from Andreas and Christian about the four wholly owned assets that we have. Three of them have the potential to be best-in-class, and one of them addresses a novel target that has the chance to be first-in-class in immuno-oncology. And the pipeline is all built on cytokine biology. With that, I'm gonna hand it over to Zach Jonasson, our new CFO and CBO. Thank you all.

Zach Jonasson
CFO and Chief Business Officer, Absci

Thank you, Sean. Good morning, everyone. Zach Jonasson. It's been 35 days since I joined Absci as CFO and CBO, and it's been nothing short of exhilarating. Today I'm going to cover a few topics: first, a little bit on some organizational highlights from the last 13 months; then a few notes on our facilities and infrastructure; and then I want to comment on some efficiency gains that we're starting to realize at Absci that I think are really exciting and point to a pretty bright future. After that, I'm going to spend most of the time discussing the evolution of our business and business model as we integrate our internal pipeline. And finally, a few comments about what's next.

So just by way of introduction: while I joined Absci 35 days ago, I've had a long tenure with the company. I led the Series A and Series B rounds from the investor side and served as chairman from April of 2016 until January of 2021. Prior to Absci, I was a managing general partner and founder of two different venture firms, where I led the firms' life sciences investment strategy and was also the principal fundraiser for four of our funds. In the last five years in that role, I focused heavily on investing at the intersection of AI and biology. Prior to that, I was a founder myself, serving as CEO and CBO, where I established the team, built the strategy, and spent a lot of time building collaborations with large industrial partners.

Earlier in my career, I completed a PhD in cognitive neuroscience at Harvard. So why Absci? For me, and keep in mind I've had a long history with the company and the team, it's the opportunity to be part of a world-class team that's scaling AI for the benefit of patients. If I focus in on the team again, I think we have an amazing team at the intersection of biology and drug discovery. And the culture at Absci is so mission-driven that, for me, it was a no-brainer to join. I will also comment that, having spent quite a bit of my career investing in companies, working with management teams, sitting on boards, and advising startups, I've never been involved with a company that has a faster pace of innovation.

It's really exciting, and I've been involved with a lot of companies. So I think that is also a remarkable feature and a key reason why I joined. And the third point, sort of echoing some of Sean's comments: having invested in this space, what is really different, and there are a number of differentiating points, but one of the most important, is our data engine. Our ability to scale the wet lab in order to train the AI models and then do validation creates this virtuous flywheel, and for me, that's very differentiated in the market. I spent the last five years focusing on diligence and investing in this space, and I've seen nothing even remotely close to this, and I think this is a big part of our future. Now, a few organizational highlights from the last 13 months.

We've established a world-class drug discovery and early development team led by Andreas Busch, and he's handpicked very talented people he's worked with before at Bayer, Christine Lemke and Christian Stegmann, and you'll hear from Christian later today. And then I'm really excited about Amaro joining the team. He's the most recent addition, and I can say I'm truly excited. He brings experience commercializing and productizing AI, but also, equally important, scaling AI teams, and that's definitely in our future here at Absci. Amaro doesn't like it when you ask him about his H-index. I think he's bashful, so I'm just going to put it up there; it speaks for itself. Now, a little bit about our facilities and infrastructure. Despite starting from humble origins, Absci today leverages a 77,000 sq ft laboratory in Vancouver, Washington.

This is where our scalable wet lab systems are in place and where a lot of our drug creation activity occurs. We also have an AI research lab based in New York City, and we're supported by a collaboration with NVIDIA to help scale and refine our AI models. We also have our own supercomputer, and more recently, we established an innovation hub in Zug, Switzerland, where Andreas and members of his team sit. Now I want to talk a little about improvements in efficiency. Looking over the past year, we've had an estimated 17% improvement in our R&D workflow efficiencies, and this is coming from this virtuous cycle, where we get data to train, generate the AI models, and then use the wet lab to validate.

And what we've been focusing on at Absci more and more is the integration of that wet lab and protein engineering team with the AI team. It's a core initiative, and we're starting to see results from it already. We've also been able to reduce gross spend this year while at the same time increasing the number of programs we're working on, both external and internal. And finally, as we look forward, one of the most exciting things about this flywheel is that we learn more and improve the AI after every single iteration. So as we look into the future, we expect to continue to realize efficiency gains and capability gains. This is an overview of our Integrated Drug Creation platform.

Christian will go into much more detail about this, but just at a high level, if you start in the top left panel, we have two different approaches to reach a candidate. The first one leverages a partner or ourselves identifying a target of interest, and then we use our de novo AI platform to generate antibodies to that target. After mechanistic validation, we then have a lead, and we will take that lead into our AI-driven lead optimization workflow. In that workflow, we're using AI to do multiparametric optimization to arrive at a candidate that we think has the potential to be best-in-class. On the bottom panel, this is our workflow that's more oriented towards first-in-class, where we leverage patient data to identify a target and its cognate binding antibody.

After further validation, once we have a lead, we then put that through our AI optimization workflow with the aim of having a candidate that could be first-in-class and be highly defensible in the market. So Sean talked about the differentiated value drivers that the platform delivers. I'm going to go over it a little bit again, because I think this is really important for not only investors, but our partners. The first point is the access to novel disease biology, and as Sean mentioned, we're now training our AI models to tackle this problem of how to design antibodies to membrane-bound proteins that have until now been inaccessible. And this is really designed to enable first-in-class therapeutics. Secondly, engineering in drug attributes that can enable best-in-class from the very beginning. Thirdly, the speed with which we can reach the clinic.

Sean mentioned reaching IND in less than two years. I'll mention one other metric that I'll come back to later, but the ability to reach a candidate in as little as 6-12 months, I think, is truly remarkable. Fourthly, engineering for a higher probability of success. We do that in our multiparameter optimization AI, and here we're looking for developability attributes that we can engineer in at the very beginning. And the goal here, with a higher probability of success, is also to have a higher overall NPV per program. And then finally, I'll just echo Sean's statements. Having worked in this industry for so long, I think it's remarkable to see the breadth of composition claims that we can support using our AI and our wet lab.

This enables us to go on offense as a fast follower, and it also enables us to have really broad defense in our first-in-class therapeutics. So a little bit on our speed. This is a diagram looking at the traditional timelines at the top for industry, and here we sort of focus on the left-hand side. It's a typical 4-6-year process to reach an IND for industry. For Absci, we're doing that in two years or less, and specifically reaching a candidate in 6-12 months, which I again think is remarkable. This also has profound business implications. If we can reach the clinic and reach the commercial phase sooner by 2+ years, that allows for a much longer period of royalty generation, and since our business model is partnership-based, that's very important.

So if I roll together some of the advantages of the platform, you can think of what we're doing as being an ultra-efficient IND generation machine. We're leveraging our AI drug creation platform to design both first-in-class and best-in-class therapeutics. We're exploiting our speed advantage, two years for an IND versus what we estimate is 4-6 years for industry, hence, more programs per unit time. We're also exploiting cost advantages to reach IND. We estimate it'll take $14 million-$16 million for us to reach IND, versus a broad industry average that runs in the range of $30 million-$50 million, hence, more programs per unit cost. So here, I'm going to show a little bit about our pipeline. I'm not going to give any details. I don't want Christian to be angry with me.
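Rolling Zach's figures together, the per-program advantage can be sketched numerically. This is a back-of-the-envelope illustration in Python using only the numbers quoted above (2 years and $14M-$16M per IND for Absci, versus 4-6 years and $30M-$50M for industry); taking the midpoints of the quoted ranges is my own simplification, not Absci's math.

```python
# Back-of-the-envelope comparison using the figures quoted in the talk.
# Midpoints of the quoted ranges are used purely for illustration.
absci = {"years_to_ind": 2.0, "cost_musd": (14 + 16) / 2}             # $15M midpoint
industry = {"years_to_ind": (4 + 6) / 2, "cost_musd": (30 + 50) / 2}  # 5 yrs, $40M midpoint

time_advantage = industry["years_to_ind"] / absci["years_to_ind"]
cost_advantage = industry["cost_musd"] / absci["cost_musd"]

print(f"~{time_advantage:.1f}x more programs per unit time")   # ~2.5x
print(f"~{cost_advantage:.1f}x more programs per unit cost")   # ~2.7x
```

On these assumptions, the same calendar time supports roughly 2.5x the programs and the same budget roughly 2.7x, which is the "more programs per unit time, more programs per unit cost" claim in arithmetic form.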

He's going to go through that in much more detail later. But the point here is to frame this in terms of what it means for our business. So we're leveraging the AI platform advantages we just talked about to create these internal programs. And our plan is to opportunistically look for partnerships around these assets, from the candidate phase all the way into early clinical. The goal here is to create and capture significant near and long-term value, but also, this provides us with public information we can use to further validate our platform with investors and to support business development. So it's very synergistic.

This is just a little bit of a look into why we would do this, and the short message here is that internal programs provide partnerships that have a very attractive risk-return profile. So while internal programs do require us to make investments upfront, they offer larger partnership deal economics, more optionality in terms of when we would seek to partner them, and they provide an overall greater NPV. So the chart here is just for illustrative purposes. These are not exact numbers. I'll show you some actual industry data on the next slide.

But the point here is, if you look at a discovery-stage, traditional drug creation partnership versus a partnership around an asset that's, say, at an IND, the upfronts, the milestone payments, and the royalties will be substantially larger the further developed the asset is. So here's some recent data, and I'll spend a little bit of time just explaining how we put the analysis together. Here, we looked at all the oncology and immunology partnership announcements dating back to January 2021, and we looked at partnership deal economics, meaning upfronts and milestones, in the cases where these were announced, for biologics only. And here we're comparing partnerships that were struck at preclinical phase, which was primarily IND or candidate phase, versus phase I.

I think it's pretty clear: you can see the data indicate a significant increase in deal economics associated with moving an asset to phase I versus partnering at preclinical. We don't have a good breakout on discovery versus preclinical, or versus candidate and IND. Most of the deals do not announce whether it's at candidate phase or before that, but based on our experience and some limited data, we can say that you see the same increase in deal economics when you go from target-based platform partnerships to partnerships that are based around a candidate or an IND. So I think the key takeaway is that there's potential to create significantly more value with additional development.

So rolling this all together into the evolution of the business model, our mission is to really define and create a growing portfolio of drug creation and internal program partnerships. The diagram here on the left is for illustrative purposes only. This does not reflect our portfolio today, but it lays out some key principles. For example, diversification: you could imagine the colors on the circle diagram being different indications. And in every case, we want to look at the potential risk of a program and make sure the potential return makes sense in that context. And when we map that against internal program partnerships versus discovery and creation partnerships, we can think about the portfolio consisting of drug creation partnerships that offer R&D and upfront funding. It's a broader set of indications in those partnerships, so we're getting diversification there.

But it is lower relative downstream milestones and royalties, and it gives us less control. Whereas when we look at internal program partnerships, we do require upfront costs and investment on our side, and the development partner is not locked in from the beginning. However, the downstream milestones and royalties are significantly higher, we have more optionality, and we focus on a set of indications where we have deep expertise, and Christian will talk a little bit about the cytokine biology. So what we're trying to do strategically, and what we are doing strategically, is growing and diversifying our portfolio of partnerships. A few metrics. In the last two slides here, I'll talk about metrics and then a view as to what's next. Today, we will announce four internal programs.

Three of those are designed to be best-in-class, and one we believe will be first-in-class. We also are continuing to project that we will have 10 new active programs this year, and that's based on a strong pipeline of interest in our drug creation platform. We have not even announced our internal programs yet, so that figure of 10 does not include our four internal programs. We're also estimating a 17% improvement in our R&D workflows over the past year. And then, at the bottom, for continuity, I will note that we are continuing to focus investments and operations on strategic initiatives and near-term inflection points, with cash, cash equivalents, and short-term investments providing runway into late 2025. So what's next for Absci?

In short, we're gonna leverage this flywheel, and I think what comes out of that are some near-term and midterm value drivers. We expect to see increased capability and efficiency gains in our AI platform for drug creation. We expect to announce and cement new drug creation and internal program partnerships. We're working to advance our existing drug creation partnerships and our own internal programs, and we're looking to also initiate new internal programs. And I think that's a good segue to introduce Andreas Busch.

Andreas Busch
Chief Innovation Officer, Absci

Thank you. Well, thank you, Zach. After those three speakers, there's very little left for me to say, to be honest. I'm, of course, humbled by what I heard, and I want to just give a little bit of professional background beyond what Sandi has already told you. I was R&D head at Shire and Bayer. There, I was responsible for the R&D portfolio and strategy, with the help of many people. I did bring several blockbusters from bench to approval; some of them you can see here. And yes, I did have, and still have, several positions on boards or scientific advisory boards. But I think right now I'm at the most exciting place I've been in my career.

What you don't see when you see this professional background is the number one reason for success, and the number one reason for the success at the end of the day is being fortunate enough to have the best team working for you. I just want to make sure that everybody recognizes that I'm super humbled and privileged to work, to have worked, and presently to work with what I do think is the best group of people. You will hear today from Christian and from Amaro when they talk about our Drug Creation capabilities and when they talk about our AI approach to drug creation.

But also here in the audience, and please try to take a chance and talk to them during the break, is Jens Plassmeier, who is the head of our wet biology, of our wet labs. If there is one person on this earth who can take an E. coli, do this, and understand what antibody can be expressed by this E. coli, taste it, and say, "This is gonna be an effective antibody," it is Jens. So go to Jens if you want to get a little infection. Then I have the big privilege to work together with Christina. Christina has had a great career as a Founder, as a CEO, as a Head of Business Development, but mainly as a Corporate Strategist.

We were able to get her from Ferring, where she was most recently Head of Corporate Development, and she's certainly a huge asset for our corporate strategy. And then, yes, I'm gonna present to you Christian and Amaro, who also joined us this year. I think if you look at this team, you can see the experience we bring together, and that we really span from technology to drug creation. I think we do have everything in place. We have a collective 100 years of discovery and development experience, and this team has indeed contributed to a number of what I would consider blockbuster programs. And if we look at the pipelines of Bayer and Takeda, there are still a couple of big, big assets to come. People ask me, "Why Absci?

Why did you join Absci?" It is actually very, very simple. I developed a huge confidence in AI a long time ago, probably about 10 years ago, or a little more, 12, 13 years ago. It was clear to me that AI must have and will have a dramatic impact on R&D efficiency and success rates. And that's when, at the time, we introduced the first AI capabilities at Bayer, with the understanding that it should contribute to target identification and validation, it should contribute to compound optimization, and it even should contribute to clinical trial design. The problem we had, and the reason for the slow progress we saw at the time, really was that we didn't produce enough data.

We didn't have enough consistent data, because for AI, what's true for most aspects of life is true: bullshit in, bullshit out. You need a lot of good data, and you need consistent data, for AI to be as productive as we want it to be and as it can be. When I was first exposed to Absci, I saw that Absci optimally addresses this past weakness by being capable of producing consistent data, high-quality data, and a gazillion megabytes of data. That's what we need for AI to transition into the productivity mode we want, and that's why I joined Absci's board of directors early last year. Then I saw very quickly how disruptive our potential really was, and the reason for that was the true full integration of wet labs with AI.

This is what makes a difference, and where we still have the competitive edge over a number of different companies. And what we did see at that time is that we were indeed in a situation where we could generate better biologics faster based on the tech stack we had in place. So when I was sitting over there, I looked at this poster here, and I saw, "Creating better biologics for patients faster." And I thought, BBF. My daughter always tells me BFF is best friends forever, and I thought maybe today we can introduce BBF, which is Better Biologics Faster.

I did see this incredible progress, and this incredible progress certainly stimulated discussions at the board, and by the way, Zach was on the board at the time, too. It stimulated this discussion: if we can do that, why shouldn't we be in a situation to produce our own internal pipeline and make sure that we capture significant value from those significant capabilities ourselves? And after some discussions, Sean decided to fire me from the board and hire me back onto his executive team, which was a tough moment in my life. But I've enjoyed it.

I have to say, I've enjoyed the last year very much, because we saw this incredible progress we've made: from very quickly producing significantly multi-dimensionally optimized antibodies based on an antibody we had in place, to generating, really zero-shot, the first de novo designed CDR3 region for a known antibody. So this is really getting us into the situation where we're in right now. With this continuous improvement and this continuous progress, we will be in a situation to really be capable of very, very quickly, like Zach and Sean described before, getting from a target to a better biologic. A better biologic in the sense of multi-dimensionally optimized, very, very fast. This is the tech stack, and you've heard it already.

At the beginning, you wanna appreciate how we choose targets for our own portfolio. With partners, we're going for pretty much every difficult target they offer to us, because we believe we can address, in particular, difficult targets better than others can. For our own portfolio, we want to make sure that the validation of the target is as good as it can be, and for that, we do use, I dare to say, some natural intelligence, which is our deep disease insight, which I think Christian, Christina, and myself bring on board. But the reverse immunology platform, which Christian will talk about, means that we start at a much higher level of target validation than you usually get in industry.

We then, of course, want to be in a situation very soon where we can deliver, based on nothing but the sequence of a protein, de novo designed antibodies. And from that first shot, from the first hits we will get, if they are not already delivering exactly the profile we want, we are in a situation to really multi-dimensionally optimize those antibodies. And looking at those tech stacks and bringing them together really is the basis for what Zach has described as our hybrid business model. I think we are in a very good position, from the talent and from the tech stack, to be the best partner for big companies. I think we understand what big companies really need.

I remember what I needed for a very long time in my life, and I do know how to discuss their needs and what we can deliver for them, and we do think that this is very disruptive to their R&D process. But we also are in a situation to develop our own valuable pipeline very quickly, at record speed, with limited resources and limited investments. With that, I wanna introduce the speakers of today, our latest additions to the innovation team, of which I'm incredibly proud. This is Amaro Taylor-Weiner. Yeah, very, very little to add to what Zach has already said. He hates to be named together with his h-index, which is below 50. Yeah, which is...

The citations, they were the ones from last week, so they're probably over 20,000 this week. There's a lot to say about Amaro and his great background, but I think what he should be recognized for is the science he brought, not just in AI, but in protein science, and the management and leadership skills he brought in building up and optimizing teams at the places he was. It's just incredible having Amaro on my team now. This is so much more fun. Then there's Christian. Christian, who, you know, helped to further decrease the diversity of our team by being German. He has a great background, though, in his education: Max Planck, MIT, Broad Institute, Stanford. It doesn't sound so German after all.

He has a great history in different types of companies, including Bayer, where I observed him for quite some time and had the pleasure to work with him, and he has a great track record as a drug hunter. We were blessed enough to snatch him away from Vifor, where he was head of research and non-clinical development, and he brings this experience to Absci. With that, I wanna introduce the next part of the program, which I hope you were all waiting for, which is the break. I thank you for your attention. I hope you have a chance to talk to the people I've introduced to you, and I'm enjoying this day very much and look forward to the discussions. Thanks a lot. Oh, did you wanna introduce the break?

Alex Khan
VP of Finance and Investor Relations, Absci

Thanks, Andreas. So we'll take a brief break now and resume the presentation around 10:30. Refreshments are available in the hall right outside the room. Again, we'll resume the program at 10:30 with John Cohen from NVIDIA. Thanks, everyone. ... Welcome back, everyone, and we'll resume the program with a presentation from John Cohen, VP of Applied Research from NVIDIA.

Jonathan Cohen
VP of Applied Research, NVIDIA

Good morning. Thanks. Thanks for inviting me to speak. So Sean asked me to just say a few words about who I am. Actually, I'm a computer scientist; I started my career in computer graphics. I've worked in Hollywood on special effects. I won a Technical Academy Award for some work I did on simulating fluids in lots of movies. I joined NVIDIA in 2008, originally in NVIDIA Research as a research scientist. My initial focus was mostly on solving partial differential equations on GPUs, which led to lots of interesting work.

And then I ended up managing some of the software library teams, which resulted in me founding the Deep Learning Software Group at NVIDIA around 2013, which led to cuDNN, which is a very successful accelerated library for deep learning, to technologies like TensorRT, and a whole bunch of other stuff. I stepped away from that role, actually went to Apple, came back to NVIDIA about five years ago, and today I'm a VP of Applied Research. I have three major projects that I oversee. The first is NeMo, which is our platform for large language model development. The second is BioNeMo, which I'll talk a little bit about today. And the third is Riva, which is a platform for speech recognition and speech AI.

It was interesting to see Sean's slide showing, you know, the history of AI going back only 10 years. It goes back quite a bit farther than that, if you think more broadly. I guess there's a lesson there, that we only think about the last 10 years, because it just works a lot better in the last 10 years than it ever did before. So AI really started in the 1950s, roughly, and the initial approaches to AI were people, smart people sitting down and thinking about, Well, what are the rules that govern how to make decisions? And coming up with these very easily understandable rule-based systems, and so things like Prolog or theorem proving, kind of early technology.

The problem with this is that you need to sit down and understand a system, understand all the rules that govern how it works, in order to write them all down explicitly. And so there is some technology that came out of this that was very useful, but what really made AI a much more useful technology was the realization, in the 1980s, that, in fact, the world is a more statistical and probabilistic place. And so that led to a lot of the modern methods we would today maybe call data science or statistical pattern matching: things like support vector machines, the invention of neural networks. And so these are algorithms that can look at masses of data and look for statistical patterns that can then be applied.

This was very effective. It's used in lots of areas, but what it really did was set the stage, in about 2010-2012, for what we would now call deep learning. And deep learning is simply taking the neural network, which was one of the algorithms developed in this machine learning period, and realizing that if you approached neural networks in a slightly different way and threw a whole lot more computation and a whole lot more data at them, they just worked extremely well.

I think what people think of as the modern AI era really started around 2012, with machine learning algorithms applied to computer vision problems that started to actually achieve human-level performance. What's remarkable to me is that this modern era we're in, call it the generative AI era or the foundation model era, typified maybe most by ChatGPT, which really was a bit of an earthquake that shook the entire industry, only arrived about 10 years after this realization that deep neural networks were such a powerful tool. And so if you look at the sweep of history here, the cycle time from one technology to the next, to the next, to the next, is shortening, and the power and the scope and the generality and the applicability of each new wave is significantly higher than the one before. And it's really shocking to me to think that deep learning is only 10 years old, and we already have something like ChatGPT. Imagine what we're gonna have 10 years forward from now. So I thought it would actually be helpful to talk a little bit about how large language models work. I think we hear a lot about AI and this notion that you need great data and lots of compute, and you can train these very powerful generative models.

But it's actually a relatively straightforward thought exercise to develop a little intuition for what these models actually do. And so fundamentally, a modern language model, and I'll start in the context of human language, is something very simple. It's an algorithm that learns to predict the next word. And the way something like ChatGPT is trained is you give it an input sequence from the internet, Wikipedia, textbooks, whatever it is, and you teach a neural network to predict the next word. "Through hard work, he supported himself and his..." A reasonable next word might be "family." Or I can say, "Because it crossed state lines, their criminal behavior attracted the attention of the..." This would be "FBI." This seems like a very simple task.

All we're doing is we're taking a sequence of words and predicting the next one. But if you think about how much of your knowledge and common sense and understanding of the world, and knowledge of human language, and grammatical structure, and gender agreement, and pronoun agreement, and all the things that go on in your brain, you need to engage in order to solve this task, you realize this is no simple task at all. It's simple to state, it's extremely general, but it's very hard, and a neural network that can actually perform this task has learned an awful lot about the world.

Or consider, "Joe Biden, who in 2011 was the vice president." In order to do this, you need to know who Joe Biden was, you need to know his history, you need to understand time, you need to understand the concept that a person has different jobs at different times, et cetera. It's just a tremendous amount of common sense wrapped up in this very simple answer. We can feed it things like programming languages. So for any C programmers out here, this is very simple, idiomatic C. We're gonna loop over a string: int i, for i equals zero, i is less than, and probably the next token would be strlen. If you're a C programmer, this would look familiar to you. This neural network, in order to predict the next token, has to do other tasks.

So, for example, "The restaurant was fabulous. My star rating is?" So this is a sentiment analysis problem, right? Framed in the context of predicting the next token, but in order to do this task, you need to understand a review, you need to understand its sentiment, and understand there's a five-star rating, and probably this is saying this is a five-star review. So all of these many, many, many tasks are wrapped up inside this seemingly simple task of just predict the next token. And it turns out that if you train a very large neural network on this task with enough data on a giant supercomputer, it can actually work. And again, you know, the existence proof here is look at ChatGPT. This is fundamentally the technology underlying ChatGPT.
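The "predict the next word" objective he describes can be made concrete with a toy sketch: a bigram lookup table that proposes the most frequent follower seen in a tiny corpus. The corpus here is invented for illustration; a model like ChatGPT learns this mapping with a transformer over web-scale text rather than a count table.

```python
# Toy illustration of the "predict the next word" objective: a bigram model
# that picks the most frequent follower seen in a tiny training corpus.
# Real LLMs learn this with a transformer; this corpus is invented.
from collections import Counter, defaultdict

corpus = (
    "through hard work he supported himself and his family . "
    "he worked hard and supported his family ."
).split()

# Count, for each word, which words follow it and how often.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent word seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict_next("his"))  # -> family
```

A bigram table only ever sees one word of context; the point of the deep-learning version of this task is that a large network can condition on the entire preceding sequence.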

The reason this works is that predicting what comes next is equivalent to understanding the micro, meso, and macro structure of a sequence. So we can think of human language as a sequence of words, and the structure in this case is everything from grammar to common sense to memorization of facts. There's some emerging theory here; it's, I would say, relatively light on theory, as a lot of modern AI is. But there's some emerging theory that you can think about this as a compression problem: essentially, what a neural network is doing is compressing all of the knowledge required to solve this task, encoding it as the weights in these neurons. And so that's it. I'm not gonna say more about that, but it's actually a super interesting topic.
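To make the compression framing slightly more concrete (the notation here is my addition, not from the talk): training minimizes the negative log-likelihood of each next token,

```latex
\mathcal{L}(\theta) = -\sum_{t=1}^{T} \log_2 p_\theta\!\left(x_t \mid x_{<t}\right)
```

which, by Shannon's source-coding argument, is exactly the number of bits an ideal coder using $p_\theta$ would need to encode the sequence. Lowering the loss is, in that sense, literally compressing the training data better.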

It really was the invention of this particular kind of neural network, this particular structure, which is, again, really just a mathematical formulation, what's called an autoregressive transformer deep neural network, or, as most people would say, a transformer or a transformer DNN, invented around 2018, that really unlocked this ability. This is a very powerful representation. It's capable of learning all of these complicated things in a very efficient way. Transformer DNNs can model complex structures, and there seems to be no real limit to how complicated a structure they're capable of modeling.
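The key mechanical ingredient he's alluding to can be sketched in a few lines: causal (autoregressive) self-attention, in which each position may attend only to itself and earlier positions, which is what makes next-token training possible. This is a single head with no learned projections, purely an illustration of the mechanism, not anyone's production code.

```python
# Minimal sketch of causal self-attention: each position mixes information
# only from itself and earlier positions, never from the future.
import numpy as np

def causal_self_attention(x: np.ndarray) -> np.ndarray:
    """x: (seq_len, d_model). Single head, no learned projections, for clarity."""
    seq_len, d = x.shape
    scores = x @ x.T / np.sqrt(d)                               # pairwise similarity
    mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), 1)  # strictly-future slots
    scores[mask] = -np.inf                                      # hide future positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)              # softmax over the past
    return weights @ x                                          # weighted mix of values

x = np.random.default_rng(0).normal(size=(5, 8))
out = causal_self_attention(x)
print(out.shape)  # (5, 8)
```

Note that the first position can only attend to itself, so its output is its own input unchanged; a real transformer adds learned query/key/value projections, multiple heads, and stacked layers around this core.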

They're very scalable: as you make the network bigger, you give it more capacity to learn things, you train it on more and more data, and you scale it up with more and more computation, and they just seem to do better and better. This was an observation Ilya Sutskever, the chief scientist of OpenAI, points out all the time: the realization that this was a scalable thing is really what unlocked a lot of the applications for OpenAI, as they realized, "Let's just build giant computers and train bigger and bigger transformer models and see what happens." And we see the result. So modern language models, again in the context of human language, are enormous.

So the training data, and this is the data where you feed it, you know, a paragraph and it predicts the next word: you train it on a corpus equivalent to, let's say, 1,000 times the size of Wikipedia. So think about reading Wikipedia, and then that's 0.1% of the training data you feed a modern neural network. The number of parameters in these models is easily up to, well, 100 billion, I would say, is medium-sized; a trillion would be considered large, but absolutely, this is something people are doing today. And to train it, this computation, in order to teach it all these things, where it is implicitly learning all these micro-, meso-, and macro-scale structures, you're running on the largest supercomputers in the world for several months.

So tremendously expensive in terms of compute and time. Well, it turns out the same approach that works so great for human languages works more broadly; nothing in anything I showed you is specific to human languages. You can throw any kind of language at it, any. And language is really just a sequence, right? So we can feed it sequences that we believe have some structure. Whether we, as humans, can intuitively understand the structure or not really doesn't matter, because these algorithms are able to extract whatever structure there is. And there's been just an explosion of research in this area. But two kind of early papers that, to me, demonstrate how much potential there was in this technology were DNA BERT and ESM.

So DNA BERT: they basically took human DNA and trained a model in exactly this way, on lots of examples of human DNA, trained to predict the next nucleotide in the sequence. And I just pulled a quote from their conclusion: they fine-tuned the DNA BERT model, pretrained on the human genome, on mouse DNA. So we start with a model that understands human DNA, we just train it a little bit on mouse DNA, and it outperforms all baseline models on basically all mouse DNA prediction tasks. So this shows the robustness and applicability of DNA BERT even across different genomes. So why is this?

There's quite a bit of variation between human and mouse DNA, and yet the model is able to find structures that are so deep that they actually are conserved across different species, right? I think this is kind of a remarkable result. It's, like, buried in the conclusion of this paper, but again, it demonstrates that this model is doing something more than just learning simple statistics. The ESM project, which was started by Alex Rives and collaborators when they were all at Meta, was a very similar idea, but training it on protein sequences. And so they trained it on, I believe, 250 million protein sequences found all around the evolutionary graph. And their conclusion: networks that have been trained across evolutionary data generalize.

Fine-tuning produces results that match state-of-the-art on variant activity prediction. Predictions are made directly from the sequence, using features that have been automatically learned by the language model rather than selected by domain knowledge. So what they're saying is, no human applied their expertise in protein structure or protein function or any of this. The model just learned from the sequences, using essentially the task I showed you before: predict the amino acid, though in their case it's not the next one; they chop out things in the middle and predict the missing amino acids. And so I think these show, across very different kinds of sequences, how powerful these algorithms are at extracting and learning structure. And the consequence of this is that generative AI is turning biology from science into engineering.
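The masked-residue objective he describes for ESM can be sketched with a toy stand-in: hide one amino acid and predict it from the rest of the corpus. Here a per-position frequency table plays the role of the transformer, and the sequences are invented for illustration.

```python
# Toy sketch of masked-token training on protein-like sequences: mask one
# residue and predict it from context. A per-position frequency table stands
# in for the transformer; the sequences below are invented.
from collections import Counter

sequences = ["MKTAYIAK", "MKTGYIAK", "MKTAYIAK", "MKSAYIAK"]

def predict_masked(seqs, mask_pos: int) -> str:
    """Predict the most common residue at `mask_pos` across the corpus."""
    return Counter(s[mask_pos] for s in seqs).most_common(1)[0][0]

print(predict_masked(sequences, 3))  # -> A (seen in 3 of 4 sequences)
```

The real models condition on the full surrounding sequence rather than on position alone, which is what lets them pick up the deep, conserved structure the speakers describe.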

This is just a chart of the number of papers on arXiv, which you could think of as a good proxy for the intensity of the research activity, plotted over time. So the very famous AlphaFold paper, ESM, DNABERT, lots of follow-up work on AlphaFold, and lots and lots of work on predicting structure from protein sequences. And today, just a tremendous number of models. Models that can predict docking affinity, models that can generate proteins, models that can predict protein function, all sorts of things. Huge amount of attention and focus. And the reason why there's so much attention and focus is 'cause the technology works. People wouldn't be working on it if it didn't seem to work. So what is our ambition?

And here, I'm gonna borrow my ambition from Jensen Huang, our CEO. And so he talks about a 1,000,000x speedup. What does a 1,000,000x speedup mean? So some problem today that just seems incomprehensibly intractable, imagine if your computational ability were 1,000,000 times greater than it is today. Imagine the problems you could contemplate solving that today just seem beyond reach. Well, a 1,000,000x is something you can only achieve if you're on some kind of exponential curve. And historically, computer science was riding the exponential curve of Moore's Law. And in fact, we did see 1,000,000x. There's many applications I can point to where over a couple decades, we saw a 1,000,000x improvement.

Computer graphics, which obviously is what created the conditions for NVIDIA's success today, is something where workloads run a million times faster than they did 20 years ago. Remarkable progress, when you think about what a million X can do. And so imagine that improvement applied to life sciences or drug discovery or problem domains that can really impact people's lives. Well, how are we gonna get a million X? So many of you may have heard Moore's Law has ended, Moore's Law is dead. And what does this really mean? You know, Moore's Law is a complicated topic, and I don't wanna go into it too much. But Moore's Law, the way people typically think about it, is really a statement about how fast a single-threaded CPU can run.

And if you're as old as me, and you remember the Intel 286, and then the 386, and then the 486, and the Pentium, and every time a new CPU came out, and everything you had would just run faster, and it was really an amazing time to be alive back in the 1990s. Well, that's not happening anymore, right? Anyone who has a laptop, your laptop is not twice as fast as your laptop was two years ago. That's just not happening anymore. And so that's what's being plotted here in the blue dots.

These are actual measured speeds of a variety of CPUs over time, and you can see they are still increasing, and in fact they're still increasing exponentially, but that exponent is basically 1.1, so it's much, much lower and flattening out. Now, accelerated computing, which is what NVIDIA specializes in, is a different notion of computing, where we say, rather than building a CPU that just basically runs through its instructions as fast as it possibly can, we're gonna reformulate the problem in a parallel way. And the way we're gonna do that is we're gonna look through some application, and we're gonna find the parts of that application that spend most of the computational budget.

And so, for example, in training a neural network, it's this step called backpropagation, which involves lots of dense and sparse linear algebra operations. In the case of solving differential equations, like I used to work on, let's say an elliptic differential equation solver, it's some kind of multi-level linear solve. So there are these algorithms that take up the core part. In secondary analysis of DNA sequences, it's the alignment step. So you can look through whatever problem you're trying to solve and say, "Well, where's the actual computational bottleneck?" And then you can reformulate that computational bottleneck in a way that typically exploits what's called parallel processing.

So rather than decomposing it as a set of tasks that you're gonna calculate one after another, you realize, in fact, we can do lots of things at once. And the way this is reflected on a chip is that the chip, rather than just executing a single instruction at a time, has many execution units that can run in parallel. And on our modern chips, tens of thousands of operations can run in parallel in a single clock cycle. Parallel computing is more complicated. It requires more complicated programming. And NVIDIA really innovated here with the introduction of CUDA in 2007, which coincided with about the time I joined the company. And so I was able to be a part of developing a lot of algorithms for these parallel processors.
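As a minimal illustration of that reformulation, the sketch below splits a hot loop into independent chunks and runs the chunks concurrently. Python threads stand in here purely to show the decomposition; on a GPU, thousands of such independent chunks would execute per cycle.

```python
# A minimal sketch of the idea: find the hot loop (here, a big sum of
# squares), split it into independent chunks, and run them in parallel.
# Threads illustrate only the decomposition, not an actual speedup.
from concurrent.futures import ThreadPoolExecutor

def sum_squares(lo, hi):
    """The per-chunk kernel: independent of every other chunk."""
    return sum(i * i for i in range(lo, hi))

N, CHUNKS = 1_000_000, 8
bounds = [(k * N // CHUNKS, (k + 1) * N // CHUNKS) for k in range(CHUNKS)]

with ThreadPoolExecutor() as pool:
    partials = list(pool.map(lambda b: sum_squares(*b), bounds))

parallel_total = sum(partials)
assert parallel_total == sum_squares(0, N)  # same answer as the serial run
print(parallel_total)
```

The key property making this legal is that each chunk touches none of the others' data, which is exactly what you look for when reformulating a bottleneck for parallel hardware.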

The nice thing about accelerated computing and parallel processing is that it actually is still riding this curve. It's not quite Moore's Law anymore, I would say, but it is still riding this exponential growth, in the sense that we are able to pack more and more transistors onto the same silicon wafer. And so we are still continuing to see Moore's Law-like growth in terms of the throughput of an accelerated computing processor. And so this is one of the reasons why GPUs have been so successful: we are, in fact, still scaling as that first green line shows, rather than as the blue dots are showing. The next thing we can do is we can scale up, and we can scale out. We don't need to run on just one computer.

We can run in a whole data center. We can run across many racks. We can build extremely efficient interconnects that allow us to connect up many processors. The architecture of cloud computing is changing. Historically, cloud computing meant you had lots of computers that were very loosely coupled. The network that connected this computer to that computer was, in fact, very slow relative to the speed of the computers. And so we would take a problem, we'd decompose it into lots of little things and kind of have them all run, and then we'd try to regather the results. And the cloud is now shifting to more of what I would call a high-performance computing style architecture, where you have tightly connected computers, nodes, organized in a hierarchical way, where you essentially have a supercomputer in the cloud.

And this is coupled with improved software layers, improved algorithms that allow us to exploit these kinds of tightly interconnected and hierarchically connected architectures, and can allow us to do things like scale training a neural network up to 1,000 nodes and beyond, very efficiently, with almost linear efficiency. And so scaling up and scaling out is gonna allow us to continue scaling: it's not just one computer now, it's two computers, four computers, 1,000 computers working together. And so that gives us another, let's say, two or three orders of magnitude. But the last couple orders of magnitude have to come from completely new algorithms themselves. And this is where AI is just incredibly powerful.
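A quick back-of-the-envelope on what "almost linear efficiency" demands, using Amdahl's law: with parallel fraction p on n nodes, speedup(n) = 1 / ((1 - p) + p / n). The p values below are illustrative assumptions, not measured numbers.

```python
# Amdahl's law: even a tiny serial fraction caps scaling, which is why
# near-linear efficiency at 1,000 nodes requires the software layers and
# interconnects described above. The p values are illustrative only.
def speedup(p, n):
    """Ideal speedup on n nodes when fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.99, 0.999, 0.9999):
    s = speedup(p, 1000)
    print(f"p={p}: speedup on 1,000 nodes = {s:.0f}x "
          f"({100 * s / 1000:.0f}% efficiency)")
```

At p = 0.99 you only get about 91x out of 1,000 nodes; you need the parallel fraction above 99.99% before the efficiency approaches the "almost linear" regime the speaker describes.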

The ability of these generative models, these transformer-based neural networks, to extract and understand deep structure, means that rather than simulating, let's say, the physics of how proteins fold or understanding some very, very complicated biological process, we can model it with extremely high precision without having to do all that calculation. This is being applied in areas like weather prediction. There's some work from NVIDIA called FourCastNet, which shows state-of-the-art weather prediction using neural networks. Rather than actually calculating and simulating all the detailed, you know, micro-scale physics happening in the atmosphere, you just use a neural network to learn it. This is happening in areas like quantum chemistry, where rather than calculating the very complicated quantum interactions between atomic particles, you can learn force models using neural networks.

And this is happening in things like what Absci is doing, where instead of having to, from some kind of first principles calculation, understand how all of these complicated protein sequences turn into antibodies and what they do, we can just learn the structure, the functional structure of the outcome, by directly building a model that learns it. And so we're seeing across many industries, these, again, many orders of magnitude improvement. And so all of these effects multiply together, right? And so when you multiply accelerated computing times scaling up and scaling out, times the effect of AI, you can start to see these million X improvements in our ability to solve very challenging problems computationally.
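The multiplication described here can be written out explicitly. The individual factors below are illustrative assumptions chosen in the spirit of the talk (accelerated computing, scale-up/scale-out, and AI surrogate models), not measured speedups; only the multiplicative structure is the point.

```python
# Illustrative arithmetic only: the factors are assumptions, not data.
# The point is that independent order-of-magnitude gains multiply.
factors = {
    "accelerated computing (GPU vs single-thread CPU)": 100,
    "scale-up and scale-out (many nodes)": 1_000,
    "new AI algorithms (surrogate models vs simulation)": 10,
}
total = 1
for name, f in factors.items():
    total *= f
    print(f"x{f:>6,}  {name}")
print(f"combined: {total:,}x")  # prints "combined: 1,000,000x"
```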

You're gonna hear-- you've already heard some, you're gonna hear more about what Absci is doing, but I wanted to place it in the context of this larger story, of this kind of million X speed-up, and it's really this marriage of accelerated computing, access to high-quality data, and the innovations in AI that make this happen, right? Yeah, I don't think I need to go through this slide because you're all gonna go through this slide in much more detail. We-- NVIDIA and Absci have had a partnership for about two years, I believe, and it's really around exactly this. It's around taking their models and making them run very efficiently on our hardware. Several of us went and visited the lab in Vancouver. There's me, second from the left.

Sean and several of his team, including Andreas, were there; they gave us a tour of the lab and walked us through in detail how their assays work, and we got to ask lots of questions, and it was just really a fun and interesting trip for us. But I want to talk about, again, kind of the larger perspective on what Absci is attempting to do in creating a foundational model for antibody discovery. And again, I think this is a really interesting and, hopefully, very industry-changing example, but we're seeing it across many different areas. And again, if you want some intuition about this, just think about ChatGPT.

The underlying technology is extremely similar, and it's something where we can all really feel the impact of this kind of AI in the work we do. You've heard Sean use this term zero-shot. So zero-shot is kind of jargon. Shot refers to how many examples of something you need to see before you understand it. So if I want to explain to you, this is bottled water, I can show you this Pellegrino bottled water, and I can show you, I don't know, Dasani bottled water, and a couple different examples of bottled water. I've given you five examples of bottled water, and now you say, "I got it, bottled water." So that would be five-shot learning.

I give you five examples, you've learned the concept. I could also just do one shot. "Hey, this is bottled water." I don't need to show you any other bottled water, but you probably understand the concept even from one. Humans can do this incredible thing, which is referred to as zero-shot learning, where I don't need to show you any examples; I can just describe it to you. A zebra is a horse with black and white stripes. You've never seen one, you instantly get the concept. If you ever saw a zebra in the wild, you would say, "Oh, I heard about this. It's a horse with black and white stripes, must be a zebra." And so people refer to this as zero-shot learning, which is a little bit oxymoronic, right? What is there to learn if you've never seen an example of something?
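The zebra example can even be turned into a toy program: classify an unseen class purely from a described attribute set, with no training example of it. The attribute sets below are invented for the illustration.

```python
# Toy "zero-shot" classification: the class "zebra" is never observed,
# only described as "horse attributes plus stripes". All attributes here
# are made up for the example.
known = {
    "horse": {"four_legs", "mane", "hooves"},
    "tiger": {"four_legs", "stripes", "claws"},
}
# Describe a class we've never seen: "a horse with stripes".
described = {"zebra": known["horse"] | {"stripes"}}

def classify(observed):
    """Pick the class (seen or merely described) best matching the attributes."""
    candidates = {**known, **described}
    return max(candidates, key=lambda c: len(candidates[c] & observed))

# An animal with hooves, a mane, four legs, and stripes:
print(classify({"four_legs", "mane", "hooves", "stripes"}))  # prints "zebra"
```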

But the point being, if you have an intelligent system like the human brain, that's truly intelligent, it can understand concepts even without an example. And so this is what we mean when we use this term zero-shot learning. And I think it's useful to think about a spectrum, on the right here, showing from most specific to most general. So an AI model has some scope for how it can generalize. And in the early days, let's say in the eighties, if you started with these simple machine learning models, they could do maybe one instance of one task. And in fact, this was probably still true 10 years ago with the computer vision models.

So a model that can tell the difference between a cat, a dog, and a bird can do one instance of one task. Now, you can have a model that can do many instances of one task, or a model that can do multiple tasks, or a model that can do any task within a domain. And the more general the model is, the more valuable it is, but also the more data it takes to train that model. So what allows a model to generalize is that it's seen a diversity of high-quality examples of things. The reason I understand that a zebra is a horse with stripes is that I've seen lots of things. I've seen lots of horses, I've seen lots of things that have stripes. I understand color, right?

There's a lot of, again, much deeper concepts going on that allow me to generalize to something I've never seen before. And an AI system that has learned these deeper concepts is able to generalize. And the way you teach an AI system these deeper concepts is you give it access to lots and lots of data across all sorts of examples within your domain. And so Absci is pursuing, as the term Sean was using, zero-shot antibody design. Antibody design is an enormous problem space, tremendous variety, tremendous complexity. And rather than trying to teach a model to do one task in some little narrow corner of antibody space, we can think about a model that can generalize to all antibodies. And so what would it take to build such a model?

Well, again, you need a tremendous amount of high-quality data. You need a tremendously complicated model, which means it's gonna be large, it's gonna be expensive to train, it's gonna have to be trained on a lot of GPUs. But once you have this model, think of the value of a model as the sum of all the things it can do. So once I've trained a model, how much is this model worth to me? Well, a model that can do one thing is worth the value of that one thing. A model that can do a lot of things is worth the sum of all the things it can do.

And this is what we mean when we talk about a foundation model: a model that has these zero-shot generalization capabilities and can generalize to just a tremendous number of tasks in a domain, and therefore is tremendously valuable. And this is a huge shift. The reason, frankly, for NVIDIA's success in the last few years is because training these models requires a tremendous amount of computation, which was not worth doing before if you're just gonna have a model that can only do one thing. But if suddenly this model can generalize to all kinds of tasks within a domain...

You know, ChatGPT has never been asked to write a poem about a zebra in love with an orangutan and what happened in their honeymoon, but if I asked it to do it right now, I'm sure it could, right? If anyone has played with ChatGPT, you've seen this remarkable generalization capability. That's what we're talking about, something that is that general is tremendously valuable, and this shifts the economics. So it's actually worth spending an awful lot of money to train this one model, if this one model can do so many different things. And this is what's really shifted in the economics of AI and deep learning over the last couple of years. And you know, my belief is it's really all due to this rise of these zero-shot capabilities of these models.

I wanted to say a little bit about the platform NVIDIA's building, BioNeMo. You know, we believe in this as well. We believe that AI is turning biology into an engineering discipline. It's certainly in the early days, but we see the rise of these AI models across all sorts of domains. Like I said, everything from folding calculations to protein generation, to molecular generation, small molecule models, docking models, protein embedding models. And so we've created this platform called BioNeMo, which is really just a curated catalog of high-quality models that we take, we optimize, we tune, and we make available on a simple easy-to-use platform that's backed by our DGX Cloud, which is a cloud resource that has GPUs available in the cloud.

Simple API, there's a nice UI, but it's really designed to be integrated into pharma workflows. And the reason we're doing this is because we believe that this is the future. So we want to build a platform that makes it easy for these kinds of models to actually be used by pharma companies. You don't have to worry about procuring a GPU and hiring people who understand AI, and downloading something from GitHub, and getting the Docker containers working, and all the complexity involved in actually doing this stuff for real. We just make that super simple and give you an easy-to-use platform that provides access to these kinds of models. Not gonna say any more than just, you know, plug BioNeMo.

It's still in an early access program, but we believe very strongly in this future, and we're investing heavily in building out this platform. So I'm gonna conclude. Large language models are building on this very long history of AI and deep learning. It's not a new thing. You know, it goes back quite a long time, but there is this shift that's happening, and it's because the innovation in AI, I believe, is increasing exponentially, accelerating exponentially. The fact that we basically just got neural networks working ten years ago, and suddenly we have things like Absci's de novo models and ChatGPT, and models like ESM and DNABERT, and really just incredible things that seem like science fiction already in just ten years is absolutely remarkable to me.

There's two simultaneous revolutions happening, right? One is the rise of generative AI, and the other is the rise of accelerated computing, and it's not a coincidence. They really, they really need each other. Generative AI is the killer app that's really driving the demand for accelerated computing and allows us to make the investment in building this platform that we're making. Accelerated computing is the catalyst that actually allows generative AI models to be even trained in the first place. Zero-shot foundational models, they can learn enough structure, and I hope I gave you a little intuition about what this means, that they can generalize to novel problems even without problem-specific data.

This is just a remarkable thing that I don't think 10 years ago, if you'd asked AI experts if this was possible, I think most of them would have said no. I think it's a surprise that this works as well as it does. I guess I don't want to speak for, you know, like, Geoff Hinton or these godfathers of modern AI, but I think some of the comments they've made recently, to me, indicate that they're quite surprised by how well these techniques will actually work in practice. A consequence of this zero-shot, the zero-shot capabilities of these models, is that the cost economics of training a foundation model are just substantially different from any previous AI system.

It's just, it's just worth investing way more than anyone ever had imagined investing to build a model that's capable of generalizing so much. And the conclusion is that accelerated computing, plus AI, plus this kind of scalable biological data that Absci is able to produce with their high-throughput wet lab assays, I believe, we believe, is what's gonna lead to these kinds of million x improvements and turn biology into an engineering discipline. And with that, I will conclude. Thank you.

Amaro Taylor-Weiner
Chief AI Officer, Absci

Thanks, John. So hey, everyone, I'm Amaro. I'm the new Chief AI Officer at Absci. And today I'll be giving you an introduction to myself, talk a bit about my background and motivations, why I'm excited to be at Absci, and then give an intro and talk about two case studies for how we apply our technology, our AI platform, to achieve de novo antibody design and lead optimization using generative AI. Okay, so starting with my professional background, I've been working at the interface of machine learning and biology for about 10 years, a little over 10 years. And the focus of my career and really my passion and how I pick what to do next is really motivated by making biological discoveries, understanding human disease, and ultimately improving patient outcomes using computational tools.

I think for me, you know, there's lots of things you can do using computational tools, but if you can improve someone's life who's suffering from a disease, that's a really noble calling that I feel very strongly about. And that's really the thread that's carried me through my experience is leading me to Absci today. So I started my career at the Broad Institute in computational oncology. I was in the cancer program. I was fortunate enough to work with Christian there, actually. And so I worked there in precision medicine, developing novel tools mostly using statistics, some deep learning and machine learning. This was right before NVIDIA was really releasing GPUs for us to use, to build new tools to understand patient tumors. Then I did my PhD at Harvard.

I did that in the field of biomedical informatics, where I was focused on building deep learning and machine learning tools to better understand patient biology. Then I worked at a company called Nabla Bio, which uses AI to do antibody engineering. I was one of the early employees there, helped them build their early language models and stand up some of their wet lab. And then I spent about the last four years at a company called PathAI that uses computer vision to do digital pathology. And the mission of PathAI is to improve patient outcomes by creating better biomarkers and diagnostics with computer vision. So I joined that company when it was about 22 people. Now, I think it's somewhere around 200. And I helped scale that AI team and the data science team through my career there.

And one of the things that's really sort of key to many of these aspects, in addition to just being translationally motivated, is that they're all interdisciplinary environments, and they're collaborative organizations where AI researchers have to work with domain experts in order to be able to build the best tools. That's something I take a lot of pride in, is building teams where people know another domain enough to be able to be creative using their computational skill set in that space. And I'm really excited to be doing that at Absci, and I'll talk a bit about that in the next slide. So, super excited to be joining Absci. This is the middle of my third week, but very happy to be with all of you.

And the reason that I decided to join Absci is that I think Absci is really positioned to make a lot of progress quickly on a very difficult problem of developing novel therapeutics. So, why do I think that is? I think they have all the ingredients that are required to build a winning AI team. So they have a differentiated data platform, they have good engineering and infrastructure, and they have technical prowess, and I'll talk about each of those in a second. So for the differentiated data platform, we've heard a lot about this, but one of the advantages that Absci has had is that they were operating for a decade as a synthetic biology company. That means that they have built a lab, they've stood up a bunch of new assays.

They know how to do technology development deeply, and that's given them proprietary assays like the ACE Assay for rapid and quantitative screening of antibodies. But they've also developed expertise for developing novel assays that can really be tailored to power generative AI development. The last thing I'll say is that the rapid cycle times are really amazing. I mean, having worked in digital pathology, in order to get more data, you have to go run a clinical trial. That takes a long time. If I can get new data for my model in six weeks, that's really exciting. And we can definitely do that here. In addition to that, the engineering infrastructure is set up to fuel AI development.

So the pre-built data infrastructure that was already developed to support the synthetic biology lab helps us funnel the data out of the lab and into our models. They've also invested in AI development and AI infrastructure, so we have our own GPU cluster, and we have a team that's been supporting that and the AI research team for the last two years. And I think the last thing, which is a bit of infrastructure, but really cultural infrastructure, is the deep integration between the AI team and the wet lab. So that's one of the things that makes me really excited: the opportunity to help build a differentiated culture where we're able to bring two groups of people together that wouldn't normally have worked together 10 years ago.

You know, in most pharma companies, or in many pharma companies, computational biologists are a core service where they just give you an analysis. They throw it back over the wall. But here we work together, we design our experiments collaboratively, and we really try to understand our hypotheses as a group. And I think that's really how we're gonna take best advantage of our technical prowess, which is really just the people on the team. Sean has done a great job of building a great AI team over the last two years through acquisitions and hires, and we also pair AI scientists with veteran drug creators. So, you know, one of the things I was super excited about was joining Andreas' team and getting to learn from people who have actually brought drugs to market. You know, I'm just a kid building AI tools.

I know how to build AI products, but I don't know how to bring drugs to market, and Andy's gonna teach me that. I'm really excited for that. Cool. So getting into our technology, I'm gonna go over two different case studies. The first case study is using our generative AI to do de novo antibody development. The goal of de novo antibody development is to create antibodies at the click of a button, or basically at the output of a model. The reason that's valuable is because it enables us to go after targets that aren't easily accessible via the screening technologies that exist today. As long as we can predict the structure, or the structure already exists, and we can specify an epitope, we can design an antibody against that target using our generative AI model.

So this is a workflow that shows an overview of our approach. Like I said, we give the model a target antigen structure. That can be a predicted structure using AlphaFold or a crystal structure that was experimentally determined. We pick an epitope, a region on that antigen that we'd like our antibody to bind, and we provide an antibody scaffold sequence. Then our AI tool generates the CDRs, the sequences that go into the complementarity-determining regions of our antibody, and we can synthesize those in our lab. Using our model, we can create millions or even billions of possible variants.

We then have to rank and select those to synthesize, order DNA, and then send them over to our wet lab, where they actually create these antibodies that our model has sort of dreamed up or predicted to be useful. Once we've created those, we can apply our wet lab assays, the ACE Assay and SPR, to assess the binding of those antibodies, and at the readout of those assays, we have validated de novo binders or validated antibodies, which we can then use again as a way of taking the next step in our modeling procedure. Cool. So, the case study I'm gonna talk about is highlighting some results from our preprint earlier this year, where we designed, using our generative model, a de novo anti-HER2 antibody.
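The generate, rank, synthesize, assay loop just described can be sketched schematically. Every function below is a hypothetical placeholder, not an Absci API; the sketch only shows how the stages chain together.

```python
# Schematic sketch of the workflow: generate CDR candidates, rank and
# select a subset, then validate in the "wet lab". generate_cdrs,
# rank_designs, and ace_assay are invented stand-ins, not real APIs.
import random

def generate_cdrs(antigen, epitope, scaffold, n):
    """Stand-in for the generative model: propose n candidate HCDR3 sequences."""
    random.seed(0)
    return ["".join(random.choices("ACDEFGHIKLMNPQRSTVWY", k=13)) for _ in range(n)]

def rank_designs(designs, top_k):
    """Stand-in for in-silico ranking: pick the designs to synthesize."""
    return sorted(designs)[:top_k]  # placeholder scoring

def ace_assay(design):
    """Stand-in for wet-lab screening: returns True if the design 'binds'."""
    return design[0] in "AC"  # arbitrary toy criterion

candidates = generate_cdrs("HER2_structure", "epitope_region", "scaffold_seq", 1_000)
selected = rank_designs(candidates, top_k=100)
validated = [d for d in selected if ace_assay(d)]
print(f"{len(candidates)} generated -> {len(selected)} synthesized -> "
      f"{len(validated)} validated binders")
```

The validated binders at the end would then feed back in as training data, which is the flywheel described later in the talk.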

The goal of the case study was to test our zero-shot model for designing heavy chain CDR3, as well as heavy chain CDRs 1, 2, and 3, for anti-HER2 binding. The reason we're calling it zero-shot is because our model had never seen an example antibody that binds HER2. It learned from other antigen and antibody pairs to be able to create an antibody against HER2. In our case study, we assessed multiple parameters of our de novo designs, including binding rates, sequence diversity, immunogenicity, functionality, and developability. We have many highlights from that preprint, which I think are quite impressive. I won't go through all of them here, but if you want to see the paper, it's on bioRxiv today.

So, the first data highlight I want to talk about is the diverse, novel, and high-affinity binders that were created by our model. So on the left, or I guess the right, you can see a plot showing each point is an antibody that was designed by our model, and along the X-axis, you see the sequence diversity or mutation distance away from trastuzumab, which is the therapeutic antibody that binds HER2. So you can see from our model, we're able to get antibodies with similar binding to trastuzumab or superior binding to trastuzumab that are up to 12 mutations away. Remember, this model never saw trastuzumab, so it's searching an enormous space of possible antibodies and finding binders that humans wouldn't be able to find on their own or using traditional campaigns.

Just to give you a sense of that search space: because we looked in a region of 13 positions, that's a search space of 20 to the 13th power, so that's a lot of antibodies to look through. And in addition to being able to find novel binders, our design procedure was highly efficient. So we had a 4-fold improvement over our OAS baseline for designing de novo binders for HCDR3, and 11-fold for designing all three heavy chain CDRs. This is some new data that's not in our preprint today, but hopefully will be out there soon. We can also assess the functionality of those new binders. So we show that they bind, but do they actually work? Will they actually kill cancer cells and bind HER2?
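Checking that search-space arithmetic from a moment ago: 13 CDR positions with 20 possible amino acids at each gives 20^13 candidate sequences.

```python
# The search-space arithmetic from the talk: 13 positions, 20 amino acids.
positions, amino_acids = 13, 20
space = amino_acids ** positions
print(f"{space:,}")  # prints "81,920,000,000,000,000"
print(f"~{space:.1e} possible sequences")
```

Roughly 8.2 x 10^16 sequences, which is why exhaustive screening is off the table and a generative model has to do the searching.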

So we performed an ADCC assay, and you can see the results here, where the curve for variant A in red shows higher efficacy than the wild-type trastuzumab. So what that means is that our de novo model, without ever having seen trastuzumab, not only designed antibodies that can bind, but we also believe they're more functional, at least when tested in vitro. So to understand that better, we performed what's called epitope mapping, where we compared the epitope of our designed antibody to that of trastuzumab. You can see that here on the right. The key area to look at is the area in that black box, where you can see in trastuzumab, you have an area of partial binding in blue that flips to critical binding in red.

What we think that means is that our antibody has developed sort of a different hotspot for binding, which enables it to be more functional. So we're able to go from de novo designs, test their functionality, and then come up with an explanation for why we think our de novo designs are more functional. That, to me, is super cool: to be able to do all of that and have that assay development. That's something that really takes advantage of the expertise of our wet lab.

The last thing I want to say about this first case study is that in addition to doing this first HER2 cycle of de novo design, we've also scaled our wet lab, and this is an example of a single run through our wet lab, where we scaled to testing 15 different targets in 10 weeks. So not just HER2, but 15 different antigens. Of those 15, 12 targets were successfully screened in our wet lab, and eight of those 12 generated validated binders. So this really demonstrates our capability to generate diverse data sets quickly, and I think it will really power our innovation pipeline over the next couple of years as we build out, develop, and grow this platform. Okay, cool.

So, the next case study I'm going to talk about is multiparametric optimization, where you have a lead antibody and you want to improve its features along a few different axes. In order to go from a de novo design or a lead antibody to an optimized therapeutic, you first need to develop data to train a model on that specific task. You then use the AI model to create new antibodies, and then you use the wet lab to validate them. And once you've validated the created designs, note that those results are also data to train the model. So we can actually close that flywheel.
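The closed-loop flywheel described here — generate designs, validate in the wet lab, and feed the validated results back in as training data — can be sketched in a few lines. Everything below (the toy model, the stand-in validation step) is a hypothetical illustration of the loop's structure, not Absci's actual system:

```python
import random

# Toy stand-ins for the real components, for illustration only.
def wet_lab_validate(designs):
    # Pretend the wet lab measures a binding score for each design.
    return [(d, random.random()) for d in designs]

class ToyModel:
    def __init__(self):
        self.dataset = []
    def train(self, data):
        # Retrain on everything validated so far (here: just accumulate it).
        self.dataset.extend(data)
    def generate(self, n=8):
        # Pretend to propose n new antibody sequences.
        return [f"design_{len(self.dataset)}_{i}" for i in range(n)]

def flywheel(model, rounds=3):
    """Each validated batch becomes training data for the next round."""
    for _ in range(rounds):
        designs = model.generate()           # AI proposes candidates
        results = wet_lab_validate(designs)  # wet lab measures them
        model.train(results)                 # validated designs close the loop
    return model

model = flywheel(ToyModel())
print(len(model.dataset))  # 24 data points after 3 rounds of 8 designs
```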

In this example, the wet lab assays and technology we're going to use take advantage of our SoluPro cell line, our large-library ACE assay, and our small-library SPR platform. Cool. So in this case study, we're doing multivalent lead co-optimization of a broad-spectrum antibody for COVID. The goal was to re-engineer a therapeutic antibody, Regeneron's SARS-CoV-2 antibody, to bind the alpha, beta, and delta variants. So we want to improve the binding against all three SARS-CoV-2 variants. And you can see the affinity for the parental antibody at the bottom of the table, where the antibody binds wild-type SARS-CoV-2 with a KD of 8.5, where lower is better for affinity.

It binds the alpha variant at 8, the delta variant at 5.4, but it doesn't bind the beta variant very strongly, at 607. And the reason for that is that the beta variant actually mutated the epitope region that the antibody binds. So our engineering task is to create an antibody that will be general to all three variants without losing binding, or, if we can, that improves binding for each of the strains. So how do we do that? Well, first, we need to create the data to train the model, right? To do that, we create an information-rich library for model training, and that's what's shown here in panel one. What we're going to do is mutate the parental antibody.

We introduce three mutations at a time across all the different CDRs, and that enables us to create a diverse data set around the parental antibody, in order to evaluate function and train a model. To give you a sense of the search space that this procedure enables us to explore, we have the figure on the right. If you look at the little yellow box down in the right-hand corner, that's where traditional training sets or traditional antibody engineering would occur.

Using the ACE platform, we're able to explore a data set of hundreds of thousands of sequences in a 10^7 combinatorial space that's determined by the number of mutations we make. We're able to train a model using that set, and our model is then able to search a space that's much larger than that, a 10^13 combinatorial space. So this is what the training data set and pipeline look like. We develop 75,000 data points per variant, where we have those triple mutants and we evaluate the binding against each of the variants.
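To see where numbers of that order come from: a triple-mutant library over, say, 21 mutable CDR positions has roughly 10^7 members. The position count here is an assumption for illustration only; the talk doesn't give the exact figure:

```python
from math import comb

POSITIONS = 21     # assumed number of mutable CDR positions (illustrative)
MUTATIONS = 3      # triple mutants, as described in the talk
ALTERNATIVES = 19  # each mutated position can take any of the 19 other residues

# Ways to choose which positions to mutate, times the residue choices at each.
space = comb(POSITIONS, MUTATIONS) * ALTERNATIVES ** MUTATIONS
print(f"{space:,}")  # 9,122,470 -- on the order of 10^7
```

Allowing more simultaneous mutations grows this count rapidly, which is one way a model's searchable space can reach the 10^13 scale mentioned above.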

We then train a model on that data, and you can see the results of that trained model all the way on the right: on the x-axis, you can compare the ground-truth affinity as determined by ACE, and on the y-axis, you see the AI-predicted ACE score. You can see that there's a really strong correlation, meaning that the model has learned to predict the affinity of these antibodies against the beta variant. The same is true of the alpha and delta variants, as shown by the Pearson correlations on the bottom. Once we have that model trained, we can use it to select new designs that we believe will bind all three variants, using our predictive scores. That's what you'll see here.
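The model-quality check being described is a Pearson correlation between measured and predicted scores; a minimal, dependency-free sketch with synthetic numbers (not Absci's data):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Synthetic "measured vs. predicted" ACE scores, for illustration only.
measured  = [0.10, 0.35, 0.40, 0.62, 0.80, 0.95]
predicted = [0.15, 0.30, 0.45, 0.60, 0.75, 0.90]
print(round(pearson(measured, predicted), 3))  # close to 1 for well-correlated values
```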

So we took that model, generated sequences and scored them, and then selected the binders that our model predicted to work against all three variants. Of the 39 that we tested in our small library using SPR, 79%, or 31 out of 39 of the evaluated predictions, exhibited higher binding affinity than the parental antibody to alpha, beta, and delta. You can see that on the right, and this is from SPR measurements. Most of the points are in the upper right, indicating that they have strong affinity for alpha and beta, and strong affinity for delta and beta. And this is a table summarizing some of our top variants and the case study outcome, where we have demonstrated that using AI, we can perform true multiparametric optimization.

If you just look at that top variant, ABCY001, you can see that for the alpha variant, we've improved the binding 3-fold. For the beta variant, we've improved the binding 37-fold, and for the delta variant, we've improved it 3-fold. So binding to all three variants improved using this AI-guided design. We went from a large, quality data set generated in our wet lab, to model training, to model predictions, to generating the predicted antibodies, and then to validating that our model was correct. So this is a really general pipeline that I think can solve many different problems in this space, and I'll just talk about a few applications of affinity maturation against multiple targets. The first is infectious disease, where we can develop broad-spectrum antibodies with simultaneous binding to multiple viral variants, even as the epitope evolves.
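Fold improvement in binding is just the ratio of parental KD to variant KD (lower KD means tighter binding). The parental values below come from the table discussed earlier; the variant KDs are hypothetical numbers chosen only to reproduce the quoted fold changes:

```python
def fold_improvement(parental_kd, variant_kd):
    """KD ratio: values above 1 mean the variant binds more tightly than the parent."""
    return parental_kd / variant_kd

# Parental KDs from the table (alpha, beta, delta); variant KDs are
# hypothetical, chosen only to illustrate the quoted 3x / 37x / 3x folds.
parental = {"alpha": 8.0, "beta": 607.0, "delta": 5.4}
variant  = {"alpha": 2.7, "beta": 16.4, "delta": 1.8}

for strain in parental:
    print(strain, round(fold_improvement(parental[strain], variant[strain]), 1))
```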

We think that this could be valuable for pandemic preparation, where you can try to predict how a virus might evolve and develop variants that are going to be robust to that evolution. It's useful in preclinical development, where cross-species reactivity, or cross-species binding, is valuable because it improves the speed of development and the success rate. And it's useful in immunology, where having an antibody that binds multiple isoforms of a target can create improved efficacy. So just to wrap up here: at Absci, I believe we are building, and already have, the industry-leading AI drug creation platform for biologics. I think we're going to progress really quickly on antibody engineering because of our strategic investments in our wet lab and AI infrastructure.

Our AI models today explore a large biological space filled with novel variants that potentially have higher functionality. We're able to perform multiparametric optimization to create best-in-class molecules, we can design and validate molecules quickly, and we've scaled our wet-lab/dry-lab workflow to evaluate more than 10 targets in under 10 weeks. And that's it for me; I'll hand it off to Christian.

Sean McClain
Founder and CEO, Absci

So we do have a special guest. In addition to John joining us from NVIDIA, we also have Najat, who is the Chief Data Science Officer and Head of Portfolio Strategy at J&J, to give a few words. I've known Najat for a while, so thank you, Najat. I know you have to go, so I'll give you a couple minutes here.

Thank you.

Najat Khan
Chief Data Science Officer and Global Head of Strategy and Operations, Johnson & Johnson

Thank you, Sean. First of all, thank you to everyone. It's been great to see the progress, both on cycle time and on starting your pipeline, so congratulations, and a big round of applause. So, just a few words. At J&J, we have been on the journey of leveraging AI and data science broadly, from drug discovery to drug development. There are many in the audience, and I'm looking especially for Sandi Peterson, if she's here, who had the foresight early on, many years ago, in terms of the impact it can have, when it wasn't yet a thing, right?

What I would like to share is this: some of the examples we're talking about involve novel biology, being able to cover a broader search area than we can today. The stat that comes to my mind is that if you look at the proteomic data, anywhere from 60% to 80% of targets are not addressable with the therapeutics that exist today. So clearly, that's a massive burning platform in terms of why and how we need to do this better. The second part is that people often ask me, from a data science perspective, about speed and how it's going to make everything faster. I would encourage everyone to really think about the novel insights, which are actually going to have more value, and I think that's primary.

I think the pace will come, but if you think about our industry, with a 10% success rate, it's the probability of success, and enhancing that, that's really going to bend the curve. The third thing: when you think about headwinds that are coming, and I think you had this, Sean, in your slide, around the IRA, it's not just patent busting, but actually having a diverse set of molecules so you can address a lot of different diseases. From a business perspective, that's going to become really important, and I think that's where data science and AI can play a big role. So for us, we have been on this journey for three years. And building on what Andreas said, we really started on the development side, accelerating trials.

We've had examples where it's accelerated programs by 1.5-2.3x across many different programs, enhancing diversity; some of that you'll see come out, and we've had some interviews recently where we talked about those stats. Driving precision medicine, right? Patient stratification, the right patient for the right therapy, multiple examples of that. But the holy grail, to get it right, is to know the root cause that you are trying to solve for. Alzheimer's and Parkinson's are probably each not one disease, and that's probably why it's so challenging to actually solve for these diseases.

So that's where I see the role of AI in target ID, but not just the ID, the stratification too, because precision medicine starts at the root cause, which is understanding what your target is, and that's why I'm really excited about some of the work that you guys are doing with reverse immunology. Immunology and neuroscience are not where oncology is from a precision medicine perspective, and that's driven by the fact that we still don't understand the biology in a crisp and precise way, and I think that's where there's a lot of promise. And the last thing I'll say is that antibody design today is very, very artisanal, and that's fine, but this is an "and" strategy, being able to generate versions that we haven't seen before.

My one word of caution: I know a lot of people talk about how diverse a molecule is, whether it's really different from the starting point. I think that's important, but the most important question is the binding affinity, the multiparameter optimization. Can you actually address immunogenicity, the true challenges that lead to failures of some of our molecules? So I'm glad to see that focus here. All right. Thank you, everyone.

Sean McClain
Founder and CEO, Absci

Thank you.

Christian Stegmann
SVP of Drug Creation, Absci

... Terrific. Thank you. It's great to be here. My name is Christian. By way of background, Andreas has already introduced me, so I'm not going to say too much. I have a background in structural biology and molecular pharmacology. I was previously with a company called Vifor, where I was Head of Research and Non-Clinical Development, and I had the privilege of transforming that organization from iron therapies into kidney disease, and then with CSL. And just in case you wonder, CSL is a great three-letter abbreviation, but it does not stand for Christian Stegmann Labs. In any case, I had the privilege to work on teams that have brought new molecular entities into clinical development.

I really want to highlight that drug discovery, drug creation, is a team sport, and throughout my career, I always enjoyed the opportunity to lead the cross-functional teams that make these successes work and bring novel therapeutics into clinical development. All right. So today I'll talk a little bit about our internal pipeline. I'm really privileged to, for the first time, publicly reveal Absci's internal pipeline. After that, I'll highlight some of the work we've been doing within our Reverse Immunology platform. I hope I'll be able to convince you that this is an exciting technology platform, one with the potential to bring a number of first-in-class assets forward, discover them, and also come up with therapeutic candidates for them.

Okay, so here we go. Drum roll. This is the Absci internal pipeline, and I'm particularly thrilled to reveal these four proprietary assets that we have been working on, focusing on cytokine biology. Why cytokine biology? We view cytokine biology as the first frontier for AI-driven disruption. Absci's pipeline consists of a TL1A antagonist, which we intend to develop for inflammatory bowel disease, as well as three additional targets in the cytokine space. Two are in the broader dermatology space and are to be developed as best-in-class assets, and then we also have one first-in-class asset, to be developed in an immuno-oncology indication.

In addition, we're working on a number of undisclosed early assets, but each named asset that you can see here on this slide has the potential to reach the IND filing stage in 2025. And I think that's a testament to the speed that we can deliver here at Absci by leveraging AI. So on the next slides, I'll share a little bit more about these assets. I'll start with TL1A, which has the potential to become a blockbuster in inflammation and fibrosis. Just a little bit about the science on TL1A.

TL1A is an important cytokine expressed in a number of immune cells, such as macrophages and dendritic cells, and it's an important modulator in the development of mucosal inflammation, enhancing both Th1 and Th17 effector functions. So if you block TL1A, this can lead to attenuation of chronic inflammation while at the same time maintaining the ability to clear pathogenic bacteria. Now, IBD is an enormous market, currently $24 billion worldwide, and it is expected to grow significantly, and this market is particularly dominated by biologics. There is recent clinical data from other companies out there that actually shows clinical remission in ulcerative colitis as well as Crohn's disease.

There was a recent high-value deal announced: Prometheus was acquired by Merck. And just this morning, we didn't have time to update these slides, but there was another big announcement: Teva has struck a deal with Sanofi on their TL1A asset. So I think this is a vindication of this scientific mechanism that has been worked on for many years. And it's also important to highlight that there is clearly potential beyond inflammatory bowel disease. There is clearly a range of immunological indications that are very interesting to address with this mechanism, such as rheumatoid arthritis, atopic dermatitis, lupus, and also a number of fibrotic conditions.

So let me show you a little bit of what we've done on TL1A. Amaro already introduced you to our HER2 case study, and he could also show some very exciting data on SARS-CoV-2. But as promised in the introduction, I would like to share now how AI-driven drug creation has become real, and by real, I mean an actual pipeline asset. We could apply both our de novo and our lead optimization capabilities on TL1A. What is shown here on the right-hand side is a chart very similar to the one Amaro showed you earlier. This is a holdout dataset, and we used our proprietary AI screening technology to train an AI model.

And you can see again an excellent correlation between measured KD scores and predicted KD scores, showing you that the model has actually learned to predict affinities for TL1A binders. Using this model that we trained, we could identify 143 variants with higher affinity than the competitor molecule, by up to an order of magnitude, and at a distance of up to 12 amino acids. We have currently selected 20 of these highly potent and diverse variants, and we are in the final assessment to select the therapeutic candidate. For us, this really represents a major milestone in deploying Absci's AI capabilities on a pipeline asset. So on this slide, you can actually see these 143 binders I was mentioning earlier.

The dashed red line is the benchmark affinity, and you can see that indeed, the AI-predicted binders exceed the benchmark in terms of affinity, and from these, we selected 20 leads. We aim to develop this candidate in moderate to severe ulcerative colitis in adults, and we aim for a dosing interval of up to once quarterly, ultimately delivering a truly best-in-class asset. Let me talk a little bit about our early dermatology portfolio. I want to briefly highlight two assets that we have in here. ABS-201 is being developed for an undisclosed dermatological indication, but one with a significant unmet need, where the current standard of care is really not satisfactory for patients.

Here we aim for a best-in-class profile with once-monthly, or even less frequent, low-volume subcutaneous dosing. And ABS-401 is an asset that has potential in immune-mediated skin conditions such as psoriasis, and here we aim to serve patient populations that are currently poorly addressed. All right, so now I would like to switch gears a little bit. Everything you've heard until now has been what we call the target-based approach to discovery, where, as Zach pointed out earlier, using prior knowledge or existing data, you select a target, and using our AI, we come up with novel binders, which you can then funnel into lead optimization. What I will talk about now is a different approach, which we call the Reverse Immunology approach.

This is really an alternative paradigm, because here we start with a patient sample, and from this patient sample, we isolate and reconstruct patient-derived antibodies, which are then fed into our AI-driven lead optimization engine. This approach enables us not only to discover novel, fully human antibodies, but, more importantly, to uncover novel targets, right? So this is really a platform that allows us to come up with first-in-class mechanisms. In essence, Reverse Immunology is harnessing the human adaptive immune response to identify novel targets and potential therapeutic candidates. So how does this work? We focus on a particular tissue called a tertiary lymphoid structure.

Tertiary lymphoid structures are centers of immune activity that develop in the vicinity of chronically inflamed tissues, which can be tumors, for example. And it's been known for a long time that the presence of these so-called TLSs is associated with longer progression-free survival and better response to immune checkpoint inhibitors. So this already suggests that these centers are potential sources of therapeutically meaningful antibodies. And that drove us to develop a workflow to find these antibodies and de-orphan them. What we do is collect biopsies from patients of interest, perform RNA-seq on those biopsies, and then assemble the immunoglobulin chains in silico.

So we computationally reconstruct the antibodies, then we manufacture and express them, and then we run a de-orphaning campaign to find out what target these reconstructed antibodies bind to. Once we have successfully de-orphaned an antibody, we validate the result using biophysical methods, and we end up with a novel, fully human antibody plus its cognate target, which is really interesting. And I'm going to give you an example of what this actually looks like in practice, because we could deploy this platform to uncover a novel mechanism for tumor immune evasion. Basically, what we did here is successfully de-orphan an antibody that was reconstructed from a patient biopsy.

And you can see here on the right-hand side that this particular antibody, which is now our asset ABS-301, binds potently and specifically to a target that has potential in immuno-oncology. You can see from these curves that we have high affinity, with a dissociation constant of 26.5 nanomolar, indicating that this is indeed a meaningful antibody delivering affinity to a novel target. We then went ahead and validated this antibody further: we took primary human cells and asked the question, does this novel antibody inhibit this immune evasion cytokine in primary human cells? And lo and behold, we could demonstrate that it does.

So you can see that indeed, this antibody does have an effect in primary human cells. Our current hypothesis is that tumors upregulate the target of ABS-301 as an immune evasion strategy, which potentially limits immune infiltration and turns tumors immunologically cold. And obviously, the consequence is that if that holds true, then ABS-301 treatment in cancer may release this immune suppression and permit immune cells to infiltrate the tumor, potentially allowing for a robust antitumor response. So we are very excited about this mechanism, really a first-in-class mechanism, and we think it has broad potential in immuno-oncology. A lot of you are familiar with checkpoint inhibitors.

I think it's obvious that the existing immune checkpoint inhibitors are leaving a lot of unmet medical need out there, in particular in these four immuno-oncology indications: non-small cell lung cancer, melanoma, head and neck, and gastroesophageal cancer. It's known that a significant percentage of patients have immune checkpoint inhibitor resistance, and these are, obviously, also significant global markets in oncology. So we are currently profiling ABS-301 comprehensively across a number of oncological indications, and we think there is significant potential here. All right, so to wrap up, I want to conclude with our pipeline slide, and I hope I could convince you that Absci's internal pipeline is promising.

Again, we selected cytokine biology as the first frontier for AI-driven drug creation. We have selected targets that are biologically and technically highly de-risked. In addition, we think these targets also have the ability to create significant value inflection in early-stage clinical trials, such as, for example, using biomarkers to deliver proof of mechanism in a phase I clinical trial. With this, thank you for your attention, and I'll hand it back to Sean.

Sean McClain
Founder and CEO, Absci

Quickly wrapping up here, I just want to thank you all for attending Absci's inaugural R&D Day. First, thanks to Sandi, John, and Najat for coming to speak on our behalf, and thanks to the rest of the team for all the work they've put into this.

Just to leave you with a few closing thoughts: I hope that you've seen that the technology we're developing is an approach that can be very widely applicable, a foundational model, as John and Amaro laid out, where we can actually use AI to start to develop better biologics, best-in-class and first-in-class, as seen in our own pipeline, and this is really what's going to help accelerate these drugs to patients much faster. This is the future of healthcare and where we are headed. If you look at what is needed, it's not only the technology, it's a team that knows the drug development, the drug discovery, the AI, the engineering. Then it takes the technology.

You have to have the scalable wet lab technology that gets you the data. Additionally, you need compute. With all of these, you're ultimately able to start to design drugs in silico. To get back to what John was saying: a lot of people thought, and I think still think, this is science fiction, that this can't be real. But it is. I hope that you've seen today that these are early proof points that generative AI is making an impact. Do we still have a lot longer to go, a lot more technology development, a lot more investment needed to see the big vision through? Absolutely. But are we seeing early indications of this making a huge impact in biologics drug discovery? Absolutely.

And so with that, I'll open up for questions.

Alex Khan
VP of Finance and Investor Relations, Absci

All right, we'll start the Q&A now. I ask that everyone please limit themselves to one question and a follow-up, and, for those on the webcast, please announce your name and affiliation so everyone knows who's asking.

Speaker 17

This is R. K. from Hitachi Vantara. Sean, thank you very much for doing this.

Obviously, the technology is there, you guys have it, that's very clear. But when you're talking about molecules such as ABS-301, where we don't know what the target is and you're actually trying to identify the target now, how much has the regulatory world started accepting such science? And how easy or difficult do you think it will be when you get in front of the FDA?

Sean McClain
Founder and CEO, Absci

I'll speak at a high level, but I think Christian and Andreas are more qualified to answer this. We're developing assets in a way that you'd put a standard IND package together, just like you would with any other molecule. So we don't see any additional regulatory hurdles from designing a molecule with AI versus designing it with an immunization approach. But Christian, Andreas?

Andreas Busch
Chief Innovation Officer, Absci

Yeah, I think there is fundamentally going to be no difference when it comes to how regulatory agencies look at our technologies versus any other technologies. At the end of the day, they just look at the profile the development asset has shown in clinical trials. What we try to make sure is that we deliver a better antibody into clinical development than others, because we have the chance of starting with a hopefully better-validated target and moving on to a better-optimized antibody. But at the end of the day, you just have to show in clinical trials exactly what other antibodies, generated by traditional methods, will have to show, which is efficacy, safety, and so forth.

Regulatory agencies will not differentiate, at that point, based on what technologies were used to get to those assets.

Speaker 17

Thanks.

Ethan Markowski
Equity Research Associate, Needham & Company

This is Ethan from Needham & Company. Thank you for the presentation. You kind of touched on it a little bit, but do you guys think there's an increased immunogenicity risk at all for antibodies that are created completely synthetically? Just curious to hear your thoughts on that. Thank you.

Christian Stegmann
SVP of Drug Creation, Absci

Yeah, it's a great question. Obviously, it's known that immunogenicity can occur, and you can train an AI model to actually avoid it. So we are looking into exactly that question: we are using data to train our model to limit immunogenicity. There are several ways you can do this; we have previously published a parameter called naturalness, which indeed seems to correlate with low immunogenicity. Amaro, do you want to comment on that as well?

Amaro Taylor-Weiner
Chief AI Officer, Absci

Yeah, the thing that I would add is that using generative AI, we're able to control what we get, so we're able to perform that multiparametric optimization. If you use a traditional animal campaign, you get whatever antibodies the mouse creates; those are more likely to be immunogenic and then need to be engineered. Whereas when we develop our initial hit or our initial lead candidate, we can control that immunogenicity from the start using AI.

Ethan Markowski
Equity Research Associate, Needham & Company

Thank you.

Sean McClain
Founder and CEO, Absci

Robin. Or-

Speaker 18

Thanks. Robin from Truist. Just updated thoughts, given that you're going into this IND portion: what kind of additional technology or space do you need to get to the finish line for ABS-101? And then any updated thoughts on when you might start partnering these assets? Will you consistently take them through IND, or would you partner some of them earlier?

Sean McClain
Founder and CEO, Absci

Yeah, I can speak to this at a high level, and Zach can go into it. One of the things that we're really excited about, that I think the AI is opening up for us, is kind of a new business model. Instead of partnering at the target phase, you can start partnering at the candidate phase or the IND phase much more easily, because you're able to get there in a shorter amount of time with a lower cost of capital. And we know deal economics are much better in the post-candidate phase. So we are going to be extremely opportunistic when it makes sense.

If it makes sense to partner at the candidate stage, we'll do that. But if we really do believe in a target, and the capital is there, and we see that there are bigger inflection points in taking it into a phase I or a phase II proof of concept and selling it at that point, we'll do so. But we will look to partner these assets anywhere from candidate all the way through phase II proof of concept, and that applies to all four of our targets.

Dan Arias
Managing Director, Stifel

... Sean, Dan Arias from Stifel. The business is clearly evolving here. As we think about this evolution, what is the optimal mix, or what should we think about in terms of resource allocation and overall project work, when it comes to internally developed programs versus your partnered drug creation programs?

Sean McClain
Founder and CEO, Absci

Yeah.

Dan Arias
Managing Director, Stifel

Essentially, how much of the business can be split between the two ideas?

Sean McClain
Founder and CEO, Absci

Absolutely. Zach, do you want to talk about just, like, the efficiencies that we're seeing over time and how we look at that?

Zach Jonasson
CFO and Chief Business Officer, Absci

Sure. It's a great question, and something that we are continuously evaluating. I think the most important point to make is that the drug creation piece of this applies to everything, whether it's an internal program or our current partnered programs, which are focused on the drug creation phase. So we're leveraging that infrastructure, which is already capitalized. The big question is: how far do we take the assets? And as Sean was pointing out, that's a decision that's really made opportunistically. We also apply a lens of which candidates we would feel the most capable of taking into the clinic. So I hope that gets to your question. As we look at the resources required to go past the candidate stage, we are selective.

I wouldn't suggest that the plan is to take every one of these all the way into a phase II. That's not the plan. The plan is to really start exploring partnership opportunities around the candidate phase, and then we'll make the right decision for each asset based on its merits and the potential deal economics.

Dan Arias
Managing Director, Stifel

Okay. Maybe just as a follow-up, and getting to this point: prior to your decision to go internal with some of these development programs, you had this active program pipeline that was progressing, and there was a bar chart with a higher number of active programs the farther out you went. Ten this year is still your goal. Do you envision a higher number next year for active programs? Can you talk to your confidence in that? And then the 10 this year is interesting in the sense that you, I believe, have two right now, and you're still confident in getting to 10, which obviously equates to a new program announced every week and a half or so between now and the end of the year.

Is it just a matter of having some press releases ready to go, and we'll hear about them shortly?

Zach Jonasson
CFO and Chief Business Officer, Absci

On the second question, I would just say stay tuned. As I mentioned in my talk, we have a very healthy pipeline, and I think we're excited about where we're headed there. And on the 2024 question, we're not ready to give guidance yet. As you can imagine, we'll be thinking about what guidance we'll give toward the end of the year, and we'll also be focusing a bit on what guidance we want to give around internal program metrics.

Dan Arias
Managing Director, Stifel

You will be giving some guidance in 2024 during the year?

Zach Jonasson
CFO and Chief Business Officer, Absci

The plan is to do that. I don't have a date for you just yet, but we are working on kind of what we'd like to communicate to the street.

Sean McClain
Founder and CEO, Absci

Yeah, and I will say also, one of the things that we are excited about is actually using these assets to seed a partnership: to not only partner around the asset itself, but to have a multi-program deal come in that is seeded by one of our own pipeline assets. And I will say, we do have two active programs, we are reiterating guidance for 10, and we have extreme confidence in what is going to be announced between now and the end of the year.

Silvan Türkcan
Director, JMP Securities

Hi, Silvan Türkcan, JMP Securities. Thank you so much for the presentations today. I wanted to ask if you could provide a little more detail about your capabilities for multidimensional optimization on the wet lab side. You have your ACE Assay, which is obviously focused on protein-protein interactions. Then you just mentioned that you're using some training set for tolerability. Is that general data? Do you have any plans for specific assays that will provide a training set for tolerability, or any other modalities or data sets that will be unique to your platform? Thank you.

Christian Stegmann
SVP of Drug Creation, Absci

Yeah, it's a great question. I mean, we made this point throughout today that data is crucial, right? And obviously, our ACE Assay indeed delivers affinity data, so it's a fair question to ask what other proprietary data we have, for example, when it comes to multiparametric optimization, immunogenicity, et cetera. We have several proprietary internal assays. Maybe I'll just highlight one very interesting application that plays into multiparametric optimization: pH-dependent binding. We've optimized the ACE Assay for pH-dependent binding, and what that allows you to do is come up with a so-called sweeping antibody.

That is an antibody that allows a much lower dose to actually sweep the antigen out of the plasma. That is a technology that we have continued to develop, and we are building on the ACE Assay here, and that is a technology that you simply cannot deliver otherwise, for example, using phage display or mouse immunization. So we are continually improving our tech stack to go beyond pure affinity.

Sean McClain
Founder and CEO, Absci

Yeah, and I'll say another data source that we are leveraging is our hospital partnerships that we do have for the reverse immunology platform. We are getting those antibody sequences and leveraging that data as well, not only for reverse immunology, but other applications as well.

Andreas Busch
Chief Innovation Officer, Absci

...And maybe one more aspect, when it comes to assays and to preclinical characterization of our antibodies: there are, of course, several in-house capabilities that we have established. It's not just different types of binding assays that we're working on, like the ACE Assay, which you've heard a lot about, but of course also SPR. We also have functional assays established in our laboratories.

But always keep in mind, the future of our hybrid model will be to have partnerships in antibody optimization, which means we will also always have the aspiration to work together very intimately with our partners and to have them share their expertise, for example, in in vivo pharmacology. That is not something we're necessarily building up ourselves; it will be done in partnerships, bringing assets forward. I'm not sure, Jens, whether you want to talk a bit more about the assay technologies we want to have? That's the guy with the E. coli between his fingers.

Silvan Türkcan
Director, JMP Securities

Yeah. Thanks. Thanks.

Ethan Markowski
Equity Research Associate, Needham Company

Don't, don't, don't shake his hand.

Jens Plassmeier
SVP of Synthetic Biology, Absci

I wash them quite frequently. Thank you. Yeah, sure. I just wanted to remind you that we come from a development background. We had developed a strain that was able to make antibodies before, right? So we have deep understanding and technology development on the developability side, and that was in-house even before we switched to the AI side. So we have established assays for developability, we're looking at immunogenicity, and we have everything completely integrated, even with the cell-based assays that we have in-house now. All of this was developed before we even made the jump into the AI age here.

And of course, for several of these, we're also working on really high-throughput versions that allow us to feed that data back to the AI team, to be able to train on those datasets and achieve the multiparametric optimization that we're talking about here. Thank you.

Ethan Markowski
Equity Research Associate, Needham Company

We have time now for two more questions.

Yuan Zhi
Senior Research Analyst, B. Riley

Great event. Thanks for inviting us, and thank you for taking our questions. This is Yuan from B. Riley. I have one quick question and a follow-up. First, maybe Sean or Zach, can you help us break down how you reduce the cost per IND from, you know, $30-$40 million to about $15 million? Does it mean shorter time durations to get to the IND, or does it mean running fewer assays? Then I have a follow-up.

Sean McClain
Founder and CEO, Absci

Yeah, absolutely. So if you look at breaking down the IND cost, there's obviously the cost to get to a candidate, which we're spending in-house, and then the cost of your IND-enabling studies. And that's how we get to the total of that $14 million and that 18-24 months. We haven't broken out at this point in time what our own internal costs are versus IND-enabling studies. But roughly, that's where we see it. And as we continue to increase overall efficiencies internally, we expect to decrease that time to candidacy.

You know, right now it's about 6-12 months to get to a drug candidate, and we do see those overall costs continuing to decrease. But I will say a substantial amount of the $14 million is IND-enabling studies. So getting to an actual drug candidate isn't too expensive for us. Again, most of the cost is in IND-enabling studies.

Speaker 16

Hey, Sean. Gar from Berenberg here. Just a quick one from me. You know, I feel like you're seeing a lot of prominent players in the drug discovery space, particularly on the small molecule side, start to, you know, get into the biologic space through generative AI, particularly. You know, how does that change the competitive landscape in the biologic space, if at all, in your mind? And then, you know, way down the line, do you see Absci potentially getting into the small molecule space?

Sean McClain
Founder and CEO, Absci

So we don't see ourselves getting into the small molecule space at all. We continue to stay laser focused on biologics, and in particular antibodies and antibody derivatives. And look, we're focused on our own technology. I think there are other players out there, but if you look at how advanced others are compared to where we're at, I think we have a very strong position, not only on the technology side, but in actually showing from a pipeline perspective what this is going to deliver in the clinic.

That's one of the big reasons we decided to build out our own pipeline: we knew these case studies are really important, not only for partners, but for investors as well. We want to show that you can take this technology, actually create a potential best-in-class or first-in-class, and see those results in the clinic, and we're sprinting as fast as possible. I mean, I'm super proud of and impressed by the team. We started building out this internal portfolio at the beginning of this year, when Andreas and Christian and Christina came on board, and all four of these programs have the potential to have an IND in 2025.

And so it just goes to show how fast a pace of innovation we can sustain when you have that integration with the AI, and I think that's going to be the winning combo at the end of the day. And I think there's one more out-

Alex Khan
VP of Finance and Investor Relations, Absci

Yeah, we have time for one final question.

George Farmer
Managing Director, Scotiabank

... Great, thanks for squeezing me in. George Farmer from Scotiabank. Perhaps it's just semantics, but it seems to me, given the power of this platform, that any of the predictive antibodies that come out should really just jump to candidate phase rather than lead. If this platform is as powerful as you make it sound, what happens during that lead-to-candidate phase of the process? And then on another note, I'm just very interested in the 301 molecule that you have and its target. Has this target been described anywhere in the literature, and can you say anything more about it, you know, from your perusal of patent filings, et cetera?

Sean McClain
Founder and CEO, Absci

Yeah, I can speak to the patent side and then hand it over to Christian and Andreas. But no, this is a true first-in-class. This is a novel target that no one has discovered at this point. We haven't seen any literature on this particular target, and from an IP standpoint, it's wide open. We're very excited about the potential of this IO target.

Christian Stegmann
SVP of Drug Creation, Absci

Yeah, on your question of what happens between lead and candidate: it's a great question. Right now, to be very frank, we do perform, for example, pharmacokinetic experiments. But this is simply a de-risking step, right? So before we actually enter into IND-enabling studies, for which we need to follow regulatory guidance, we do perform certain de-risking experiments. Now, as we scale and as our AI models get better and better, we will see efficiency gains here too.

Alex Khan
VP of Finance and Investor Relations, Absci

All right, that concludes the program. Thank you, everyone, again, here on the webcast for your time today. A replay will be available on our website later today. At this time, we invite everyone in the room to the room next door for a reception in the Morgan room. This concludes our presentation. Thank you again, everyone.
