Good morning, everyone. I'm Jaren Madden. I'm the Senior Vice President of Investor Relations and Corporate Affairs here at Schrödinger. I'd like to welcome everyone who's in person with us today, as well as everyone who is listening in virtually to Schrödinger's first platform day. Before we begin, I'd like to share some really exciting news. Today, Newsweek issued its 2022 list of America's top 100 most loved companies, and Schrödinger is number 21 on the list. I'm really proud of the company and the culture that employees have worked hard to build, and I'm pleased to be highlighting our progress with you today. Also, a logistical note for our webcast participants. There's a box in the webcast window where you can ask a question. I will collate submitted questions.
We'll respond to as many of them as we can during the Q&A at the end of the formal remarks. If your question is not answered, please feel free to reach out to ir@schrodinger.com. Now, a few other logistical notes. Before we begin the formal presentation, please note that during today's discussion, our team will make statements that are forward-looking and made pursuant to the Safe Harbor Provisions of the Private Securities Litigation Reform Act of 1995, including, without limitation, statements related to our computational platform, software and licensing business, as well as our collaborative, partnered, and proprietary programs, including potentially favorable profiles of the molecules, as well as the timing of IND submissions and the initiation of clinical trials.
These forward-looking statements reflect our current views about our plans, intentions, expectations, strategies, and prospects, which are based on information currently available to us and on assumptions we have made. Our actual results may differ materially from what we project today due to a number of important factors, including those described in our SEC filings, including our most recent 10-Q that we filed in August. These statements represent our views only as of today. We caution you we may not update them in the future. With those logistics out of the way, I'd like to turn the rest of the program over to Ramy Farid, our CEO, and the rest of the Schrödinger team.
Thanks a lot, Jaren. Welcome, everybody. We're really excited to see you all here, and welcome also to everybody who's on the webcast. We're really excited today to talk to you about the company. Our focus, as the name of the event implies, will be on our platform. There's a lot of excitement around the role that computation plays in nearly every aspect of our lives, and in particular in drug discovery and materials design. Of course, with all that excitement often comes a lot of confusion and hype. That's the kind of thing that inevitably happens.
We think you should demand from any company that claims to have developed a transformative platform that it actually talk about the platform: what it does, how it works. It should also, of course, provide validation of the platform. Does it really do what the company claims it does? Our goal today is to tell you about our platform, but most importantly, you're going to see that a lot of the focus is on validation of the platform. That's going to be where we focus.
We're going to talk about the platform, but we're also going to tell you about business outlook and opportunities. Let me tell you who you're going to hear from. After I give a few opening remarks, you'll hear from Robert Abel, our Chief Computational Scientist. You'll also hear from Karen Akinsanya, our President of R&D, Therapeutics, and from Hamish Wright, our VP of Translational Science. Finally, you'll hear from Geoffrey Porges, our Chief Financial Officer. Hopefully, it's clear what the goal of this platform day is. Let me start.
Before I get into talking about the platform, let me give you a few slides as an overview of the company. I think you're all pretty familiar with this, but it's good to set the stage. We've developed a computational technology platform that, as I think we will show very clearly today, is transforming drug discovery and materials design. We're going to focus on drug discovery today, though. We're going to show you that the platform is delivering on its promise: higher-quality drugs and drug candidates, faster and at lower cost. You're going to see some really clear data demonstrating that we've also improved the probability of success compared to traditional methods.
As you know, we license our software to pharmaceutical companies, biotech companies, commercial and industrial companies, materials companies, and academic labs worldwide, in both life sciences and materials science. As you'll hear about today, we also form collaborations in drug design, which will be the focus today, but also in materials design. You'll also hear today about our proprietary pipeline, including programs at the discovery stage and the development stage. In some cases, those programs are wholly owned, and in some cases, they're already partnered. One more slide to give you a sense of the company, which I'll talk a little bit about.
We have a long track record, 30+ years of scientific innovation in computational chemistry, and a little over 800 employees. As you can see, a very large fraction of these employees have advanced degrees, PhDs. Half the company is dedicated to research and development, and that is a real, serious commitment to advancing R&D. As you can imagine, these technologies are pretty advanced, so we've made a serious commitment to supporting and educating our customers, starting at the really early stages, even with students. There's a serious commitment to support and education around these advanced technologies. This is a bit of a detail, but we do have quite a few releases every year.
We have four releases a year, in which we introduce new functionality and improvements to the software. You're going to hear a lot about that today. Before we get into the details of the platform, I think it's useful to give a little bit of an overview of what it is that we're doing. I started off by saying there's a lot of confusion about what space we're really in. You've heard a lot of different things about drug discovery: it's hard, it's prone to failure, and it's expensive. But it's not so clear why that is, really.
I'm going to spend a little time explaining why designing drugs is so unbelievably hard, why it takes so long, why it's so expensive, and most importantly, why it's subject to such high failure rates. The primary reason is that the goal of drug discovery is to identify a molecule that possesses a whole set of properties; you see them listed here. To be a drug, a molecule obviously has to be potent: it has to bind to the target you're going after. It has to be selective; in other words, it really shouldn't bind to any other proteins, because that can cause adverse effects. And for it to actually be a drug, it has to be soluble in water.
It has to be bioavailable; if it's not bioavailable, it isn't going to be a drug. It's not going to do what it's supposed to do, and so on. You can see the list: permeable and so on. Now here's the challenge, though, and here's the problem. Physics is not on our side in this particular case. These properties are all fighting each other. When you have a molecule that's potent, it's not soluble. That's just physics; there's not much we can do about that. When you make a molecule more soluble, it won't be permeable. It won't get through the cell membrane, and if it doesn't get through the cell membrane, it's not a drug. All of these properties are complicated in and of themselves, but they're also fighting each other.
This is why you may often hear that drug discovery is like a whack-a-mole problem. Let me show you the consequence of the fact that these properties are anticorrelated. Here's a representation of a typical drug discovery project. In the first column is a molecule; you can see its representation there. That molecule, let's say, came from a six-month or one-year effort in high-throughput screening. You screened hundreds of thousands of molecules and finally found one that's potent. Now you need to make it into a drug, which means you need to start making changes to it so that it actually has all those other properties. Here's how a typical project will go.
You make a change to the molecule because you're trying to make it selective, trying to make it not bind to all those other proteins in the human body. You make a one-atom change, that's the blue atom in the second column, and you're excited: you've made it selective. Invariably, what happens is that now the compound's not potent anymore. So you make another change; that's the third column. Now you're getting excited: you've got a potent molecule, and you have a selective molecule. Now let's try to fix solubility, the next property. You make another change, and you see you've made the molecule soluble. It's actually not that hard to make a molecule soluble: you add charge, you add some polar atoms.
Again, invariably, what happens is you've now made the molecule not potent. You keep going, and you see what happens with the fourth molecule. By the way, I'm showing it as one molecule; usually, hundreds of molecules are made to try to solve each problem. I just couldn't represent that here. You keep making molecules, and you see what's happening; this is very representative of a typical drug discovery project. With the fifth molecule, you finally identify a molecule that's both potent and soluble, but now it's not selective. If you look at this example, with the last molecule you're back where you were in the third step. That's what we mean when we say whack-a-mole, and this really happens. Here's the result.
Thousands, and sometimes tens of thousands, of molecules get made in the lab. That takes five to 10 years and tens of millions of dollars. And as you can imagine, given how hard this problem is and the anticorrelated nature of these properties, the large majority of programs that start with that potent molecule don't achieve the end goal of filing an IND. You can see that two-thirds of programs fail to deliver an IND, and it's because of this whack-a-mole problem. So what's our vision for the future? It should now start to seem kind of obvious. We need to do two things: compute the properties of molecules, all of them, with high accuracy, and do it on a large enough scale that we can test, without getting into the numbers right now, a very, very large number. Remember, you're looking for a needle in a haystack, so to speak; sorry to use that cliché. That's the whole idea. It's very, very hard to find a molecule that meets all these properties. So what do you have to do? If you can compute all the properties, and you can do it on a very large scale, then you can identify that one almost magical molecule that somehow balances all of these anticorrelated properties. That's what our focus is on: developing methods that can predict these properties and do it on a large enough scale that we can solve this multiparameter optimization, or whack-a-mole, problem. How are we going to do that?
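The "needle in a haystack" search just described can be sketched as a simple filter. This is a minimal toy illustration; the property names, 0-1 scores, and threshold below are all made up, and the real difficulty, accurately predicting each property, is not modeled here at all.

```python
# Toy sketch of multiparameter optimization as a filter: score many
# candidates on several anticorrelated properties and keep only those
# that clear every bar at once. All numbers are invented.

candidates = [
    # (name, potency, solubility, selectivity, permeability)
    ("mol_A", 0.9, 0.2, 0.8, 0.7),  # potent but poorly soluble
    ("mol_B", 0.3, 0.9, 0.6, 0.7),  # soluble but not potent
    ("mol_C", 0.8, 0.7, 0.9, 0.8),  # the rare one that balances everything
]

THRESHOLD = 0.6  # a molecule must clear every property bar simultaneously

def passes_all(mol):
    """True only if every predicted property meets the threshold."""
    _, *properties = mol
    return all(p >= THRESHOLD for p in properties)

winners = [mol[0] for mol in candidates if passes_all(mol)]
print(winners)  # only mol_C clears every threshold
```

The point of the sketch is the scale argument: the filter itself is trivial, so the value lies entirely in computing the property scores accurately for a very large candidate set.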
At a very high level, there are basically two approaches. One is machine learning, which is knowledge-based. You always hear this referred to as AI, because it just sounds cooler to call it AI, but it's machine learning; in computational chemistry, when people say AI, they mean machine learning. There's a lot of excitement about it because of what we've seen in our almost everyday lives. I don't know how many of you are playing chess and Go in your everyday lives, but you get the idea. You hear a lot about the extraordinary things AI is doing in playing games, in image processing, and obviously in self-driving cars. Some of you may even have some of that technology in your own car.
The question is, can this be used to design drugs? I'm going to get to that. The other approach is completely, totally different; I can't emphasize that enough. It is not in any way knowledge-based machine learning. It is developing what we call rigorous, first-principles, physics-based methods. I'm going to show you in the next slide that this requires a really deep understanding of the underlying physics that governs these properties: potency, solubility, selectivity, permeability, and so on. I'm sorry to say it a third time, but these are totally different methods, and I'm going to show you in a second what I mean by that. Before I get into this, I think it's really helpful to show you something first.
I'm going to show you a simulation of a molecule binding to a protein; I'll talk you through it in a second. This should give you a sense of what we mean when we say things like understanding the physics that underlies these complex properties. The green molecule that's jiggling around is the drug candidate. Those things that look like birds are water molecules, of course. The purple surface is a representation of the protein; underneath that surface are tens of thousands, even hundreds of thousands, of atoms. If I showed them all, you wouldn't be able to see anything, so we just represent the protein as a surface. You can see the pocket there, and you see the molecule coming in and binding.
This is what happens when a molecule binds to a protein: it comes out of water and binds to the protein. These water molecules have to rearrange. The water molecules that were in the pocket of the protein obviously have to evacuate the pocket so that the molecule can fit in. And with all that jiggling around, eventually the protein and the molecule have to adopt a certain shape, which we call a conformation, to be able to accommodate each other. The physics behind this process is really complicated. If we're going to try to predict, using first-principles methods, the binding of a molecule to a protein in this case, that's pretty complicated. Again, we're going to talk about that. Okay, now let me spend a few minutes showing you pictures of cats.
Now I'm going to talk about machine learning, and cats are a very useful way of doing that. What is machine learning? I think it's really important to understand that in order to figure out how we're going to use these different methods, machine learning and physics-based methods. Here's what machine learning is. I think you all know this, but it's useful to show. With machine learning, you train a model. In this case, we're developing a machine learning model for detecting cats in photographs, so we obviously have to train it on a bunch of pictures of cats.
If you have enough variation, a big enough training set of pictures of cats, then when you try to predict whether the thing you're looking at, which at the bottom there is another cat that obviously isn't exactly like the ones above, is a cat, the model works pretty well, as long as you have a good enough representation of every type of cat. But what happens when you show the model an animal it hasn't been trained on? The machine learning model knows nothing about dogs; it's only been trained on cats. It will probably mischaracterize this dog as a cat. That's the limit. The challenge of machine learning is developing a training set that's representative of the thing you're trying to predict. That's pretty clear, I think.
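The training-set limitation just described can be shown with a toy classifier. This is a minimal sketch, not how real image models work: the 2-D "features" are invented stand-ins, and a nearest-neighbor rule stands in for a trained model. The failure mode, though, is the same one Ramy describes.

```python
# A toy model "trained" only on cats: with no other class in the training
# set, it has no choice but to call everything a cat, including a dog
# it has never seen. Feature vectors are hypothetical.

import math

training_set = [
    ((1.0, 2.0), "cat"),
    ((1.2, 1.8), "cat"),
    ((0.9, 2.2), "cat"),
]

def predict(features):
    """Nearest-neighbor prediction: label with the closest training example."""
    nearest = min(training_set, key=lambda ex: math.dist(ex[0], features))
    return nearest[1]

# Interpolation: a new cat, similar to the training cats, is labeled correctly.
print(predict((1.1, 2.0)))  # "cat"

# Extrapolation: a dog, far outside the training data, is still called a cat.
print(predict((8.0, 9.0)))  # "cat"
```

The second prediction is confidently wrong for exactly the reason given above: no training set can represent what it has never contained.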
Now let's consider applying this to the design of molecules. I'm about to show you a bunch of pictures of a molecule that's pretty complicated, but instead of showing the whole molecule every time, I'm going to represent the complicated part of the molecule with a yellow sphere and just focus on the group on the right. That's all this slide is for, so you're familiar with what I'm about to show. Okay, now let me advance. It looks like one of the molecules is missing on the right, but that's okay; I'll explain it to you. This is the training set, and the training set is two molecules.
One molecule has an atom, the green atom you see there. The other molecule, the one that's missing from this picture, looks a lot like it; it just has a slightly bigger group in place of that green atom. That's the training set. Now you present the model with a new molecule, and you'll have to use your imagination here: its group is somewhere between the size of the small green atom and the big group on the right. This will probably work pretty well. This is interpolation.
In other words, if you think of those two training molecules, the one you can't see and the one you can, as two cats, then this new molecule kind of looks like a cat, and it'll probably work pretty well. The model can correctly predict that the molecule is potent. Okay, now let's present it with another new molecule. You tell me: does the molecule on the right look similar to the molecule in the training set? I'm pretty sure most of you would say, "Well, obviously it's pretty similar. All you did was move the atom down one position." It's actually the same atoms, just a slight variation. It turns out that is a profoundly different molecule, even though the two look almost the same.
In fact, the machine learning model here would incorrectly predict that this molecule is potent. The reason, by the way, goes back to that movie we just looked at. There are subtle differences in the water structure around these two molecules: the one with the atom in what's called the para position, and the one with the atom in the meta position. They turn out to be completely different species, and all we did was move one atom. Now, if you think about the diversity of chemical space, you would need a training set way bigger than every species of animal on the planet to capture this complexity. Almost every molecule that you make is actually a different species; in chemistry, it's called a chemotype.
The training sets that would be required to capture all of this complexity are impossibly large. It's not the same problem as image processing or, for example, detecting a particular animal in a photograph. What you're looking at here is actually just a representation of the movie you just saw; it's the same thing, just a different way of looking at it. Here's what you have to be able to model when you're using physics-based methods. You have to model the molecule in water; that's on the left. The protein in water; that's also on the left, at the bottom. You have to model what happens when these molecules adopt the conformations they need in order to come together, and what happens when the water molecules leave.
When these things bind, obviously the waters that used to be around the molecule aren't there anymore. What happens when they come together? You have to be able to understand every one of these steps to accurately predict these properties from first principles. We already established that machine learning models alone can't do this, because you cannot build a training set big enough to capture all of this complexity. By the way, the physics-based methods in this particular case accurately predict that this molecule isn't potent; it has an IC50 of 2.2 micromolar. I think that's a very exciting thing to think about: this allows us to go beyond the limits of current knowledge.
With these physics-based methods, we can solve these problems using first principles, without relying on what we already know. But there's no free lunch. For these physics-based methods, and I'm going to focus on the right side of the slide, I just told you they don't need a training set; that's pretty cool. You can extrapolate into new chemical space; that's really exciting. They're accurate; that's fantastic. But it comes at a cost: these are computationally expensive, slow calculations. Machine learning does have some very interesting advantages. As I showed, machine learning models are very effective at interpolation. They're also incredibly fast, and they can handle very large data sets. But as I just said, they require essentially impossibly large training sets, and they can't extrapolate.
This is a good point to hand it over to Robert. Robert's going to finish the story and tell you how we are integrating these two approaches so that each one covers the other's weaknesses.
All right. Thank you, Ramy. As Ramy highlighted, there are really two complementary approaches that have historically been pursued to predict the properties of drug-like molecules: machine learning technologies and physics-based technologies. They're really very different. Where Schrödinger has been breaking new ground is in developing integrated approaches that combine physics-based simulation and machine learning, deploying methods to advance drug discovery projects that maintain the accuracy of physics-based simulations while having the computational efficiency and addressable scale of machine learning.
Our ability to pursue this has really been driven by the company's long track record of scientific innovation, going all the way back to 1994, when we figured out how to speed up quantum mechanical methods while maintaining their accuracy, such that they could be applied to problems of interest in drug discovery. That was followed by advances related to molecular mechanics and protein structure refinement, and eventually, in 2012, by our development of a molecular mechanics force field, the underlying technology that describes all of those molecular interactions Ramy was mentioning, with a comprehensive description of medicinally interesting chemical space.
Once we were able to combine our advances in force fields, molecular dynamics, and molecular mechanics, we were able to build out, in 2016, a free energy calculation methodology that captures all of these things in a self-consistent way and can be used to accurately model potency, selectivity, and solubility. In 2019, we integrated this free energy calculation approach with modern machine learning techniques, in particular active learning, to allow the accuracy of those free energy calculations to be applied to very large sets of molecules. More recently, we have extended this into problems related to protein refinement, so that we can apply these technologies to ever more interesting targets of medical interest. We're very proud of our publication track record.
We publish all of these advances so people can inspect the details of what we're doing. This is just a representative set of publications that have been highly cited or particularly impactful. Again, we're very committed to our publication track record, and I would highlight here, for instance, our work on the Glide docking program, which has been one of the most highly cited references in the history of the Journal of Medicinal Chemistry. I would also highlight the middle reference, our introduction of the first free energy calculation protocol that included a modern force field with broad coverage of medicinally interesting chemical space, which allows for this accurate property prediction.
More recently, there's the work we've been doing to combine these free energy calculation methods with the addressable scale provided by cloud computing and the integration of modern machine learning methods. Side by side with these scientific innovations, we've been building out the computational platform, which gives life to these innovations and their ability to advance drug discovery projects. If we go all the way back to around the year 2000, the major focus was supporting virtual screening, which is a way to support hit identification so that we can initiate small molecule drug discovery projects. It was obvious we needed to do a lot more work.
By 2010, we had broadly built out a whole suite of functionality that allowed us to support not just hit identification, but also target validation and tractability assessment, through improved protein refinement and druggability assessment of putative small molecule binding sites. By the modern day, we had built out a broad suite of functionality that allows us to support target validation efforts and hit identification, and also to really support lead optimization. That's where the whack-a-mole problems Ramy is referencing come to the fore: we need to accurately model potency, selectivity, solubility, permeability, and these other properties to find the one molecule the project team is hunting for, which will allow them to select a development candidate and move into preclinical development.
We've also been excited to find opportunities to adapt these technologies to support preclinical development, expediting such molecules' entry into the clinic. I want to emphasize the deep integration we've been able to achieve between physics-based methods and machine learning methods. We have here a relatively simple example. Project teams have an enormous number of molecules they could hypothetically synthesize; we can easily enumerate billions of such candidate molecules. We can't score a billion with full free energy calculations, due to the computational cost Ramy referenced, but we can easily take 1,000, a representative sampling of that full space of 1 billion, and overnight, using cloud computing resources, run 1,000 free energy calculations.
That creates a large virtual data set that we can train a machine learning model on. It will be a very approximate machine learning model, but it can be used to re-rank the full set of one billion molecules to find the best 5,000 or so, which we then advance to free energy calculations. That gives us a rank-ordered list of some of the best molecules of the full billion, and the project team can experimentally synthesize on the order of 10 molecules. Through really very extensive retrospective and prospective profiling, we would expect about eight of those 10 molecules to materially advance the program along its intended endpoints. I wanted to give a sense of how this works in practice.
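The loop just described (sample, score with an expensive oracle, train a fast surrogate, re-rank everything, re-score the top) can be sketched in a few lines. Everything here is a toy stand-in, not Schrödinger's actual FEP+ or ML code: the "library" is integers scaled down to 100,000, `free_energy_score` replaces a real free energy calculation, and the "surrogate" is just the true score plus noise to mimic an imperfect but fast predictor.

```python
# Toy active-learning sketch: expensive scoring on a sample, cheap
# surrogate scoring on everything, expensive re-scoring of the top.

import random

rng = random.Random(0)
library = range(100_000)  # stand-in for ~1 billion enumerated ideas

def free_energy_score(mol):
    """Stand-in for one expensive free energy calculation (lower = better)."""
    return (mol * 2654435761 % 1000) / 1000.0  # arbitrary but deterministic

def train_surrogate(training_pairs):
    """Stand-in for fitting an approximate ML model to the FEP results.
    Mimicked here as the true score plus noise: fast but imperfect."""
    noise = random.Random(1)
    return lambda mol: free_energy_score(mol) + noise.gauss(0.0, 0.2)

sample = rng.sample(range(100_000), 1000)             # representative subset
scored = [(m, free_energy_score(m)) for m in sample]  # "overnight" FEP pass
surrogate = train_surrogate(scored)                   # fast approximate model
reranked = sorted(library, key=surrogate)             # cheap pass over everything
top = sorted(reranked[:5000], key=free_energy_score)  # FEP on the best 5,000
to_synthesize = top[:10]                              # ~10 go to the lab
```

The design point is the cost asymmetry: the expensive oracle is called only on the 1,000-molecule sample and the 5,000 surrogate-selected finalists, never on the full library.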
The experimental drug discovery project progression is described on the right side of the slide, and the computational cycle is described on the left. We can use these technologies at every stage of the drug discovery process to evaluate billions of ideas. This facilitates very close interaction between the computational chemists and the medicinal chemists on the project, who, through the enterprise informatics systems we're building out, have access to all the computational data and all the experimental data at the same time. That facilitates better decision-making about which molecules should advance into the experimental progression, which allows development candidates to be identified more rapidly and with better properties than would likely happen otherwise.
That's summarized here. At the top of the slide is the traditional drug discovery paradigm, where one basically has to use intuition and human judgment to guess which molecules to synthesize. One typically makes thousands of molecules over four to six years to ultimately end up at a development candidate, which may have substantial property issues; the realities of the pressure the project is under often force one to move forward with that candidate anyway. Using this type of computational analysis, we can support faster identification, with experimental synthesis and characterization of fewer molecules, typically only a few hundred, to identify development candidates with more optimal properties than would be achieved otherwise.
With that, I'm happy to turn it over to Karen and Hamish, who will present a variety of ways these technologies have been used in the context of active projects.
Thank you, Robert. You've just heard from Robert how the platform works, and Ramy described the physics-based methods. I'm going to talk to you today about how these methods and this platform have been integrated into our collaborations and our proprietary pipeline. Just to give you a little sense of the descriptions here: the collaborations are where our computational chemistry experts work with other companies on drug design, and the proprietary pipeline, which I'll describe in more detail, is where our internal team of cross-functional experts, including computational chemists, works to discover and develop candidates for the clinic. There's a long history here. As Ramy said, we've been developing the platform over a long period of time, but in addition, we've been establishing relationships for more than 15 years.
The collaborations, as I said, are really where our computational chemists work with outside companies, and there's a variety of these types of relationships, starting with Nimbus, a company that was co-founded by Schrödinger around 2010, and a series of other companies that we have been involved in founding or at the inception of their pipelines. In addition, we have collaborations with Big Pharma, who obviously use and buy our software, but who also work with us and our expert team on some of their programs. And over the last five years, much more recently, we have developed an internal team that includes, of course, computational chemists, but also medicinal chemists, biologists, pharmacologists, and toxicologists.
That group has been involved in identifying projects that would be appropriate for us to pursue, both from a platform standpoint and from a therapeutic standpoint. That portfolio of proprietary programs has been evolving over the last five years. Some of these are wholly owned, and others we've partnered with other companies; that includes the most recent transaction you're aware of, with Bristol Myers Squibb, in which five of our programs were partnered. Today, as part of the Platform Day, we announced a new relationship, this one with Lilly. This is a small molecule program; we have aligned on a target of interest, and Schrödinger will be responsible in this relationship for the discovery and optimization of small molecules.
In addition, Lilly will be participating in the program through the use of animal models to characterize those compounds, and will also be responsible for preclinical and clinical development and co-commercialization. Associated with this deal are the milestones you can see here: we're eligible to receive $425 million of discovery, development, and commercial milestones for this program, with royalties ranging from low single digits to low double digits. The combined portfolio of collaborations, proprietary programs, and partnerships has continued to evolve, and we're excited by the number of programs that have transitioned from the discovery phase into the clinic. What we're going to do today is give you just a few vignettes.
We're not going to be able to go through too many of these case studies, but what you can see from this view of the collaboration programs is that we've worked on many different target types and many different therapeutic areas with a lot of companies, and indeed, nine of those programs are already in the clinic. I've already talked about our proprietary drug discovery. As you're aware, we have a number of programs moving through discovery, and this year, we announced that our MALT1 program is the subject of an open IND. We'll be talking about that as a case study. In addition, a number of programs have been partnered already, and you can see that on the right side of this chart. I do just want to point to the Zai Lab relationship.
Our business development relationships have continued to evolve in terms of the types of structure. In the case of Zai Lab, we're eligible for milestones, but we are also able to participate in co-development and co-commercialization; we have that option at a certain point in the program. Our Lilly collaboration is now part of this portfolio. Now we're going to switch to telling you a little bit more about some of the historical programs we've worked on that are now in the clinic, and we'll also talk about some of our discovery programs as well. I'll hand it over to Hamish.
Okay. Good morning. I think we'll start the case studies off with a discussion of our novel ACC inhibitor program. This is a collaboration, as Karen mentioned, with Nimbus Therapeutics. I think we're all aware of the public health toll imposed by NAFLD and NASH; non-alcoholic fatty liver disease has a prevalence as high as 25% worldwide by some estimates. Now, acetyl-CoA carboxylase, or ACC, catalyzes the first step in the fatty acid synthesis pathway. It's also the rate-limiting step, and for these reasons, it's been a drug target of interest to address NASH and associated metabolic disorders.
Now, inhibition of ACC1 and ACC2 results in reductions in fatty acid synthesis, tissue triglycerides, body fat, and body weight, and improvements in fatty acid oxidation and insulin sensitivity. In so doing, ACC inhibition shows promise to address liver disease, type II diabetes, and dyslipidemia. Now, historically, efforts to target ACC were focused on the carboxyl transferase, or CT, domain. The binding site here is highly lipophilic, and efforts here resulted in very low yields, that is, low numbers of molecules generated, with very poor drug-like properties. The design challenge was really to expand the search of relevant chemical space to discover novel chemotypes that address these limitations. In contrast to the CT domain, the biotin carboxylase, or BC, domain had not been targeted before.
However, the availability of a crystal structure at a resolution of 2.3 angstroms with soraphen A, a natural product, bound suggested the availability of a novel binding site that could be druggable. Drugging this binding site would allow for disruption of the homodimerization of the ACC functional units and might be able to address both ACC1 and ACC2, which had been a historical challenge for this field. Using this co-crystal structure, scientists leveraged Schrödinger's WaterMap technology, which reveals the location and energetics of water molecules in the binding pocket.
What you're seeing here on the left is that soraphen A does not displace the two water molecules, the two red spheres, which suggested that there might be an opportunity to capture increased potency if we could find chemical matter that could displace these water molecules. Indeed, what you're seeing on the right-hand side is that a potent ACC inhibitor displaces these unstable water molecules and captures additional potency. In summary, in this case study, WaterMap was used to reveal a novel allosteric binding site in the BC domain. NDI-010976 was discovered as a dual ACC1/ACC2 inhibitor, and this allowed for maximal impact on the target and the reversal of lipid accumulation.
Interestingly, NDI-010976 interacts with proteins that are expressed in the liver, and this allows for a very favorable PK profile in which the compound partitions to the liver. Gilead acquired the program in the second quarter of 2016 based on phase I data that showed proof of mechanism, specifically on endpoints associated with de novo lipogenesis. The compound was renamed GS-0976, or firsocostat, and is now progressing in a phase II-B trial. At this point, I'd like to turn it over to Karen.
Moving on to another Nimbus example. The JAK/TYK family, I think everybody is aware, has been a very productive family, with important roles in psoriasis and a variety of other inflammatory diseases. In fact, there are four members of this family. They are complex in the sense that they are multi-domain. I want to point out, importantly, that the kinase domain has been the focus of many of the marketed products that have come from this family. As you're aware, there are a number of approved JAK inhibitors. However, they are associated with some safety issues involving heart function, clotting, and thrombosis. The idea behind the TYK2 program was really to design an exquisitely selective TYK2 inhibitor. TYK2, as you know, has a lot of genetic support.
There are many diseases that have been shown in humans to be associated with TYK2 mutations. Knowing what we know about the JAK inhibitors, the design challenge here was: how can you design a TYK2 inhibitor that doesn't hit the JAKs? In this case, the regulatory domain, also known as the JH2 domain, shown in this light blue color, was an opportunity to design allosteric inhibitors and avoid the kinase binding site. This movie just illustrates the challenge of the kinase domain. What you're looking at here is tofacitinib bound to overlapping structures of TYK2, JAK1, JAK2, and JAK3. What you can see is that they are incredibly similar; it's going to be very difficult to design a compound that has unique binding to just one of these.
This was the modeling challenge. As I said, rather than focusing on the kinase domain, the focus was on the allosteric site. This was a really challenging program for the reasons we've just stated. Using free energy perturbation (FEP), which you've heard Ramy and Robert describe, we were able to score many thousands of compounds to identify molecules that were more selective for TYK2. Importantly, and I think this has come through in the descriptions so far, having a co-crystal of the compound bound to the protein and understanding the conformational changes that are induced when that compound binds is absolutely critical. Ramy showed you a video of what happens when a compound binds to a protein.
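To give a sense of why accurate FEP scoring matters for ranking compounds, here is a small illustrative Python sketch (not Schrödinger code) of the standard thermodynamic relationship between a relative binding free energy and the fold-change in affinity it implies:

```python
import math

# Standard relationship between a relative binding free energy (ddG, the
# quantity FEP estimates) and the fold-change in binding affinity:
#   fold = exp(ddG / RT)
RT_KCAL = 0.001987 * 298.15  # R (kcal/mol/K) x T (K), ~0.593 kcal/mol

def ddg_to_fold(ddg_kcal_per_mol: float) -> float:
    """Affinity fold-change implied by a relative binding free energy."""
    return math.exp(ddg_kcal_per_mol / RT_KCAL)

# ~1.4 kcal/mol corresponds to roughly a 10-fold change in affinity,
# which is why roughly 1 kcal/mol FEP accuracy is enough to rank compounds.
print(round(ddg_to_fold(1.4), 1))  # ~10.6
```

The practical point is that errors much below about 1.4 kcal/mol are what allow thousands of scored ideas to be ordered reliably before anything is synthesized.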
Having this co-crystal, which was produced by colleagues at Nimbus, really opened up a new opportunity to understand the binding mode and the opportunities to optimize the chemical matter. You can see that continued scoring of compounds, once we had that co-crystal, allowed a further step up in our ability to drive selectivity between TYK2 and the JAKs. Finally, there was a breakthrough in the program. As you've heard, we can enumerate billions of compounds, and what that does for you is give you an opportunity to find very unique and diverse chemical matter. The collaborative joint team identified a new subseries that basically took advantage of some non-conserved residues between TYK2 and the JAKs. You can see that this had a profound impact on the program.
After synthesizing under 400 molecules, we were able to identify compounds with about 560-fold selectivity. Let's wrap up this very brief vignette with a look at the clinical compound, also known as NDI-034858. You can see from this that the joint team was able to identify picomolar TYK2 compounds that had the desired effects in functional assays, including IL-12 and interferon assays. As was set out as a goal for the program, these compounds have very poor, in fact almost no, affinity for the JAKs and other off-targets, which are often the cause of safety issues in programs, as Ramy mentioned. These highly potent and selective TYK2 inhibitors are being progressed by Nimbus. They have clinical data in humans showing good tolerability and good target engagement.
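For readers unfamiliar with the metric, a "fold selectivity" figure like the one above is just the ratio of potencies against the anti-target and the target. The sketch below uses hypothetical IC50 values, not measured program data:

```python
# Selectivity fold = IC50 against the anti-target (e.g., a JAK isoform)
# divided by IC50 against the target (TYK2). Lower IC50 = more potent.
def selectivity_fold(ic50_antitarget_nm: float, ic50_target_nm: float) -> float:
    return ic50_antitarget_nm / ic50_target_nm

# Illustrative numbers only: a 50 nM JAK IC50 paired with a 0.09 nM
# (90 pM) TYK2 IC50 would give roughly 556-fold selectivity.
print(round(selectivity_fold(50.0, 0.09)))  # ~556
```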
This program is now in phase II-B, as I'm sure you're all well aware, in moderate to severe psoriasis. I'll move on to another of our collaborators, Morphic. As you know, Morphic is focused on the integrins, and the company was founded by Tim Springer. We were involved very early on in this company. The focus of this particular design challenge was α4β7, an integrin that's very important in the trafficking of T cells from the circulation into the gut. This is a validated mechanism: there is an antibody called Entyvio, or vedolizumab, that is marketed and is helping patients. Thousands of patients actually take Entyvio.
The opportunity here was around the fact that this antibody is administered bi-weekly and ultimately bi-monthly as an infusion. The quest was to go after a small molecule inhibitor that could be taken easily at home. The challenge, though, was that there are some off-targets, in this case α4β1, whose interaction with a different protein would potentially have caused issues. Here you can see that the goal was to achieve potency, of course; this is a fundamental property of all drugs. Also, importantly, selectivity.
As I just pointed out, VCAM-1 is a protein bound by α4β1, and this interaction is associated with a risk of PML, a rare but devastating brain infection. The Goldilocks situation here was unique in that we were looking for potent molecules; they had to be selective to avoid this off-target, but they also had to be orally available, with very unique PK properties. As you are aware, Morphic and its founder, Tim Springer, brought some very unique science to this field, and that's a hallmark, actually, of the companies we like to collaborate with. This was outstanding work to examine the binding mode of existing molecules. There had been attempts to go after small molecule inhibitors previously.
In the case shown here, a Roche molecule, it was pretty clear from this publication and the work done by the Springer lab and Morphic Therapeutic that existing compounds were actually potent against α4β7. The structural biology explanation for why they also bind α4β1 was a breakthrough that was published in this paper. It led to a very deep understanding of how to drive selectivity. Morphic Therapeutic has produced over 100 proprietary crystal structures. Again, as I said, the conformational changes every time you bind a molecule are absolutely critical. Looking forward then with those unique insights, what was the goal on this slide? Well, you can see down in the corner a molecule called AJM. This was a benchmark.
This was an early small molecule inhibitor that would have had to be administered at a gram three times a day. It was only sort of orally available, far from the best profile, but it served as a benchmark. Starting from the yellow star, using FEP and a variety of methods, we were able to identify the potent molecules at the gold star. We're not just focused on potency, as we've described this morning; we're also focused on other, balanced properties that make for an excellent drug candidate. On the y-axis, you're looking at permeability. We've already described here that oral availability, when you're making what we call a modality switch from an antibody to a small molecule, is absolutely critical. The progression from red to green is actually how this program progressed.
Not only did we find more and more potent molecules, but we were also able to use pKa simulations and modeling to drive compound properties. On that MDCK axis, approximately 5 was the threshold; we needed to be above that line. Using our platform, with 8,500 free energy calculations to predict potency and multi-parameter optimization to balance properties, we were able to identify the candidate. That candidate is MORF-057. As you're aware, Morphic is progressing this molecule into the clinic. This is a fabulous collaboration between Schrödinger and the Morphic scientists, actually across a number of targets; this happens to be the most advanced. You can see that the compound is potent against α4β7 and selective over α4β1, in the same way, actually, that the antibody vedolizumab is.
That is unlike other compounds; I already mentioned the AJM molecule, which was not as potent and certainly not as selective. So the program is moving forward. Morphic is conducting a phase II-A trial in patients with moderate to severe colitis, and a phase II-B study is planned for later this year. With that, I'll pass over to Hamish for the third case study.
Thanks. Cool.
All right. It's a pleasure at this point to pivot to the first case study from our proprietary programs. In this case, we'll talk about cell division cycle 7, or CDC7, which is a kinase that is a key cell cycle checkpoint for DNA repair. Starting with the cartoon on the left, which shows the cell cycle: the transition from G1 to S phase is a highly regulated event. In S phase, DNA replication occurs. CDC7 has an important role in S phase and in the initiation of DNA replication, also known as origin firing. In healthy cells, CDC7 activation resolves certain stresses that occur, to repair DNA and allow the resumption of cell division.
In contrast, cancer cells have a high level of DNA lesions, high replication fork instability, and high genomic instability, all of this leading to replication catastrophe, that is, the inability to successfully replicate. Consider, for instance, the administration of PARP inhibitors, which cause trapping of the PARP DNA repair enzyme on the DNA. This causes a stalled DNA replication fork. CDC7 would typically be able to overcome this through its activity to protect and then restart those replication forks. Now, in the presence of a CDC7 inhibitor, such as our development candidate SGR-2921, what we see is that the replication fork collapses and the DNA breaks.
This leads to apoptosis and cancer cell death. The design challenge in the CDC7 inhibitor space has been characterized by compounds that really have not been very potent and that also demonstrate very poor PK and poor selectivity. If we were to go back about 20 years, the state of the art was compounds with double- or triple-digit potency, poor PK, and poor selectivity. If we fast-forward 10 years from there, there were only very modest gains: only one order of magnitude had been gained in the molecules that were discovered, and they were still showing relatively poor PK and poor selectivity.
Now, in contrast, Schrödinger's scientists took on this design challenge and were able to discover picomolar inhibitors with drug-like properties that achieved our target product profile, all in a span of about two years. The cross-functional project team leveraged modeling at scale and deployed Schrödinger's technology to drive very rapid decisions between chemistry, biology, and DMPK scientists. Now, we know that drug discovery requires a high degree of collaboration, where functional area experts are constantly in communication and sharing data. That's represented in the flowchart on the left-hand side of the slide.
Schrödinger's CDC7 inhibitor project team leveraged our global collaboration tool, LiveDesign, to share chemistry data, biology data, PK data, and other data in real time in order to drive this program forward very, very quickly and identify a development candidate in about 25 months. How did they do this? By leveraging simultaneous modeling of potency, selectivity, solubility, and permeability, we were able to achieve the target product profile. We'll start on the left-hand side, where novel binding cores with favorable drug metabolism properties were identified, and those compounds were associated with a potency that was similar to what was being described in the literature.
Schrödinger scientists leveraged the platform and identified molecules that had improved potency in the model, but were also able to simultaneously balance selectivity and solubility in order to come up with molecules with better overall properties. Continued work with the technology allowed for the discovery of even more potent molecules and, simultaneously, improvements in permeability that allowed the scientists to overcome a design challenge at that time related to the in vitro/in vivo correlations of the data being generated on those molecules. Now, importantly, these gains were made with a very small number of molecules being synthesized in the medicinal chemistry wet lab. Over the course of all the series in the project, only 226 molecules were synthesized.
From the chemical series that led to the development candidate, only 20 compounds were synthesized. All that work led to the identification of our development candidate, SGR-2921. Now, we believe this to be the most potent CDC7 inhibitor publicly described. This compound has been assayed in cell-derived xenograft models and patient-derived xenograft models and demonstrates really robust effects in AML models, sufficiently so that we were compelled to advance this molecule into IND-enabling studies in preparation to initiate a phase I trial in relapsed/refractory AML. What I'd like to finish with is this slide, which I think nicely captures how the computational platform was leveraged by the scientists to drive the program from inception to the identification of a development candidate.
I'd like to draw on an analogy of starting with a population of people greater than the world's and being able to winnow that down to a single individual. Starting on the left-hand side, the project team scored 79 billion molecules with machine learning; this is obviously a population greater than the world's. From that, the team was able to select just over 24,000 of these for scoring via physics-based methods; this is similar to honing in on a particular town. From that, only 226 molecules were made in the medicinal chemistry wet lab, similar to honing in on a particular street in that town, and then ultimately we came up with the individual we were interested in, the development candidate, in a span of about 25 months.
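The winnowing described above can be sanity-checked with simple arithmetic; this short sketch just recomputes the reduction factor at each stage of the funnel, using the stage sizes quoted in the talk:

```python
# Funnel stages as described for the CDC7 program: machine-learning
# scoring, physics-based (FEP) scoring, wet-lab synthesis, candidate.
stages = [
    ("ML-scored ideas", 79_000_000_000),
    ("physics-based scoring", 24_000),
    ("synthesized in the wet lab", 226),
    ("development candidate", 1),
]

# For each transition, report how aggressively the set was narrowed.
for (name, n), (_, n_next) in zip(stages, stages[1:]):
    print(f"{name}: {n:,} -> kept 1 in ~{n / n_next:,.0f}")
```

The first cut alone keeps roughly one molecule in every 3.3 million, which is the kind of ratio only cheap machine-learning scoring can support; the expensive physics-based step then operates on a set small enough to be tractable.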
At this point, I'd like to turn it over to Karen for the WEE1 inhibitor story.
I'll wrap up with our last two case studies. You've just heard from Hamish about the CDC7 program. As you know, several of our advanced programs are in oncology, and we're also working on WEE1. We won't go into the mechanism of WEE1 in detail; it's a target in the DNA damage repair pathway, and I think Hamish introduced you to that general space, which includes PARPs and other agents. Interestingly, WEE1 inhibition has shown important evidence of efficacy in humans in the clinic. In particular, I'll just point out that in uterine serous carcinoma and in ovarian cancer, WEE1 monotherapy has been shown to be associated with 30%-50% ORRs, and those are two cancers with high unmet medical need.
In addition to that, there's evidence that WEE1 has potential in a number of other solid tumors. The design goal here, as with the CDC7 program, was really informed by the activities in the landscape. There are several companies, as you're well aware, going after WEE1. It became pretty apparent to us from the very beginning that discovering highly selective molecules for WEE1 that also had balanced drug-like properties, and I'll go into what I mean by that, was important. In addition, avoiding drug-drug interactions was key: we believe WEE1 inhibitors, like many cancer agents, will be important as a monotherapy in some cases, but will also be used to treat patients in combination therapy. Starting with the end here, in a sense: we identified, as we had set out to do, very selective WEE1 inhibitors.
I think you're familiar with AZD1775, a compound that had reached phase II studies, and also ZN-c3, which I believe is in phase I/II studies now. These molecules are all potent, but as you can see from these kinome maps, the selectivity of these two prior molecules had much room for improvement, and that was one of our large areas of focus. The other thing we felt was important, as I just described, was the drug-drug interaction liabilities. It's pretty clear, I think, from the published data that similarities among the existing chemical matter had perhaps left this challenge unsolved with regard to CYP3A4 and perhaps other characteristics of the molecules.
I showed you the endpoint of some of our chemical series, but I do want to go back and describe what, I think, really exemplifies the power of running these programs internally at Schrödinger. After much work by the project team over the first couple of months of the project, we identified a very diverse set of compounds that were very unique; actually, they looked nothing like any of the existing WEE1 inhibitors. They were predicted to be potent inhibitors of WEE1, though. When we sent these compounds out for the kinome scans, we were pretty shocked when what came back looked to be compounds that were, in fact, less selective than the innovator molecules, let's call them that for the moment.
When the drug discovery team and the platform development team came together, it appeared to us that there were some patterns, so we used protein FEP, in which you are essentially perturbing the protein rather than the compound. On the lower right, the white is the compound, and above that, you can see side chains in the binding site changing; those changes represent the characteristics of different kinases. It suddenly became very clear that we were able to use protein FEP to characterize different kinases and how they interact with compounds. We used that retrospectively and indeed validated that we could predict the, albeit dirty, profiles of some of those early compounds.
We immediately leveraged that finding to prospectively interrogate large numbers of compounds, and you can see here 445 million compounds, not quite the billion yet, that were enumerated and scored using this approach, with FEP calculations run on a fraction of those. Forty-two compounds were synthesized, 22 of which had what we were originally aiming for: exquisitely selective profiles. Again, just to recap, these highly selective compounds were predicted based on this protein FEP in a prospective fashion. We used a panel of 20 kinases to check our work, as it were. In one cycle of synthesis, you can see that we went from these very promiscuous compounds to extremely clean compounds, and across a number of series.
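Conceptually, that prospective triage works like the sketch below: scored compounds are advanced only if they are predicted to bind the target much more tightly than every member of an off-target panel. The compound names, scores, and the 1.4 kcal/mol selectivity window here are all hypothetical, and real protein FEP is far more involved than this filter:

```python
from dataclasses import dataclass

@dataclass
class Scored:
    name: str
    target_ddg: float            # predicted binding score vs. the target (kcal/mol; lower = tighter)
    panel_ddg: dict[str, float]  # predicted scores vs. an off-target kinase panel

def is_selective(c: Scored, window_kcal: float = 1.4) -> bool:
    """Advance only compounds predicted at least `window_kcal` better vs. the
    target than vs. every off-target (~10-fold in affinity per 1.4 kcal/mol)."""
    return all(off - c.target_ddg >= window_kcal for off in c.panel_ddg.values())

compounds = [
    Scored("cpd-A", -10.0, {"kinase-1": -9.5, "kinase-2": -8.0}),  # only 0.5 kcal/mol margin: rejected
    Scored("cpd-B", -11.0, {"kinase-1": -9.0, "kinase-2": -8.5}),  # >= 2.0 kcal/mol everywhere: advanced
]
print([c.name for c in compounds if is_selective(c)])  # ['cpd-B']
```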
As I've said, in addition to potency and selectivity, there are a number of other characteristics that need to be optimized, including oral availability and permeability, as well as other characteristics. We're very happy to report that a number of our advanced molecules have not only solved this potency and selectivity challenge, they're also not CYP3A4 time-dependent inhibitors. We took these molecules forward. Another big question was: we've come up with completely unique WEE1 inhibitors, but do they actually do what all the other WEE1 inhibitors were able to do? Indeed, we recapitulated activity in a model that's been used for clinical compounds, the A427 lung model. You can see here that AZD1775 and our compound were dosed for 26 days.
When you stop dosing, you could see that, yes, AZD1775, adavosertib, was effective, but once you stop dosing, the tumors grew back. In the case of one of our molecules, STC-8123, which was published at AACR, after dosing stopped, the compound continued not just to inhibit the growth but actually to essentially cure the mice. Now, the question, of course, going forward is how these will work in the clinic. We've seen this very durable single agent activity in our preclinical models, and our goal now is to select the best-in-class compound with balanced properties; that preclinical characterization is in progress. We are in the process of selecting our DC from among the multiple candidates in this program.
Turning now to our last case study before I turn this over: the MALT1 program. MALT1 is genetically validated in a number of different immune disorders in people. Early on in the scientific exploration of MALT1, it became pretty clear that it's important in the BTK pathway, or the NF-κB signaling pathway, in B cells. You're all familiar with BTK; there are a number of agents on the market and many in development addressing the question of relapse and resistance in B-cell malignancies. MALT1 is actually downstream of BTK and has a very important role in regulating NF-κB signaling. The CBM complex is an important complex, of course, not just in B cells, but it turns out also in T cells and a variety of other cell types.
Our focus for the moment has been on B-cell malignancies, but there is an emerging and important literature about the role of MALT1 in Tregs and T cells in general, as well as NK cells. In addition to B-cell malignancies, there are potential indications in solid tumors in combination with checkpoint inhibitors, and also in autoimmune disease. I'm just going to show you quickly, this is a complex protein. It actually operates as a dimer; you can see that as sort of a mirror reflection. It has multiple different domains, and I've used this schematic here to describe the design challenge. As I said, it's a multi-domain protein. The paracaspase protease active site was actually identified quite some time ago by academics.
The first compounds that were actually shown to inhibit MALT1 activity were peptidic in nature. This is a pretty large pocket, actually. Unfortunately, none of those molecules have entered the clinic. They are not, as you can imagine, very drug-like, and they are short-lived in blood. They were used, though, to demonstrate in vivo, in animals, that MALT1 has an important role in the regulation of B cells. Fast-forwarding now to the second generation, if you want to call them that, of small molecule MALT1 inhibitors: these are mostly allosteric, acting on the Ig3 domain, and that was the focus of our program. The initial allosteric compounds have entered the clinic; the first examples were not very potent and not optimized. Now, you've seen something similar to this in previous slides.
Using our technology, we identified a number of unstable water molecules in the binding site. This is actually a very old molecule that was shown, I believe, in a phenotypic screen to bind to MALT1. This is, I guess, an antipsychotic that binds weakly to the MALT1 allosteric site. You can see very clearly that there are a number of sub-pockets here that this compound doesn't reach, and this gave us some insights into how to drive additional potency into MALT1 compounds. Fast-forwarding here to the progression timeline of the program.
We were able to identify compounds within a couple of months that, while falling a little bit short of our target product profile for potency, were sufficiently potent and also had excellent ADME properties, which allowed us to go in vivo in, I would say, a very short period of time, to establish whether these compounds, which we had shown to inhibit MALT1 in biochemical assays, had efficacy and the PK/PD relationship we were looking for. That was a rapid breakthrough in the project. We knew then that the series we were pursuing had activity. The question was, could we make the compounds more potent?
Over the course of the following 10 months, using a variety of our technologies, we were able to enter lead optimization at eight months and then identify the compound that eventually became the DC and met the TPP by 10 months, all of that across about 80 compounds synthesized. You can see here a representation of the multi-parameter optimization challenge we were pursuing. If you look at these three graphs, they are representations of the balancing of solubility versus permeability, solubility versus potency, and permeability versus potency. What's pretty clear is that the vast majority of compounds scored didn't meet all of those criteria. In fact, the compounds we were looking for were in this lower right quadrant.
After modeling around 5,000 ideas, we were able to identify 43 designs that met all criteria. Returning to this sort of Google Maps world analogy: over the course of the MALT1 program, 8.2 billion compounds were scored with machine learning, the subset scored by physics-based methods was about 12,000, and we synthesized 78 to find our development candidate. We have begun to benchmark our compounds against molecules that are out there in the literature. We do know that we have what looks to be the most potent MALT1 inhibitor, but as I have expressed throughout this presentation, and Ramy as well at the beginning, we actually wanted compounds with very well-balanced properties, including clearance, permeability, efflux, solubility, and bioavailability, characteristics that we believe will lead to a best-in-class profile.
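The "met all criteria" idea from the multi-parameter optimization above can be sketched as a simple simultaneous filter. The property names and thresholds below are hypothetical illustrations, not the program's actual criteria:

```python
# A design advances only if it simultaneously clears every criterion;
# passing two of three, as most scored ideas do, is not enough.
def meets_all_criteria(props: dict) -> bool:
    return (
        props["potency_nM"] <= 10         # hypothetical predicted potency threshold
        and props["solubility_uM"] >= 50  # hypothetical predicted aqueous solubility
        and props["perm_1e6_cm_s"] >= 5   # hypothetical predicted permeability
    )

designs = [
    {"id": 1, "potency_nM": 2,  "solubility_uM": 120, "perm_1e6_cm_s": 8},  # passes all three
    {"id": 2, "potency_nM": 1,  "solubility_uM": 10,  "perm_1e6_cm_s": 9},  # fails solubility
    {"id": 3, "potency_nM": 40, "solubility_uM": 200, "perm_1e6_cm_s": 6},  # fails potency
]
print([d["id"] for d in designs if meets_all_criteria(d)])  # [1]
```

This is why the funnel narrows so sharply: each added criterion multiplies down the fraction of designs that survive, which is exactly the "lower right quadrant" behavior shown on the slide.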
I do want to point out that the site we were targeting meant the compounds had lower solubility than some other compounds. We had a very interesting collaboration with our materials science group that helped us improve that solubility through formulation. What you can see here is in vivo activity in an ABC-DLBCL model, where we're comparing the activity of SGR-1505 to ibrutinib. You can see that ibrutinib clearly works in this model, and the MALT1 inhibitor shows signs of driving regression. When you combine these two agents, you see really exciting activity. There clearly is single agent activity with MALT1 inhibitors, and recent disclosures from the J&J program do indicate that there are exciting and interesting responses in CLL and NHL in combination with BTK inhibition.
Our IND is open, as we've previously announced, and we are initiating our phase I dose escalation in relapsed/refractory B-cell malignancies this quarter. With that, I just want to wrap up by saying that we've given you just a few examples, vignettes of the types of programs we've worked on: best-in-class programs, first-in-class programs, and programs where we're going after modality switches, all of which require selectivity and the optimization of balanced properties. I also want to zoom out for a minute and look at the history of our combined portfolio. You can see here, in light blue, that we've completed discovery on a large number of programs over the years. We have a large number of ongoing collaborations, which we've already described; we gave you a couple of examples from Nimbus and Morphic.
Over the last five years, we've increased the number of ongoing proprietary programs, some of which are partnered and some of which are wholly owned. We're very focused on improving the productivity of the drug discovery phase. What we've looked at here are the metrics of our programs over the last 10 years, compared to industry standards; there's a paper from Lilly scientists that I think everyone references. What you can see is that this powerful platform, which we anticipated would impact cumulative portfolio success, is turning out to have a pretty profound effect on the success rate of programs moving through stages, including through IND-enabling studies and into phase I.
Now, of course, our data set is smaller than the industry's, but we think this is a very promising trend, and we're excited to continue monitoring it as we work on more programs and expand the portfolio. Reflecting that, there are a number of programs in the clinic; I already told you about that. Not only have they moved into phase I, several have now moved into phase II. In addition, our wholly owned portfolio, where we've ideated around these projects, continues to progress, moving from the early stages to lead optimization and then into IND-enabling studies, with our first phase I planned this year. With that, I'll turn the agenda back over to Robert.
Yeah. Thanks, Karen. Really exciting to see the platform being applied in all these different ways. We're also very excited about how it can be further improved to provide even more value. One of the biggest areas of investment is actually to increase the number and types of targets our platform can be used to progress. There are two major areas of that investment. The first is to enable structure-based drug discovery for nearly all targets by combining physics-based simulation methods with some of the recently reported de novo structure prediction methods, together with basic structural biology investments.
The other, related area is to further improve our hit identification technologies, and we now see avenues to expand the application of those hit ID technologies to historically difficult targets that have yet to be drugged, where small-molecule binding is really unprecedented. The second major area of investment is to further improve the effectiveness and efficiency with which our platform can progress drug discovery programs. There are three major avenues by which we're aiming to achieve those improvements. The first is to continue to develop the physics-based methods of the future, and that's related to investments we're making to more accurately model metals, where the quantum mechanics can get pretty tricky due to the large number of electrons.
That includes explicit modeling of more complicated quantum effects, things like pi-cation interactions and pi-pi interactions, as well as further development of hybrid methods that build on the integration of machine learning and physics-based methods we described earlier in the presentation. The second major avenue of investment is to support more comprehensive modeling for ADMET optimization, which includes more explicit and more accurate modeling of things such as rate of clearance, time-dependent inhibition, and hERG binding. The third major avenue is further improving our de novo design technologies. Currently we can support evaluation of on the order of 100 billion potential idea molecules, but this is still only scratching a small portion of the total idea space available to project teams.
Chemical space, the number of molecules a project team could consider synthesizing, really is vast, and we're going to further invest in these technologies to more readily identify the best possible molecules for those teams to consider for synthesis and experimental characterization. I wanted to give a concrete example of where we're seeing these investments start to produce potentially very high value future applications. Back in 2019, we published an exploratory use of protein residue mutation free energy calculations, a technology very similar to the one Karen described being used to advance the WEE1 project. We prospectively predicted that the L528W mutation might be problematic in the clinic for BTK.
That was an exploratory use of the technology, and it was very interesting to find, earlier this year, that we now have clinical characterization of that resistance mutation as being one of the more frequent resistance mutations for that system. With this prospective success in hand, we're now taking a step back and asking, for all of our projects where clinical resistance mutations might be relevant in the future, can we adapt this technology to try to produce molecules that might be more robust to the emergence of resistance? This could be a very exciting area for us in the future. The last major area of investment is to expand the applicability of our technology platform to new high-value areas. Three are highlighted here.
The first is expanding into preclinical development and formulation by investing in technologies that can help with solubility optimization, excipient selection, and optimization of things such as process chemistry that could help with small-molecule scale-up. We're also very interested in new modalities, including biopharmaceuticals, protein degraders, and molecular glues; very interesting technology development is happening in all three of those areas. Lastly, we're of course continuing our big push into materials, with a heavy focus on energy, chemical reactivity, and polymers. Again, this emphasizes the ways in which the technologies we're developing can be used in a number of different avenues. Back in 2017, we published an avenue for using protein residue mutation to improve antibody design. A more recent capability that has started to emerge is constant-pH simulation.
We now think that by combining protein residue mutation with constant-pH simulation, we can start to facilitate the design and optimization of pH-sensing recycling antibodies, which could have all sorts of positive applications, for example reducing dose or increasing the effectiveness of antibody-based drug therapies. Very excited about the future. With that, we'll turn it over to Geoff Porges to discuss business outlook and opportunities.
Thank you, Robert. It's an honor to be here with the Schrödinger team today. It's 50 days today since I joined the company, and guess what? That doesn't qualify me as a chemist, so I won't be talking about chemistry. I would like to share with you how we see the business from inside the company, which I think is quite different from how the investment community sees it. I'm going to focus on how we create value from this remarkable platform that Ramy, Robert, Hamish, and Karen have described today. As Ramy started by explaining, we have three different ways of creating value: first, through software licensing; second, through collaborations; and third, by investing in our proprietary pipeline.
These are quite different in terms of the contractual relationships we ultimately have with our customers. Software licensing is really an arm's-length relationship: customers license the technology, then deploy it as they see fit. We don't decide how, where, or on what they use the technology, and we have a limited ongoing relationship with those customers. We do provide the services you would expect, but we don't participate in any of the value created by the deployment of the technology through these sorts of contracts. It does provide us with a lot of validation of our technology, and we get a lot of feedback from those customers, but again, we don't see any long-term upside in terms of share of product. Collaborations are a much closer relationship with our customers.
This is typically where someone in the industry, usually a big pharma or biotech company, comes to us with a particularly challenging target or a difficult drug design problem and asks us to deploy our computational chemists, our internal scientists, against that problem to help them fulfill their project objectives. In return, we get paid for providing that service, but we also get downstream participation in that value: the opportunity for milestones and future royalties, though the IP is fundamentally owned by the partner. Lastly, of course, we have our proprietary pipeline. These projects are conceived and driven by Schrödinger: we own the IP, we start the project, and we complete all of the internal computation, discovery, and development. It's an at-risk investment by us in building our own portfolio.
Now, of course, we can ultimately out-license these assets as hits, leads, development candidates, or even clinical programs, or in the future we can potentially retain some or all commercial rights. Obviously, we'll make those decisions on a case-by-case basis. Again, we expect to receive upfront payments; discovery, development, and commercial milestones; and ultimately even commercial participation. Now, where we are today: as you've heard, we have roughly 1,600 customers worldwide, which is pretty much the entire universe of life sciences customers. We have 12 active collaboration projects right now, and we have 18 active proprietary programs, including those that are already partnered and those that we are retaining and advancing on our own account. Now let me give you a sense of the business characteristics of these ways of creating value.
Software licensing is really a here-and-now opportunity for us. The revenue is already substantial, it's growing nicely, it's operationally profitable, and it provides that validation I mentioned. It's a very extensive customer universe, and we expect continued growth via increased adoption. Collaborations are a near- and medium-term opportunity. The work is funded by the partner, and we see inflecting revenue growth from collaborations and proprietary programs. Those collaborations develop and enhance the platform. It's a limited and curated list; we're pretty careful about the number of collaborations we enter. Obviously, we have obligations under those collaborations that we need to fulfill, and we can't infinitely scale up our organization, so we take these very seriously. Over time, they cycle out: we complete our obligations to the collaboration customer, and then we have room to add new customers.
We see accelerating growth over time in the number of collaborations, in the number of targets within collaborations, and frankly in the value created, as the programs advance and the milestones increase. We expect there to be more value there. The proprietary programs are clearly a medium- to long-term investment. These programs are in investment mode, and we expect to start to harvest them in the next couple of years. Working on these proprietary programs ourselves enables us to leverage but also enhance the platform. This is very targeted and carefully thought out; we put a lot of thought into which programs we actually invest our resources in, and we do intend, of course, to advance them to monetization, whether that is early development, in the clinic, or even commercially.
Let me just share with you some of the metrics for these businesses, now grouped in terms of how we report revenue. Our software licensing revenue for the first half of the year was $63 million, 25% above the comparable period in 2021. On total active customers, I mentioned 1,600, but 70% of the revenue comes from the top 100 customers, which of course, as you would expect, is where all the big pharma companies are represented. Our customer retention rate is extraordinarily high, and having looked at them, there are really no differences in these numbers. Once people start using our software, it's highly valued, and we see very little attrition.
In collaborations and proprietary programs, we reported revenue for the first half of the year of $24 million, more than double the revenue for the first half of 2021. You can see that the number of active proprietary programs is growing steadily; Karen's already alluded to this number, and including both partnered and unpartnered programs, we're now at 18. What's particularly striking is the total available milestone opportunity: if you assume all of the already-partnered programs are successful, it's about $5 billion, and the number of programs now in development that are eligible for royalties is 15. Clearly, there's been a massive increase in the number of programs that are both milestone- and royalty-eligible since 2018, and we expect that to continue increasing steadily.
Just to summarize, I think all of the metrics that we track for our three sources of value are progressing very strongly. We are measured in how we deploy our capital, so we're not going to massively expand the number of collaborations or proprietary programs, but we do think we're creating an enormous amount of value for shareholders that we can expect to harvest over the next few years. I think now I'll turn it over to Ramy.
I've got just one slide to summarize the day. I hope you'll agree that we showed you that we are the leader in molecular design software and have a true track record of scientific innovation, and, as Robert showed, a clear path to continuing to be the leader and to keep innovating in this field. I hope we also did what I said at the beginning you should be demanding from us: demonstrate that the platform really works. I think we've shown some really nice cases of success, and the chart showing the pretty dramatic improvement in success probability should be something we're all pretty excited about.
That's obviously a really important feature of what the platform is accomplishing. You also saw that we have a strong financial position and a balanced model. What we mean by that, of course, is that the balanced model Geoff showed, the software licensing, the collaborations, and the proprietary programs, is really providing many opportunities for value creation in the future. You saw in the data that we have steady revenue growth from the software business. You heard about the announcement today of the new partnership with Lilly that's adding to our growing pipeline of programs; Geoff gave you some nice details on that.
You saw, I think, pretty clear evidence to suggest that the revenue from the collaborations and proprietary programs is inflecting. I hope you'll agree that we accomplished the goal of talking about the platform and its validation. I really just want to close by acknowledging the extraordinary work of our employees, who have made everything we showed you today possible. With that, I think we'll open it up to Q&A, right? We're going to start with questions from people who are here, and then I'll remember to let Jaren bring in people from the webcast as well.
I'm also supposed to remind everybody: because people on the webcast won't be able to hear you, please wait until you're handed a microphone and ask your question into it so everybody can hear. Okay. Oh, and everybody's here to help answer the questions. Okay? I think I already saw somebody, right? Yeah.
Thank you very much for the presentation. When you look at the software offerings and how they are utilized across the licensing side, the collaboration side, and the proprietary part of the business, how does the software capability differ from business to business? And when someone like Lilly, who has previously been on the licensing side, enters into this partnership, how does that relationship get managed?
Yeah, that's a really great question. There are three aspects to the differences, if you will, between what our software customers have access to and what we have access to in our proprietary programs and, of course, in our partnerships. One is the scale. We've talked a lot about this, and it's actually something we can measure: the scale at which we are able to deploy our platform is on the order of 10 to 100 times higher than what even our largest customers can deploy. What does that mean? It means those maps we were showing, where we can analyze hundreds of billions of molecules and score tens of thousands of molecules.
That's something our customers are just starting to be able to do, and you can see from what I showed at the very beginning that it's extremely important to be able to explore enough of chemical space to solve that multi-parameter optimization problem. The other aspect, and this won't surprise you, I don't think, is that it can take a year, two years, even longer, to productize new solutions. You saw from what Robert showed that we're constantly advancing the platform, and that's technology our group has access to, of course, before our customers do. The final thing, and this is something we're trying to address, but it is still the case, is that the expertise we have internally is very difficult to replicate at a customer site.
We have, side by side with the people who are actually deploying the technology, the entire development team that built it. That obviously presents a real opportunity to have expertise that would be difficult to replicate, and it also means we can solve problems in real time. Issues that come up during a program can be solved in real time, and that's actually what leads to the advances Robert was showing. And again, going back to the second point, that doesn't get put into the product right away. Anything else, Robert, you think?
No.
Yeah. Does that—
It's perfectly captured.
Yeah.
I think you asked how these partnerships or relationships are managed. We obviously have a very long history of creating joint collaborative teams, alongside the scientific collaboration itself. As you see, we have very many collaborations, so we have an alliance management team, and we typically have a joint steering or joint research committee with our partners to make important strategic decisions about the progress of the programs, if that's what you were asking.
Yeah. Did we answer your question? Now I'm worried that I didn't answer the question you were asking.
Just to expand on that.
Sorry. Close to you.
Oh, yeah. Get the microphone. Yeah.
Just to extend off that: for the non-partnered parts of Lilly's business with you, where they're still doing licensing, what are their capabilities on those projects versus the capabilities within the partnership?
Well, that's a great question, and I think there are going to be two sorts of benefits to the partnership. One is obvious: we're going to be developing medicines that actually help patients. The other, and this is definitely one of the goals of the project, is that you can imagine this kind of collaboration or partnership is an excellent way to transfer the know-how to Lilly. That's one of the goals. They have access to the software at a pretty decent scale, and they're going to have a front-row seat, right, to our deployment of the technology at this huge scale and with all this expertise we just talked about. That knowledge transfer will be a part of the partnership.
It isn't there now, but we expect that by the end of the collaboration, or through the collaboration, we'll make a lot of progress toward getting them to the point where they can do, on all of their programs, what we're doing on the partnered programs. Sorry, you have to wait for the microphone.
Hi, thanks. Gary Nachman at BMO. Thank you for the great presentation. First, you highlighted the balance of potency, solubility, and permeability, which are some of the difficult components. What are the more challenging components going forward from here that maybe you didn't completely touch on, as we think of progress further through clinical development and into potential manufacturing scale-up? What are you thinking there? Then, on the new partnership with Lilly for small molecule development: is that indicative of future partnerships you're thinking of from here, and what's your actual capacity for additional collaborations?
Yeah. Let me let Robert answer.
Yeah. We're generally investing to predict all properties relevant to drug discovery, and some are clearly harder than others. Rate of clearance, for example, is an area where we're making big investments; having more predictive, more accurate methods to model rate of clearance will facilitate more rapid identification of the molecules that are best to advance. Likewise, you mentioned preclinical development; we're also working to develop technologies that can support and assist with the various challenges that come up there. We're broadly investing to support all of those activities. If you want me to pick a single property that I think is particularly tough, I might lead with rate of clearance, but that doesn't mean we can't make progress there.
It just means we have to invest to make the progress there we would like to see.
Karen, do you wanna address this?
Yes. So you also asked about the Lilly collaboration, whether it's indicative of future collaborations, and what our capacity is. We described today that we're running about 30 programs between the collaborations and the proprietary and partnered programs, and that number has actually been pretty steady. It's been growing since the very beginning, obviously, but over the last few years, 30 programs has been about the capacity for the discovery team. Because these programs last about two or three years, as programs complete, we have the opportunity to initiate wholly owned programs or to partner with external companies. I will say that our business relationships have been evolving, both in the style in which we collaborate with people and in how much we take on.
We want to optimize that and take a balanced approach to partnered programs and our wholly owned programs, and the deal terms continue to increase. There's been a steady trend toward improving terms, and we expect and plan for that to continue. I don't know if that answers your question completely.
Yes. That's good.
Sorry. Yeah, maybe you should hold on to it.
No, I'm sorry. Also, specifically on small molecule development versus other types of development programs, whether with antibodies or biologic-type programs?
Right.
Well, I will say that in our BMS collaboration, we did add a degrader component. We have a new modalities team; Robert's team is working on new modalities, including biologics. I would say that you can expect to see more small molecule programs, collaborations, and partnerships, but I think over time you will increasingly see additional modalities in that portfolio as well.
Yeah.
Thank you, and thank you for the presentation, really nice. This is Yi Chi, here on behalf of Michael Yee from Jefferies. Two questions. First, how are you thinking about your strategy when picking targets to go after yourself versus with a pharma partner? If you could talk about that a little bit more. Also, help us benchmark the biologics program: where is it relative to what you have with small molecules, and how do you see it moving forward? Thank you.
I'll start by talking about our program selection strategy. As you can see from the vignettes and case studies we shared, human validation is a really important piece. We are excited about first-in-class opportunities, but our focus is on the design challenge: getting to the best molecule for a target that has been de-risked from a biological point of view. Now, that human evidence comes in a variety of flavors. You heard about programs in the clinic where there is a biologic treating patients and we are seeking to make a small molecule version, a modality switch. You also heard about genetically validated targets like TYK2 and MALT1, where the human evidence comes from genomics and other approaches.
I'd say that's one of the most important factors, in addition to having a very clear line of sight to a differentiated target product profile. That obviously underpins how we deploy our platform.
I think—
Yeah. On biologics, we're investing quite a bit, and we're very excited to see how the technologies are improving. In the future, we believe we'll be able to support biopharmaceutical discovery in very tight analogy to how we support small molecule drug discovery. So we're excited about that area, hard at work, and seeing that hard work pay off in various ways. Yeah.
Hi, thanks. Michael Ryskin from Bank of America. It looks like the royalty structure of your newly announced deal with Lilly varies slightly from that of your deal with Bristol. Can you talk to the factors at play here, and how you're thinking more broadly about balancing near-term milestone recognition versus the back-end economics? Thanks.
Yeah. Geoff, do you wanna—
Sure.
You're right. There's a slight difference, but we think that the range is very similar to the Bristol collaboration.
Right.
I think it's also worth noting that the extent of our engagement in the Bristol collaboration is probably going to be significantly greater per program than in the Lilly collaboration, where we've got a very specifically defined set of responsibilities. Then, on the question about balancing milestone payments and royalties: we do think there's a lot of value in the royalty component, and that tends to be overlooked by many companies in the early days. If you have a high-single-digit average royalty rate, we think that's a really valuable asset, especially as we build a portfolio.
As I said, we've got 15 royalty-bearing programs, and as Karen alluded to, as this technology becomes more and more validated, the royalty opportunity is going up progressively with each collaboration. Each time we do another collaboration or partnership, we think we set new benchmarks. Maybe just to add: the benchmarking for these deals obviously also takes into account the therapeutic area for the target, right? It's not just random; it's informed by comps and metrics.
Okay. Sorry. Tom first.
Hi. Joe Kim, Piper Sandler. Thank you for the presentation. Ramy, you talked about how your usage of the software is 10 to 100 times higher than your customers'. Could you talk about how efficiently your customers are using the software now? It appears very broad, with a lot of offerings. How much training is involved, and will they ever get to the level you're able to achieve internally?
Yeah. We're making progress, definitely making progress toward that. A few years ago, there was a huge knowledge gap, and a lot of training had to be done, so we've invested in that: everything from online courses designed to train medicinal chemists and modelers on how to use computation, to a significant number of workshops, something on the order of 50 a year, designed around training. So we've made progress, and our customers are definitely becoming more and more sophisticated in deploying this technology at scale.
Will they ever get to the point where they can do what we're doing? That's our goal, absolutely. It will take some time; as I said, it's kind of hard to replicate what we have internally, with the software developers who built the technology sitting right next to the people using it. But it's absolutely our goal, through training and also, by the way, more automation in the software. This always happens when you develop new technologies: at the beginning, only the experts can use it, then you start investing in the graphical interface and developing ways to automate it, and we're investing in that. I think through automation we'll also make pretty significant progress in that direction. Yes.
A second question for Karen. I just wanted to get your thoughts on the role of your CDC7 inhibitor in cancer. You talked about combinations with PARP and BCL2 inhibitors, which seem like two different directions. Is there a place where the monotherapy potency would be beneficial in a particular tumor type?
Sure, and I'll obviously invite Hamish to add comments here as well. From our assessment of CDC7, we believe there is a monotherapy opportunity. We have studied it in AML models and have seen very favorable responses in terms of antitumor activity in those PDX models. However, we've also looked at a range of DDR and replication stress targeted mechanisms where we see combination activity and potentially synergy. Now, the question is really about overlapping tox. The PARP inhibitors, for example, are a great class to compare this to: there's a range of tox profiles across the PARP family, and combinations with PARP inhibitors have a range of overlapping tox profiles.
We believe that's a space for future investigation in the clinic: which combinations have the most compelling activity, and how does one focus on those with less overlapping tox?
Do you want to add anything to that?
No, I mean, I think what we've seen is combination activity with a number of different agents that could be applied across a number of different disease areas, solid tumors as well as liquid tumors. As Karen mentioned, what we've seen in our AML models in particular is very, very compelling, so that guides us for the moment.
Thanks.
Thank you guys for taking my question. This is Stephanie Yasko from [XPB]. I have a high-level question that might be particularly relevant for the health tech folks on the line. When you look at your software licensing revenue, is your ultimate goal to have more folks go the way of Lilly and move over into partnerships, in which case I should hand this over to my biotech colleagues? Or is this kind of a one-off, and the two will grow in tandem?
Yeah, I'm not sure I'd characterize it as a one-off, but what I would say is that we don't believe it's practical, or really the goal, to transition every one of our pharma customers to a collaboration where we're involved in every one of their programs. That's not possible. I'll go back to what I said before, and thank you for the question: we are absolutely committed to facilitating that transformation of our customers' ability to deploy the technology at scale. You can see how enormous the opportunity is.
If we're using the software at a scale that's 10 to 100 times that of our largest customers, not the average customer but our largest customers, and to the extent that throughput is correlated with licensing cost, you can see the opportunity. We're committed to it. We're investing in the education, the workshops, the courses, and in these collaborations that can transfer that know-how. You can imagine how much more effective that can be when we're really working together and all of a company's chemists and modelers literally have a front-row seat to watching the software deployed at this scale. I hope that answers it, yeah.
That's super helpful. I have a follow-up then. You have this target of $100 million for your drug discovery revenues very soon, and I imagine this Lilly move probably changes that pretty rapidly. That puts it in line with the scale of your software revenues. Is there any way to separate those two, so we can have two different points of coverage?
I hope we emphasized this during the presentation, but maybe we didn't, so let me try and emphasize it here. We see really significant synergies between these businesses. We've sort of already touched on that, right? You can imagine how much feedback we get from these thousands of customers, tens of thousands of users. All of that feedback is fed back into the software. It improves the technology, which then we use in the collaborations. You've already seen very clearly how critical, and I think this is unique at Schrödinger, and I think when you look at other companies that are claiming to have platforms, I think you'll see a difference.
The validation that comes from using the software on proprietary programs and the collaborations, I think provides the kind of validation that's necessary to convince pharma companies, which are pretty big companies that aren't nimble, right? I'm not sure how you characterize them. We think having these two units in the same business is really critical. I think it's the reason why we are where we are. The technology's working as well as it does. The software is growing. I think it would be very difficult. I hope I just convinced you that it would be difficult to separate them, right?
Okay. Thank you.
Although I know it would be easier for you and the coverage and so on, I think we have to stick with this complexity to be able to leverage the synergies. Yeah. Jeff.
Stephanie, it'll.
Yeah.
If I may add one more. It also gives us a unique financial profile.
Yeah.
I meant to say this, but the fact that we have the revenue from the software business means that we don't have to go out and constantly be raising money to fund our drug development activities, but we still have the upside, as we all know, the massive asymmetrical upside of those drugs coming through and being successful. I think we've laid out for you that their odds of being successful, at least so far, look better than industry averages. That's sort of the opportunity, but it's great that we don't, up markets, down markets, whatever, have to be constantly raising money.
Yeah. Maybe just one other point of clarification, since you brought up the Lilly collaboration and its impact on our revenue for next year. As you can imagine, we're constantly, 'cause we have relationships with so many pharmas, in discussion about potential to partner or collaborate. We've been talking, obviously, to Lilly for quite a while, so I'm not sure that we should necessarily think that the Lilly announcement today is gonna have a profound impact on the $100 million next year.
Yeah.
I did just wanna capture that.
Yeah. Thanks.
We don't have a lot more time, and I wanna make sure we have time. Joe, Karen, did you wanna?
Yeah. Thanks, Ramy.
Yeah.
Let me ask you a couple of questions that came in on the webcast. The first one is from Matt Hewitt at Craig-Hallum, and it goes back to Lilly. The magnitude of the potential royalty stream with Lilly was a bit higher than what we typically see with large pharma-biotech deals, which typically pay more upfront with smaller downstream payment potential. Is this a function of the increasing value proposition of the software, as you suggested, Ramy, or are there other factors in this collaboration that contributed to the royalty rate? I think you can.
I mean, I think there was an earlier question that was sort of similar. Every target, every indication, every therapeutic area is associated with a set of comps, right, for the royalty rate that pharmas tend to agree to. I do think, and we've shown you the evolving nature of our business development relationships, that these more recent deals represent very competitive negotiation on our part, and we continue to raise the bar on the types of economics for deals that we're working on. We didn't disclose the upfront. There is an upfront, obviously. We do focus, as was pointed out by Jeff, on the long-term value that we're creating with these assets and these programs.
The royalty rate is very important to us because we will be participating not just in the near-term discovery and clinical milestones, but in the eventual success of that program.
Thank you. I'll just remind the webcast, as we're about to run out of time, please email ir@schrödinger.com with anything that you would like to follow up on. The final question I'll ask this morning from the web is very specific to the machine learning landscape and perhaps a question for Robert and Ronnie. If you could comment on OpenEye, you know, they were recently acquired by Cadence, and how do you view them within the landscape, and do you view them as a competitor to Schrödinger?
You wanna take it?
Maybe you, Ramy.
Yeah. You know, we generally don't comment on other companies in the space, but given the acquisition, it's probably fine; we can comment. With regard to machine learning technologies, this is a little secret that we'll tell you: everybody has access to the same underlying technology. It's all open source, and we have access to it. We actually worked with one of the people who took the TensorFlow technology from Google and really made it chemistry-aware, Vijay Pande, who's on our SAB. But again, we're not claiming to have developed machine learning technologies that are necessarily so different from what everybody else has. Everybody has access to essentially the same technology.
The only difference really in machine learning is what we talked about at the beginning, which is the training set. That's really what differentiates one machine learning model from another. And I think we showed very clearly that if you rely solely on experimental data to train a machine learning model, that's very, very limited. And so, obviously, the advantage that we have is we can generate massive training sets using these physics-based methods. And that's really differentiating. So we think that's probably the answer to the question. Do you wanna add anything to that, Robert?
No, I think that's perfectly captured.
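[Editor's aside: the training-set point above can be sketched in a few lines of Python. Everything here is an illustrative assumption, not Schrödinger's actual methods: a cheap toy "physics" function stands in for an expensive physics-based calculation, and a nearest-neighbour lookup stands in for a trained ML model. The point is only that a physics model can generate a far denser training set than sparse experimental data, so the surrogate trained on it predicts unseen points much more accurately.]

```python
def physics_energy(r):
    """Toy stand-in for an expensive physics-based calculation
    (a Lennard-Jones-style pair energy; purely illustrative)."""
    return 4.0 * ((1.0 / r) ** 12 - (1.0 / r) ** 6)

def build_training_set(n_points, r_min=0.9, r_max=3.0):
    """Generate a training set by evaluating the physics model on a
    grid -- the step that sparse experimental data cannot scale to."""
    step = (r_max - r_min) / (n_points - 1)
    return [(r_min + i * step, physics_energy(r_min + i * step))
            for i in range(n_points)]

def predict(training_set, r):
    """1-nearest-neighbour surrogate 'model' trained on the set."""
    return min(training_set, key=lambda pair: abs(pair[0] - r))[1]

# Dense physics-generated set vs. a sparse "experimental-sized" set.
dense = build_training_set(1000)
sparse = build_training_set(5)

r_test = 1.17  # a point not in either training set
err_dense = abs(predict(dense, r_test) - physics_energy(r_test))
err_sparse = abs(predict(sparse, r_test) - physics_energy(r_test))
```

With the dense, physics-generated set the surrogate's error at the held-out point is orders of magnitude smaller than with the five-point set, which is the transcript's claim in miniature: the differentiator is the training data, not the learning algorithm.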
Yeah. Good. Right exactly on time. Oh, we did have one more question.
So-
Do you want to go ahead?
Yeah. Is that okay or no?
I think at this point we'll conclude.
Okay.
The formal webcast.
You can.
I want to thank everyone for joining in today. We really appreciate you taking the time to hear more about the company and the work that we're doing. We'll sign off. Again, ir@schrödinger.com with any follow-up. Thank you.
Great. Thanks a lot.
Yeah. Thanks.