Discussion on the design and rationale of the VIVIAD phase II-B study of our lead program, varoglutamstat in Alzheimer's disease. We are very pleased to be joined by our moderator of today's event, Dr. Philip Scheltens, in addition to guest speakers, Dr. Stephan Schilling, Dr. John Harrison, Dr. Sietske Sikkes, and Dr. Willem de Haan. Following the moderated discussion and brief presentations, we will host a Q&A session. As a reminder, you are welcome to submit questions in advance via the webcast portal. You can find the respective tab in the upper right corner of the webcast player. Next slide, please. Before we start, I would like to remind you that during this conference call, we will present and discuss certain forward-looking statements concerning the development of Vivoryon's core technologies, the progress of its current research and development programs, and the initiation of additional programs.
Should actual results differ from the company's assumptions, ensuing actions may be different from those anticipated. You are therefore cautioned not to place undue reliance on such forward-looking statements, which speak only as of the date hereof. I'm incredibly pleased to be joined by such esteemed colleagues today to discuss varoglutamstat's development, from its scientific roots up to the advanced clinical development stage it is at now. We will focus largely on the VIVIAD study, and I will now introduce Professor Philip Scheltens, who has agreed to moderate today. Professor Scheltens has been very important for the development of varoglutamstat, as he was the PI of previous studies. So can I hand over to you, Philip, please?
Thank you very much, Frank. It's a real pleasure and honor to be here today. These are exciting times in the field of Alzheimer's: we are in between the readout of the donanemab study, which you all know of and which has the same target as the Vivoryon drug, and the readout of the phase II-B VIVIAD study that we will discuss today, expected early next year. It's a very, very exciting time. I think this virtual event is rightly placed at this moment, and I'm really very pleased that we have such a large array of very distinguished speakers who will go into the details of the design of the study, the reasons why we chose the endpoints that we did, and also how they matter and how important they are.
So first of all, of course, it starts with basic science. It starts with the target that we are discussing today, and no one can present this better than Professor Stephan Schilling. He is a professor of drug biochemistry at Anhalt University of Applied Sciences and the head of the Department for Drug Design and Target Validation at the Fraunhofer Institute for Cell Therapy. I'm very, very pleased that he joins us today. Professor Schilling, the floor is yours.
Thank you for this kind introduction, and first of all, thanks for inviting me to this really nice session. I'm going to talk about the role of pyroglutamate-modified Aβ, or N3pE-Aβ for short, in Alzheimer's disease. In the next 12 minutes, I'm going to review most of the key studies that sum up to a final picture of the role of pGlu-Aβ, pyroglutamate-modified Aβ, and glutaminyl cyclase, the target of varoglutamstat, in Alzheimer's disease. Next, please. So certainly we are all aware of the frequently cited pathological hallmarks of Alzheimer's disease: Aβ deposition and plaques, tau tangles, and finally, widespread inflammation and cholinergic degeneration.
However often these are cited, it is generally accepted that this is a more or less incomplete picture of what is going on in Alzheimer's pathology, especially at the molecular level. We now know that, in particular, oligomeric forms of Aβ are triggers of neurotoxicity, and also that the molecular species underlying Alzheimer's pathology are extremely heterogeneous. About 80% of Aβ is N-terminally truncated and modified. Next, please. One of these modified species, the one we are talking about and a major target of therapy, is N3pE-Aβ. The molecule is generated by an enzyme called glutaminyl cyclase, and this is shown here in the scheme from Thomas Bayer.
So there is an N-terminal truncation of Aβ, and at position three there is a glutamic acid residue that is converted into pyroglutamic acid, as shown in the scheme on the left-hand side. This modification renders Aβ very stable against degradation, and it changes the characteristics of the molecule. It increases the hydrophobicity, and this is certainly a driver of toxicity and oligomer formation. Very importantly, it has been shown by us and others, and this is shown on the right-hand side, that pyroglutamate-Aβ progressively accumulates as Alzheimer's progresses and is underrepresented in normal aging. Also, glutaminyl cyclase, the enzyme responsible for pyroglutamate formation, is upregulated in Alzheimer's disease compared to normal aging.
This is what is going on in a nutshell, and I am now going to share with you the key studies that sum up to a bigger picture of the role of glutaminyl cyclase (QC) and pGlu-Aβ in Alzheimer's. One of the seminal studies showing that there is really a decisive role for pyroglutamate-modified Aβ, N3pE, in Alzheimer's was done by Dietmar Thal and coworkers. They studied non-AD cases and cases of pathological pre-AD, showing Alzheimer's-like pathology but no symptoms, and they also compared these to symptomatic AD.
What they found, and this is shown in the middle of the scheme, is that pGlu-Aβ N3pE emerges exactly at late stage 3 of pathological pre-AD, indicating that this might be a decisive factor for the development from pathological aging to symptomatic Alzheimer's disease. Interestingly, and this is shown on the right-hand side, this was backed by a study of Dietmar Thal from Amsterdam investigating glutaminyl cyclase (QC) expression, also in pre-AD and symptomatic AD. It showed that glutaminyl cyclase comes up at exactly a similar pathological stage, clearly suggesting that in human AD, glutaminyl cyclase and N3pE are co-regulated and appear at a decisive turning point in Alzheimer's pathology. Next slide, please.
We did that work together with the University of Leipzig, at the Paul Flechsig Institute of Brain Research, and we continued it with a correlational analysis of glutaminyl cyclase (QC) RNA, shown in the middle in panel A, and pGlu-Aβ. We clearly see a significant correlation of glutaminyl cyclase expression with pGlu-Aβ emergence and concentration. On top of that, we also observed in that study that the pGlu-Aβ, or N3pE-Aβ, concentration negatively correlates with cognition, suggesting that the more N3pE is present, the lower the MMSE score of the patients, further corroborating the findings of Thal and coworkers. Next slide, please. So we might, of course, ask now: why and how does N3pE-Aβ trigger such toxicity?
There are numerous studies out there investigating the physical properties of pGlu-Aβ. It is not possible to summarize all of them, but in a nutshell, if we look at all these studies, it clearly turns out that due to the N-terminal modification, there is a structural change of the Aβ molecule that is unique to N3pE, and this leads to the formation of specifically structured oligomeric forms of Aβ, and these oligomers then exert extreme toxicity. Some studies, by us and for instance by the group of Dieter Willbold, indicate that there is a specific seeding capacity of these small oligomeric forms because of the structural change introduced by the pyroglutamate. Next slide, please. This is one of the key studies investigating exactly this oligomer formation and toxicity.
It is a very busy slide, but I would like to guide you through it, because it shows one of the key findings we made together with George Bloom at the University of Virginia. Please concentrate first on the left-hand side, which is a dot plot analysis of the oligomerization kinetics of full-length Aβ. What you see in the dot plot is that Aβ 1-42, the non-modified Aβ, quickly forms larger oligomers, 10-mers, 12-mers, or even larger, within a few hours. In the same time frame, shown in the middle, you see just by comparing the dots that with pyroGlu-Aβ, smaller oligomers, dimers and trimers, emerge, and these persist for a much longer time. And obviously, the larger oligomers, the 10-mers and 12-mers, are not formed.
And interestingly, and this is the intriguing factor here and the special role of pyroGlu, if you co-aggregate both Aβ 1-42 and pyroGlu-Aβ, shown on the right-hand side, you do not see the characteristics of Aβ 1-42; instead, you see directly that small amounts of pGlu-Aβ induce the same oligomerization behavior as pyroGlu itself. So, similar to prion activity, pyroGlu induces the formation of extremely toxic oligomers. As shown on the very right of the slide, the fractions that contain these small oligomers have higher toxicity to neurons, and down below on the left-hand side, you see that the fractions containing pyroGlu exert a strong influence on long-term potentiation, which is a measure of neuronal synaptic function. Next slide, please.
However, these oligomers obviously do not only act on neurons directly. It was shown by the group of Michael Kreutz from the Leibniz Institute for Neurobiology in Magdeburg that these oligomers directly induce the release of cytotoxic cytokines, especially TNF-alpha, from astrocytes. He could show that if you cultivate astrocytes and stimulate them with pGlu-Aβ oligomers and with Aβ 1-42 oligomers, only the pyroGlu forms are able to trigger the release of TNF-alpha, a highly cytotoxic cytokine. And this conditioned medium, as shown down below, then induces synaptic dysfunction. This clearly suggests that the oligomers not only act on neurons directly, they also act on the inflammatory conditions in the brain. Next slide, please. This was a very recent study.
It just came out last month, but I included it because it is a very intriguing study showing that pGlu-Aβ also directly induces astrocyte toxicity by influencing the autophagocytosis of lysosomes in astrocytes. The pyroGlu forms damage the lysosomes in the cell, the proteases get out, and this induces synaptic damage. Next slide, please. The last study I would like to present, before I sum all of these up into a bigger picture, was done by Johannes Attems at Newcastle University. He did a very interesting study on human Alzheimer's post-mortem brain, in which he compared the distribution and concentration of pGlu-Aβ with paired helical filaments of hyperphosphorylated tau.
What he observed, shown here on the right-hand side, is that there is a significant correlation of the pGlu-Aβ load with the emergence and load of paired helical filaments in the brains of these patients. This clearly suggests, as we have also seen before in other studies, that pGlu-Aβ toxicity is dependent on the presence of tau. Obviously, pyroGlu and tau also have a direct connection, in that pyroGlu induces tau hyperphosphorylation and paired helical filament formation. Next slide. I know this is again a scheme, but it summarizes, in a somewhat complicated way, all the findings and key studies I have presented to you before. Let's have a look first at the top. There is APP processing going on in the brain from cradle to grave, and this obviously does not change too much.
Usually, there is formation of full-length Aβ 1-40 and 1-42. Part of these Aβ molecules is transported out of the brain into the bloodstream. However, it has been shown by many researchers that a lot of these molecules are degraded by insulin-degrading enzyme and neprilysin, two enzymes shown here as IDE and NEP. So usually this is a detoxification; if everything goes well, the brain gets rid of the Aβ. However, during aging, these proteases, insulin-degrading enzyme and neprilysin, are downregulated, leading to increased accumulation of full-length Aβ and to an alternative pathway of APP and Aβ degradation. This leads to truncation; some of the culprit molecules, proteases like APA and meprin, are shown here. These result in truncated Aβ, and this generates the substrate for glutaminyl cyclase, which is highly expressed in the brain, also under physiological conditions.
This QC activity generates the culprit, pyroGlu (N3pE) Aβ, and a very recent study has shown that pGlu-Aβ directly inhibits insulin-degrading enzyme, which leads to even more accumulation of N-truncated Aβ. The formation of N3pE triggers oligomers, as we have seen, and these directly influence neuronal physiology. However, it also triggers inflammation, and this inflammation leads to upregulation of QC, again reaching a point where more pyroGlu is formed. So we end up in a vicious cycle. Next slide, please. It is like a system, a toxic parody, as I call it, of N3pE and glutaminyl cyclase activity that ends up in the picture that we see: accumulation of N3pE, neuronal dysfunction, neuronal death, and more QC activity.
And this vicious cycle is the working hypothesis for QC inhibition in Alzheimer's disease, and the rationale for the development of our glutaminyl cyclase inhibitor as a treatment in Alzheimer's. Next slide, please. Thank you very much for your attention, but I don't want to close this talk without thanking the many people who were partners in all these studies, especially Hans-Ulrich Demuth, Dagmar Schwenk, and Holger Cynis, former collaborators at Probiodrug and Fraunhofer. Many thanks to all the partners at Probiodrug and Vivoryon for years of very nice collaboration. And as you can see, there are many more partners throughout the world who contributed. Thanks to all, and thanks for listening.
Thank you very much, Professor Schilling. This was really very clear and such a well-put rationale for all that is to follow, and for why Probiodrug and Vivoryon have pursued this path for such a long time. We are now going to listen to Professor John Harrison. He is a professor of neuropsychology at King's College and also in Amsterdam, and nowadays also CSO of Scottish Brain Sciences, and he will dive into some of the endpoints that are used in the current trial. Professor Harrison, the floor is yours.
Thank you, Professor Scheltens, and thank you to Vivoryon for this opportunity. So let's proceed straight to the next slide. This is simply a declaration of my perceived or actual conflicts of interest. I won't dwell on the slide today, but this is freely available online at some of the links that I'll share at the end of today's presentation. The most critical one for today's purposes is that I have previously received remuneration from the sponsor, Vivoryon, for my participation in their trials. And if you go to the next slide, this is broadly what I'd like to achieve in the next 10 minutes or so. So I want to talk a little bit about cognition. That's front and center for me. Cognition has really been my principal interest in Alzheimer's disease for now, very nearly 30 years.
I want to talk about meeting the challenge of developing new treatments for this disease. And in this, I don't think I'm going to be saying anything very surprising. I think largely what I'm going to be presenting is simply a combination of rational drug development with the application of clinical evidence. So I don't think there's anything surprising; it's really just a very standard and sensible approach to the development of new therapies. And as always, I'll end with some summaries and some conclusions. So if we get started with the next slide, let's think a little bit about cognition in early Alzheimer's disease and even earlier, in mild cognitive impairment. There are really two messages from this slide. The first is encapsulated on the right-hand side. So here are the areas of cognition.
If you think of cognition as essentially thinking, we find it helpful to divide cognition into a variety of different subtypes. I'll talk a little bit about how we measure these in a later slide. For the purposes of this slide, I just want to really quickly outline what we mean by episodic memory, essentially your capacity to encode and remember new information. Working memory is the part of your cognition where you would do problem-solving. Attention is just a psychologist's fancy term for concentration. Executive function is sort of an umbrella term, but what we really mean by this is your ability to organize, to plan ahead, and to think out of the box in problem-solving. So quite a few things, but broadly referred to as executive function. Praxis is your capacity to operate meaningfully on your environment, and language is your capacity for communication.
So in very broad lay terms, that's really what we mean by cognition. The other key message to take from this slide is that in the very earliest presentations of Alzheimer's disease, what we refer to as mild cognitive impairment, we have a taxonomy which acknowledges that people presenting at clinic can do so with very different cognitive profiles. So I think we're used to thinking about Alzheimer's disease and MCI as a disorder of episodic memory, and that's often why we would see people present at clinic. However, it's very important to acknowledge that it's actually a disorder of cognition, so it can be any one or any combination of these areas of cognition on the current slide that could be impaired on first presentation.
A very substantial proportion of people that come to see us do so because they or somebody close to them have acknowledged that there are some changes in their memory, but lots of people come to see us because their executive skills are starting to fail, they're finding it difficult to concentrate, and a variety of other combinations. So key message: Alzheimer's disease is a disorder of cognition, and that includes memory, but it is not limited solely to memory difficulties. And if we go on to the next slide, this is essentially a critical review of the ADAS-Cog. This is the key outcome measure used in a variety of Alzheimer's disease studies historically, and I want to give a very balanced interpretation of its use. So I'm going to begin by saying that the tests of memory that compose the ADAS-Cog are, I think, not bad tests.
So people that are age-matched to Alzheimer's disease patients don't perform perfectly. The blue bar here in the first three lines illustrates that there is a memory deficit, that people don't perform perfectly, and it also shows that patients with MCI do worse than normal individuals, and that patients with Alzheimer's disease do even worse than patients with mild cognitive impairment. So a very predictable and expected pattern. And the ADAS-Cog memory tests do have the virtue of capturing what's important and interesting to us in the domain of memory. The challenge, unfortunately, comes if you look at the other subtests listed here on the Y-axis: you can see that there's essentially very little or no difference between the performance of age-matched normal controls and patients even in the mild stages of Alzheimer's disease.
So to many of us, this is a slightly odd thing to have selected if there are patients performing perfectly at the beginning of your study. And keep in mind, there's only one version of the ADAS-Cog, so even if they don't know what a stethoscope is on the first visit, they can learn it for subsequent visits. If they're at perfect performance at baseline, and they stay there for the duration of your study, no matter how good your drug is, it would be impossible to see improvement on these measures. The really key thing to take from this slide is that there are no ADAS-Cog subtest measures of executive function, attention, or working memory, three areas of cognition that we know from clinical experience can be compromised very, very early in the disease process.
So again, reinforcing the idea: Alzheimer's disease is a disorder of cognition, and the ADAS-Cog is a good test of memory, but sadly a very inadequate test of the other key areas of cognition that we know can be very important. The final thing to say about this slide is that the idea that we might take a raw score for a test and compress it into a scale score is very well established in our use of the ADAS-Cog, and that's essentially a methodology that we've carried forward into other test batteries, like the Neuropsychological Test Battery. So some fun summary facts about cognition. Alzheimer's disease is a disorder of cognition, not just memory. Worth mentioning also that healthcare practitioners, when they comment on the consequences of treatment with already marketed drugs, mostly report evidence of improvement in executive function.
There are improvements in memory, but the most conspicuous improvement reported is in executive function and working memory. Also worth mentioning that in previous studies, perhaps of drugs like galantamine, where the ADAS-Cog was used and didn't show evidence of efficacy, tests like Digit Symbol Substitution, which I'll be talking about today, have previously demonstrated positive treatment effects. So some very important facts. Moving on from cognition now to our understanding of what goes on from a regulatory perspective: very important to say that there's no evidence, and I've never seen it written down or even said, that either the ADAS-Cog or the Clinical Dementia Rating Scale Sum of Boxes is mandated for use. And if you look at the 2018 guidance from the FDA, in fact, neither of those two measures is even mentioned in the text of that document.
However, in contrast to the absence of mention of the ADAS-Cog and the CDR, sensitive neuropsychological measures are repeatedly stressed as being important and helpful in our understanding, characterization, and measurement of new treatments for Alzheimer's disease. So very important, just to set the story straight in terms of regulators' expectations about what would be required to demonstrate cognitive efficacy of any new compound. And the Digit Symbol Substitution Test, which I will talk about, is a test with an extensive history of use in Alzheimer's disease. It is acknowledged by the European Medicines Agency as a timed executive function test, and it's always very helpful to have recognition from regulators that a test is an appropriate measure for use in Alzheimer's disease. So in developing a new drug for Alzheimer's disease, what's the challenge that we have to meet?
This is a very quick account of the history of this particular compound and the approach that the sponsor has adopted in seeking to demonstrate efficacy. In the phase II-A study here referenced on the right panel, we did a very sensible proof of concept study. We evaluated all of those key areas of cognition that were discussed earlier in my presentation, and the best evidence we had is that the positive treatment effects tended to be seen on tests of attention and working memory. Now, that's not to preclude the possibility of improvement on episodic memory and other domains of function, but the most conspicuous evidence we had was that attention, people's ability to concentrate, working memory, their ability to solve problems, seemed to be the two areas where there was the most beneficial effect.
It makes perfectly good sense, I think, from a proof of concept study, where you have that evidence to then look to replicate in a phase II-B study, the ongoing study that we're discussing today. So the approach we adopted to meet that challenge was to select people based on difficulties with attention and working memory. The Digit Symbol Substitution Test, also known as the coding test, was our methodology for doing so. In the current study, you have to demonstrate an impairment on that test in order to qualify for inclusion in the study, and then to seek to measure efficacy of attention and working memory using other well-known tests of cognitive function. So the tests that we chose to use are Cogstate tests.
I'll come to a description of those very shortly, but before I do so, this slide is a very helpful place to start. So we've understood that attention is a key area of cognition, very relevant to clinical meaningfulness and functionality, but also one which we have a very reasonable expectation will show progressive decline in people with Alzheimer's disease. I think this is not surprising: progressive cognitive decline in a progressive neurodegenerative disorder, where cognitive impairment is the prima facie presentation, is a perfectly reasonable expectation. But this is a really helpful picture because it shows that, in fact, in the study, progressively across a very long period of time, every single individual with Alzheimer's disease did decline on a key test of attention. And you can see the control group here, shown in black at the bottom, who declined at a very modest level.
I'm afraid that's the bad news for all of us. We all do get a little bit worse on these tests, even if we don't have Alzheimer's disease. The important message here: people with Alzheimer's disease do progressively get worse at a rate that exceeds expectation. If you move to the next slide… What we'll see here is just a very quick picture of the kinds of measures that we've adopted for use in the primary outcome measure, as well as the secondaries. These are illustrations of the tests that we routinely use in clinical and experimental psychology. The detection test is a simple reaction time test. What happens is, as the card turns over, the study participant must hit a button on the computer as quickly as possible. In the second test, if the card turns over and it's red, they press one button.
If the card turns over and it's a black card, they press a second button. The third measure is, have you seen this card before? So you're shown a sequence of playing cards, and if you have seen the five of hearts previously, you press yes, and if you think it's the first presentation, you would press no. And finally, we ask you the question: Is the card you're currently looking at, the one that you just saw? And that's a very good test of working memory. But the premise here is these are very familiar materials, the instructions are very simple, and the levels of compliance and accuracy that we get, even from patients in these mild stages of Alzheimer's disease, is remarkably high.
It's very important that we employ tests that people understand, as well as tests that allow them to perform as well as they possibly can. So those are the tests we used. This is a very general gestalt picture, just to illustrate that there's nothing remarkable about the use of these kinds of measures in studies of patients with Alzheimer's disease. This is a selection of a few that I could have shown you, but it's really just here to make the point: these kinds of metrics are very, very commonly used clinically, scientifically, and academically, and routinely also in clinical drug trials, of which the current study is simply one example. So if you move on to the next slide, this is some very preliminary blinded analysis of the Cogstate data that we discussed a couple of slides ago. Some important caveats.
The trial is ongoing. What you see here is a characterization of performance at baseline for the entire cohort. The end-of-treatment cohort is a modest-sized sample, not comparable in size to the baseline cohort: here you see baseline data based on 250 individuals, while at end of treatment we have 36 individuals who have reached that stage. But the message here is essentially that the expectation of progressive decline has been met on the evidence to date. So we picked out some measures that we had high expectations patients would decline on, and indeed, that seems to be the case. And if we move on to the next slide, this is just a test-by-test illustration of that general principle.
So you can see that on all the metrics that have been adopted, and the three to the left comprise the primary efficacy measure, measuring attention and working memory, what we're witnessing so far is that there has been a decline from baseline, and that therefore provides us with the opportunity to see whether treatment has rescued performance. If we go on to the next slide, this is really just my attempt to illustrate that there's nothing particularly radical about what's been done in this study. If you look at traditional measures and traditional trials, what we do is enrich the population for memory difficulties. That's usually because memory is the target for our efficacy expectation.
To get the right kinds of people in, we use a test ostensibly of memory, the Mini-Mental State Examination, and then we measure memory on a different scale, and the ADAS-Cog is usually the test of choice. What we had, based on the evidence of the proof-of-concept study, was that attention and executive function seemed to be the core, key areas of interest. Simply mimicking the traditional methodology, we then selected based on performance on a test of attention and executive function, the coding test rather than the MMSE, and we sought to measure efficacy with valid, reliable, and sensitive measures of both of those cognitive domains. The argument I want to make here is that there's nothing radical or unusual about this study.
What we're doing is playing to the evidence that we saw in the proof-of-concept study to maximize our chances of detecting efficacy where present. And if we move on to the next slide, this is simply a very quick summary of the things that I've just presented. Very, very important: I want to make this message as a clinician and a researcher, as well as somebody involved in drug development. Alzheimer's disease is a disorder of cognition, which includes memory, but it also includes a number of other cognitive domains, all of which are worthy of rescue with treatment, if we possibly can.
Deficits in executive function and attention are highly clinically relevant. They often correlate well with measures of function, which my colleague, Dr. Sikkes, will talk about later. They're very prevalent. We've established that with our baseline analysis. On the basis of evidence to date, they are capable of being rescued with marketed drugs. At that point, I'm going to thank you for your attention and pass back to the chair. Thank you, everybody.
Many, many thanks, Professor Harrison. You surely kept our attention lively for at least 20 minutes. A very, very clear presentation, and you've made some very important points. And I think the next speaker will also detail what it's all about. Cognition ultimately has to lead to function, and hopefully improved function; loss of cognition leads to a loss of function in Alzheimer's disease. So it's very important to measure function. Sietske Sikkes is an assistant professor of psychology at the Faculty of Psychology and also at the Alzheimer Center Amsterdam, and she has done a lot of work on measuring function in daily life. I will give her the floor now. And please get your questions into the chat, and we'll address them at the end.
Well, thank you, Philip, and I'm very happy to present our work here today. Actually, when I started my day today (we always start with sharing some good news), my good news of the day was that I was here to present this work to you. So when we consider outcomes that matter to patients, people living with dementia, and their caregivers, everyday functioning and functional impairment often pop up as an important aspect. And what is functional impairment? When we look into that in a little more detail, we often come up with the instrumental activities of daily living: those activities of daily living that are affected by cognitive problems. One could think of handling finances, cooking, but also handling everyday technology, such as a mobile phone, for example.
My colleagues, now led by Leonie Visser, also asked patients and their care partners what outcomes mattered to them with regard to prognosis, and there, too, it became clear that cognition and these instrumental activities of daily living were important. This is also in line with what is shown on the next slide: the AD-PACE Initiative, which captured what matters most to patients and caregivers. What you see in this figure is that they tried to map existing legacy instruments to what matters most to patients and caregivers. In red, you see depicted an exact match with an instrument.
In blue is somewhat of a match, and as you can see, the majority is in white, which indicates that it is not covered by legacy instruments. That is actually really important: there are outcomes that really do matter, and, remarkably, they are not adequately captured. As you can see on the next slide, we built upon that, because that was also our observation in clinical trials. What we found was that when you select the optimal outcome measure, you should work your way towards the core quality characteristics of outcome measures. But it starts, of course, with the mode of action, the target population that you're studying, and then the relevant cognitive and functional domains.
Then you go on to selecting relevant tests and items and look at the quality characteristics of these tests and items, instead of using legacy instruments simply because they have always been used. This leads to the work that we named "Why a clinical trial is as good as its outcome measure." We observed that there were indeed some limitations, and we highlight these in this table. There's no need to go through this table, but it's good to know that the authors of this paper have thought about putting it on a T-shirt and wearing it to increase visibility. There are basically three main challenges: what to measure, how to measure, and who to measure.
So when we look at what to measure, what we see is that the tests that are used should actually focus on the cognitive processes relevant to the specific disease stage that you're studying. As for how to measure, we often observed that tests were not sensitive to disease progression in the early stages, and also that it is unclear whether the changes you observe in tests are clinically meaningful. The last one was who to measure, and that relates a lot to cross-cultural application: you would want your test to be translated not only to other languages, but also to other cultures.
From these lessons learned, I would like to provide an example, which is the Amsterdam IADL Questionnaire, and I would like to highlight a couple of studies in which we try to tackle these current issues that we face in clinical trials. The Amsterdam IADL Questionnaire was actually based on a clinical observation that a lot of the existing instruments were not really able to capture the subtle impairments that we saw in everyday functioning. This scale is completed by a study partner, and it encompasses activities that also relate to everyday technology. They do incorporate a little bit more of the modern everyday activities. We selected activities applicable for both male and female, but also suitable for a broader age range and not only the oldest old.
It has a digital administration, and the scoring is based on item response theory, and I will come back to that a little later. But what was most important for the quality of the instrument is that we developed the content in collaboration with clinicians and, most of all, patients with early-onset dementia and their caregivers, to ensure that the activities we included were clinically relevant. A little background on the type of scoring, which also relates to the relevance of the scale: when we developed the items, we basically modeled the difficulty level of all the different activities. So, when we developed a shorter version of the scale, we could model all the activities and select those that covered the entire range of everyday functioning.
You see two examples highlighted here. When you look at the figure on your left-hand side, you see paperwork highlighted, which turned out to be a very complex activity. On the other hand, on the right-hand side, you see the activity paying with cash, which turned out to be a relatively easy activity. By looking at all the activities in the whole breadth of everyday activities, we selected activities that were relevant to patients and their caregivers, but also were relevant in capturing the entire spectrum of everyday functioning. We did that across cultures so that we would capture activities that were relevant to different cultures. If we then go on to the next slide.
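To make the item-response-theory idea concrete, here is a minimal sketch of a Rasch-style logistic model. The difficulty values and the helper name `p_intact` are hypothetical, chosen only to mirror the paperwork versus paying-with-cash example; the actual Amsterdam IADL Questionnaire uses a more elaborate polytomous model.

```python
import math

def p_intact(theta, difficulty):
    """Rasch-style probability that a person with functional ability
    `theta` (in logits) still performs an activity of the given
    `difficulty` without problems."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

# Hypothetical difficulty values (logits): complex paperwork is lost
# early in the decline, while paying with cash stays intact much longer.
items = {"managing paperwork": 1.5, "paying with cash": -1.5}

for ability in (1.0, 0.0, -1.0):  # progressively more impaired
    row = {name: round(p_intact(ability, b), 2) for name, b in items.items()}
    print(f"ability {ability:+.1f}: {row}")
```

Selecting items whose modeled difficulties span the whole range is what lets a shortened scale stay informative across the entire spectrum of everyday functioning.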
Of course, what we really want to know is that the scores that we generate are also clinically meaningful, and part of that is based on the content validity. A first exploration showed that the Amsterdam IADL scores corresponded to the global impression of everyday functioning, as rated by the caregiver but also by the clinician. In the figure on the right-hand side, you can see how these global impressions corresponded to the different Amsterdam IADL scores. We also demonstrated a high test-retest reliability. Another important aspect is the construct validity, which means that the scale should relate to other scales that you would expect it to relate to, and not relate to potential sources of bias. Here on the left-hand side, you see that we found that it related to AD-specific neurodegeneration.
On the right-hand side, you can see that the scale distinguished between different diagnostic groups, and that this was already detectable in the early stage of subjective memory complaints. Other than that, we found that the scale was related to cognition and to traditional measures of everyday function and quality of life, but unrelated to age, education, gender, and mood, which are often seen as potential sources of bias in everyday functioning. Then another important aspect, and this is a bit of a busy slide, but I will walk you through it, is the sensitivity to change. When we're developing an outcome measure, we want to know that it is sensitive to change, that it can actually detect changes over time.
I'm showing the results of two studies. The CatchCog study is an observational cohort study where we performed a longitudinal construct validation, and you see the results on the bottom left. The results were most pronounced in the MCI group, the mild cognitive impairment group. In pink is the Amsterdam IADL, where you see a decline over time, and we compared that to the ADCS-ADL, one of the legacy instruments, where we saw no change over time. This means that within one year, where you would expect some decline, it was not detected.
In purple, you see the CDR sum of boxes, which actually moved in the direction of improvement over time, so the people appeared to become better over time, which is surely not an effect you would expect. On the right-hand side, I'm showing you some results from the TARIELL study, where two analyses were performed. First, we looked at whether it was possible to discriminate between patients with prodromal Alzheimer's and mild Alzheimer's disease, and we showed that the Amsterdam IADL, compared to the ADCS-ADL, was better able to distinguish between these groups. When we looked at the change over time, and you see that figure depicted here, we could see that actually all scales measured a decline over time, but the Amsterdam IADL specifically had a higher dynamic range in detecting change over time.
This together demonstrates that, specifically in these early stages, the Amsterdam IADL is more sensitive to change over time. But then a next question arises. We can detect changes in IADL, and in this figure you also see a steeper decline over time as you go from patients with subjective cognitive decline to MCI to dementia. That decline is statistically significant, but is it also clinically meaningful to patients and their caregivers? To answer that question, we performed another study, which can be seen on the next slide. Here we looked into clinical meaningfulness, and by using a novel qualitative mixed-methods approach, we determined what amount of change caregivers and clinicians considered important.
We came up with two cutoff values that we subsequently validated in 200 memory clinic patients, where we showed that about half showed a meaningful decline, and that this meaningful decline was also associated with disease stage and medial temporal lobe atrophy. This is truly a new approach to defining clinical meaningfulness. Another aspect is whether we can translate that to different cultures, which is relevant given how everyday activities differ. We validated this across 8 countries in Europe and the USA, and we saw some differences in endorsement of activities. Driving, for example, is more common in the US than in Europe, but we saw no evidence of a systematic bias. What this means is that we can truly compare scores across different countries and cultures.
A final note on what is actually an important project of the past year, in which we compiled normative data for the Amsterdam IADL. In this figure, you see depicted the demographically adjusted normative data, and this is an important step for applying the Amsterdam IADL in clinical use as well. With that, I would like to provide a summary. I showed some evidence with regard to the content validity, the reliability, the construct validity, the responsiveness, and the cross-cultural validity.
What this all brings together is that it contains relevant activities that matter to patients and caregivers, and it has a very high psychometric quality, meaning that if you see a signal on the scales, it truly means that there is a signal, and it is not due to potential other factors. So with this, I would like to end, and I have some recommended reading as well, and of course, a thank you to the team and the many collaborators. Thank you.
Thank you very much, Dr. Sikkes. Clearly a really very well-validated scale, but I'm a bit biased, coming from Amsterdam and from the center, of course. We'll hear more about it as more studies use the scale, and there will be more results from the practice of clinical trials as well. I would now switch to something completely different: EEG, which many of you may know from epilepsy detection and perhaps from sleep studies as well.
But EEG is also really very well capable of detecting changes in synaptic function in the brain, including under the influence of certain interventions. I think EEG is upcoming as a biomarker in clinical trials, and we have Dr. Willem de Haan, a neurologist and clinical neurophysiologist working in Amsterdam at the Alzheimer Center, to tell you all about this exciting technique. Willem, the floor is yours.
Thank you, Philip. Yeah, it's my great pleasure to talk to you today and share my enthusiasm for using electroencephalography, or EEG in short, in dementia. In our clinic, we have been doing so for about 20 years, and we think it can be a very valuable tool for diagnostic purposes. But for today, of course, the main question is: is it also useful as an effect monitoring tool? These are my disclosures. With our EEG lab, we perform central EEG analysis for various trials, but I receive no personal compensation from them. I'm fully employed by the Academic Hospital in Amsterdam, and I do have a few personal scientific grants on a different topic.
So why do we want to perform EEG in Alzheimer's disease or in other forms of dementia? EEG is a technique that looks at large-scale brain activity, or function, and if you think about the whole disease mechanism, it's very obvious why we would be interested in activity. We know that brain pathology, the structural damage like amyloid deposition, influences and disturbs neurons: their activity is changed, their behavior is changed. They will start communicating differently, and on a large scale, the brain will be acting differently. And of course, that leads to all the cognitive symptoms that we know so well.
So it's logical to look at brain activity, not just for understanding the full disease mechanism, but also to see if we can extract markers from it that we can use for various purposes. First, I want to briefly go back to the basics of EEG. What are we actually measuring with EEG? I would like to focus your attention on the middle picture here, a close-up of part of a neuron. It's not fully shown, but we see the cell body, and the branch on the right is the axon, along which the action potential, the firing of the neuron, will travel when it passes on a signal to other neurons.
All the branches on the left are the dendrites with the incoming pulses, and you see a few synapses of other neurons. This is what a neuron is actually doing: it's summing up all this incoming information. So we have a lot of excitatory information, but also inhibitory pulses, and these, summed up, will decide if this neuron will actually fire and pass on a signal to others. This is very relevant, of course. Please, could we go back to the previous slide? Yeah, thanks. So this is very relevant if we are interested in drugs that actually act on the synapse or on the neuron itself. And if you focus on the left picture, I do realize it might be a bit small, but...
When a neuron is active, it generates a small electromagnetic field, and from one neuron, we would not be able to measure this from the outside with an EEG electrode. But because these neurons are nicely aligned perpendicular to the scalp in large quantities, together they generate a larger electromagnetic field, and this is something we can pick up with our electrodes from the outside. Now, the picture on the right is a typical single page of EEG signal, which is what we are looking at a lot of the time. We see about 20 wavy lines, 20 oscillatory patterns. These are 20 electrodes from different regions of the brain, and this is about 10 seconds. In these waves, you can see really a lot about the patient.
You can see if somebody's awake or sleeping. You can see if there are any healthy or pathological states going on, and for dementia, we mainly look at slowing of brain activity. To briefly mention the different types of analysis that we do on EEG: fortunately, we're not just visually inspecting the data; we have software that can extract quantitative measures from the activity. On the left, you see the different types of analysis we do, and the main one, the most important one, is the level-one, oscillatory or spectral analysis, which really means that we're looking at local activity all across the brain.
Then more advanced levels are connectivity, or functional connectivity, analysis, where you can also look at the interaction between regions: how strongly are regions communicating? We are able to quantify that from the signals. You can then go even further and look at this whole network of interactions as a kind of dynamic network and see how efficient it is, but that's the one I will leave out of the talk for today. On the right, you see something important as well, because when we're talking about activity, we're always talking about frequency bands. What do they mean? Here I've shown you the five major ones, going from the fast gamma activity at the top all the way down to the slowest delta activity.
The reason why we analyze EEG activity in these frequency bands is that they have different meanings. So we know that some of these bands are tied to specific cognitive processes, some are really tied to pathology, some might be tied to routing of information, and some might also be really tied to artifacts that are happening in the data. So they can have multiple functions, multiple causes. I think in general, it's good to remember that the fast bands, so the gamma, the beta, and the alpha bands, they are the good, fast activity that you actually want to have. These slower theta and delta bands are the slow bands that you would see in sleep, but in an awake person, they are usually a sign of pathology.
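As a rough illustration of this kind of level-one spectral analysis, here is a minimal sketch in Python that estimates the relative power in each of the five bands for a single channel using Welch's method. The band edges are common conventions and the synthetic 6 Hz test signal is an invented stand-in for pathological theta slowing; this is not the study's actual EEG pipeline.

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}  # common band conventions

def relative_band_power(x, fs):
    """Relative power per frequency band for one EEG channel (Welch PSD)."""
    freqs, psd = welch(x, fs=fs, nperseg=4 * fs)   # 0.25 Hz resolution
    broad = (freqs >= 0.5) & (freqs <= 45)
    total = trapezoid(psd[broad], freqs[broad])
    return {name: trapezoid(psd[(freqs >= lo) & (freqs < hi)],
                            freqs[(freqs >= lo) & (freqs < hi)]) / total
            for name, (lo, hi) in BANDS.items()}

# Synthetic 10 s channel dominated by a 6 Hz (theta) rhythm plus noise,
# mimicking the pathological slowing described in the talk.
fs = 250
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 6 * t) + 0.2 * rng.standard_normal(t.size)
rbp = relative_band_power(x, fs)
print({k: round(v, 2) for k, v in rbp.items()})
```

For the synthetic signal, the theta band dominates the relative power, which is exactly the kind of quantitative summary that visual inspection of the raw traces cannot provide.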
In Alzheimer's disease, for example, we see a slowing of the activity, so we see a gradual increase of mainly the theta activity, and I will come back to that. This is again the local analysis, so we're trying to look at this slowing, and the best place to see that is actually in the back of the brain. If you look at the head plots on the lower right, you see six head plots of the different frequency bands: the slow delta and theta activity, and then the higher bands, alpha, which is split in two here, and beta and gamma.
What you can see is this red blob in the back of the brain in the alpha bands, and that's actually the nice and healthy alpha peak around 10 hertz, a rhythm that we want to see in healthy people. What we see in Alzheimer's disease is that this actually slows down very gradually over years, towards 9 and then 8 hertz, and when we're getting near 8.5 or 8, that's definitely pathological, and it goes down even further. So this is one of the measures that we actually use to quantify this slowing of brain activity. Now, that's the natural course in Alzheimer's disease, and one of its most robust signs, but of course, the important question for now is: does that also help you with effect monitoring?
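The peak alpha frequency described above can be estimated directly from a power spectrum. This is a simplified, hypothetical sketch (single channel, synthetic signal); real pipelines typically average over posterior channels and multiple artifact-free epochs.

```python
import numpy as np
from scipy.signal import welch

def peak_alpha_frequency(x, fs, lo=7.0, hi=13.0):
    """Frequency of the largest PSD peak in the (extended) alpha range."""
    freqs, psd = welch(x, fs=fs, nperseg=4 * fs)  # 0.25 Hz resolution
    sel = (freqs >= lo) & (freqs <= hi)
    return freqs[sel][np.argmax(psd[sel])]

# Synthetic posterior channel whose dominant rhythm has slowed from the
# healthy ~10 Hz to 8 Hz, as described for Alzheimer's disease.
fs = 250
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 8.0 * t) + 0.3 * rng.standard_normal(t.size)
paf = peak_alpha_frequency(x, fs)
print(f"peak alpha frequency: {paf:.2f} Hz")
```

A result near 8 Hz would fall in the range the speaker calls definitely pathological, whereas a healthy channel would peak around 10 Hz.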
Would you expect that, if you introduce a drug, you would be able to reverse this slowing? Here you see a relatively old study, from more than 20 years ago, by Adler and Brassen, where they looked at EEG after the use of cholinesterase inhibitors, which are currently still the only clinical drugs that we have available. They really showed nicely that the patients who responded favorably to the cholinesterase inhibitors clinically also showed a decrease in delta and theta activity. Here you see delta on the left and theta on the right.
When you see the two lines, you might think, "Ah, this is placebo versus treated," but this is all in the treated group; the colors split the two hemispheres. And although the scale and the numbers are positive, it is actually a logarithmic scale. So this means that there was actually a decrease in the delta and theta activity, nicely accompanying the clinical improvement. So yes, there was a partial reversal of this slowing of brain activity in the old cholinesterase inhibitor trials. Then, for a long time, EEG was not used that much for clinical trial purposes.
Fortunately, in recent years it is being used more and more again, as Philip already mentioned, and in this regard, I think the SAPHIR study was very important, because it is actually the first large trial where we were able to see, in the treated group, a stabilization of the theta power, so actually a reduction of the slowing. The absolute changes are very small, but the gradual increase over time during the disease is also a relatively small change. The fact that it stabilizes, and that part of the people actually improved, is really different from the spontaneous course, and that really has to be attributed to a treatment effect. So that was quite exciting after a 12-week period.
The next slide is also fairly busy, but it shows that we also looked at functional connectivity with a dedicated measure, and there we saw that in the good alpha bands, the faster activity bands, there was actually an improvement of the functional connectivity, also after this 12-week period. This had not really been seen before with any of the other amyloid-targeting compounds. So I think this is very exciting, and of course, it is also the reason why, now again in the VIVIAD study, we're looking at the EEG outcomes.
So, yeah, I think the SAPHIR study is really important in that regard, because it confirmed our belief that the theta power is a very robust measure, not just for diagnostic purposes, but also for treatment monitoring. It also helped us with the sample size calculation and with estimating the effect size. So the theta power as a measure of the slowing remains very important, but we have also looked at a lot of other exploratory measures of connectivity and network organization, so we have a chance to see different characteristics of the data, to better understand what the effects actually are, and potentially also to find new EEG markers in there.
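As a back-of-the-envelope illustration of how an observed effect size feeds into a sample size calculation, here is the standard normal-approximation formula for a two-arm comparison. The effect sizes shown are hypothetical placeholders, not the values actually used to size VIVIAD.

```python
from scipy.stats import norm

def n_per_arm(d, alpha=0.05, power=0.80):
    """Normal-approximation sample size per arm for detecting a
    standardized mean difference d (Cohen's d) between two groups."""
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance level
    z_beta = norm.ppf(power)           # desired statistical power
    return int(round(2 * ((z_alpha + z_beta) / d) ** 2))

# Hypothetical standardized effect sizes, small to large.
for d in (0.3, 0.5, 0.8):
    print(f"d = {d}: about {n_per_arm(d)} per arm")
```

The formula makes the trade-off explicit: halving the expected effect size roughly quadruples the number of patients needed per arm.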
And then, of course, we will also look at correlations with neuropsychological test scores, because, as we saw in the SAPHIR study, there was a nice correlation between functional connectivity and the One Back test for working memory, as John Harrison already showed you. So, to wrap up, I hope to have shown you in this brief talk that EEG is a relatively direct representation of synaptic and neuronal function, and that it is a widely available, patient-friendly tool. We're still quite enthusiastic about the potential of EEG.
Looking at the EEG parameters, we have seen so far that a reversal of the slowing that you see in Alzheimer's disease can accompany a positive clinical effect, and we presume that this is also beneficial for brain function. There is not a large body of evidence yet from clinical trials using EEG; SAPHIR has been one of the important ones, and fortunately, more and more results will be coming. At the upcoming CTAD conference next week, there will be more, and we also have a few general posters there about the implementation of EEG in clinical trials. So if any of you are attending and interested, definitely feel free to come along and talk with us. Thank you for your attention.
Thank you very much, Dr. de Haan. Very, very interesting to see how this develops over time. I think we have now sort of covered all the groundwork of the Vivoryon program on varoglutamstat, the basic science, some of the endpoints that are being used in the SAPHIR study, but also in the VIVIAD study and also in the VIVA-MIND study. So it's now time to go to Frank Weber, former CMO, now CEO of Vivoryon Therapeutics, to inform you on the current ongoing trials and the future plans of the company. Frank, the floor is yours.
Yeah. Thank you, Philip. Thank you to all the speakers before me. Quite impressive. I want to wrap it up quickly in order to leave some time for questions and answers. Please read the disclaimer carefully. Of course, I'm the CEO and have an interest in the company, I am paid by the company, and I hold options. So next, let's look at what this R&D is about today. It's about the VIVIAD study, and I'll read the title again, because it is a multicenter, randomized, double-blind, placebo-controlled, parallel, dose-finding study to investigate the safety, tolerability, and efficacy of the small molecule varoglutamstat in subjects with mild cognitive impairment or mild dementia due to Alzheimer's disease. So that is what we are all about, and the talks today were actually about how we go further.
Yeah, next slide, please. What we want to discuss today is actually how we measure the effect of varoglutamstat. What are the endpoints, how logically are the endpoints built into the study, and how do they work together? We have heard about cognition, which is the key primary endpoint. We heard about the activities of daily living, which is a key secondary endpoint. We heard about a pharmacological effect on the EEG, which is also a key secondary endpoint. What we didn't touch on today is the safety part; we reported on that on previous earnings calls. But of course, we also measure safety. So these are the key endpoints of the VIVIAD study. Moving forward to the next chart, please.
Let's not forget that there was, and is, a dose-finding part integrated into that study. After 90 patients were randomized and treated for 24 weeks, the DSMB decided, already in 2021, that 600 mg is the dose to go forward with, that patients on 300 mg should switch to the 600 mg dose, and that newly randomized patients should start directly on the 600 mg dose after completing their 12-week titration period. That leads, at the end, to approximately 75% of the treatment weeks in the active arm being covered by 600 mg twice daily and approximately 25% by the 300 mg dose.
So it's a little bit of a mix we see at the end, but it will be predominantly dominated by the 600 milligrams twice daily. Also keep in mind that the 600 milligrams twice daily is given at the end of the study; everybody switched to it, so everybody had exposure to it. Next slide. The other particular feature of the study is the duration of the treatment, because the protocol stipulates that patients should continue treatment up to 96 weeks, or until the last patient randomized has completed 48 weeks of study treatment. So patients have variable durations of treatment, between 48 and 96 weeks, in this study.
The Cogstate cognition tests are performed every 12 weeks, and to analyze and compare the progression of the cognitive deficit, we will perform a slope analysis for each patient across all the tests derived from the assessments, then create group means and compare the group means between placebo and active. That is how we will analyze the data. And just to give you some preliminary information: fewer than 10% of randomized patients will have been treated for 48 weeks, about 30% for 60 or 72 weeks, and the majority, about 60%, for 84 or 96 weeks. This leads to a mean treatment duration of 80 weeks.
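Conceptually, this per-patient slope approach can be sketched as follows. The visit counts, decline rates, noise level, and group sizes below are invented purely for illustration; they are not VIVIAD data, and the study's actual statistical analysis plan is more elaborate than a simple t-test on slopes.

```python
import numpy as np
from scipy import stats

def patient_slope(weeks, scores):
    """Least-squares slope (score change per week) for one patient."""
    return np.polyfit(weeks, scores, 1)[0]

rng = np.random.default_rng(42)

def simulate_arm(n_patients, weekly_change):
    """Simulate per-patient slopes from 12-weekly visits of variable count."""
    slopes = []
    for _ in range(n_patients):
        n_visits = rng.integers(5, 9)        # roughly 48-96 weeks of visits
        weeks = np.arange(n_visits) * 12.0   # assessments every 12 weeks
        scores = weekly_change * weeks + rng.normal(0.0, 2.0, n_visits)
        slopes.append(patient_slope(weeks, scores))
    return np.array(slopes)

placebo = simulate_arm(100, weekly_change=-0.05)  # steady decline
active = simulate_arm(100, weekly_change=-0.02)   # slower decline
t_stat, p_val = stats.ttest_ind(active, placebo)
print(f"mean slope placebo={placebo.mean():.3f}, "
      f"active={active.mean():.3f}, p={p_val:.3g}")
```

Because every visit contributes to each patient's fitted slope, this kind of analysis uses the full trajectory rather than only the baseline and final values.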
That design has certain advantages: overall, the study stops early after the last patient is randomized, but a high number of participants have a long treatment exposure, so you can see long-term treatment effects. Also keep in mind that the progression of the cognitive decline is very well represented by the Cogstate assessments every 12 weeks. So it is not a baseline-to-endpoint analysis in which the data in between remain unused; it takes all the data from each visit over the 2-year period into account and really builds the slope of an individual's cognitive decline over time. Next slide. This study is embedded in a very robust development program. Just to remind you, this study doesn't stand alone. There was a phase I before.
There was the phase II-A SAPHIR study, which was mentioned a couple of times. Now we are in the phase II-B VIVIAD study, and keep in mind that we have a parallel ongoing phase II VIVA-MIND study in the U.S., where the CDR-SB is the primary endpoint. Well, coming to the end, this is quite important to recognize and really to read and understand: a good study is not an approved medicine. The product is under development for the treatment of early Alzheimer's disease, and all data of the VIVIAD study we presented today are blinded and preliminary. The study is still ongoing, with final readout expected in Q1 of 2024.
So clearly, there are always pros and cons to disclosing blinded data of an ongoing study. We decided to disclose some because it makes it clearer and more understandable what can be expected at the end of the study, at least from a conceptual point of view. But the data itself will still clearly change, and the final results are the ones you should count on. Therefore, the data presented today shall not and cannot be interpreted with respect to whether varoglutamstat is safe or effective.
You can make no conclusions about the efficacy of the drug presented today. And finally, the data of the VIVIAD study presented today should also not be seen as an indication of how the study results will look in Q1 of 2024. Having said that really clearly, we can now switch to the moderator. Philip, you have some questions for us?
Thank you very much, Frank, for putting this into a very good perspective and also for being mindful of what we cannot conclude from today. But I would say we can conclude that this was a very thoughtfully designed study, with very modern measures included that really speak to what matters for people with early Alzheimer's disease. That brings us to the first question, and I could have expected this; it is for Professor Harrison again. You have explained the NTB with the composites as endpoints. Can you perhaps reiterate why we, or you, specifically chose these endpoints, in light of, probably, also the mode of action of the drug at hand?
Yeah. No, thank you. I'll happily address that. Consider the premise behind proof-of-concept studies in phase II-A, particularly in Alzheimer's disease, and be mindful of the fact that oftentimes in past development we've omitted this step, which I think has not been helpful. The general approach we take, in fact not just in neurology but in psychiatry too, is to say that when we bring a drug to patients for the first time, it's really important to characterize cognition as broadly as possible, just to see whether there is any evidence of which particular areas of cognition might benefit from treatment. Now, the opportunity we took with the phase II-A study was to say, let's not make any assumptions about which areas of cognition we would expect to see enhanced, but let's actually evaluate that using a broad assessment.
And Dr. Sikkes has used the expression, good content validity. That's what we sought to do. So ensure that all the relevant and important domains of cognition are mapped. Based on that evidence, what we saw was that the most profound effect of treatment was in the areas of attention and executive function, and worth saying that working memory fits in as part of the umbrella term for executive function. So based on that clinical evidence, the rational approach we took was to suggest that they were our best areas of cognition to take forward into the phase II-B study. Hence, we nominated those two areas using the same outcome measures as a means of detecting efficacy, if present.
Yeah. Thank you very much, John. It's good to reiterate perhaps what you said: there is no law, no mandatory or explicit way of thinking that requires you to include the ADAS or the CDR in any of the trials that you design. And you make the point that you really need to design the endpoints with the study and the mode of action in mind, and actually to show the proof of concept.
Yeah.
Thank you. Professor Schilling, I mentioned this in the beginning, and this is, of course, I think, very logical. If you think about the donanemab study, it works by plaque removal, as evidenced by amyloid PET scanning, while varoglutamstat actually inhibits the formation of the neurotoxic peptide, as you explained very clearly. Could we, from that difference, perhaps also extract a hypothesis as to whether the ultimate clinical effect of varoglutamstat versus donanemab will be better?
So, I would be careful to draw that conclusion directly. What we can say is that both drugs target the same molecule, pyroGlu-Aβ or N3pE. However, the mode of action is quite different, and there are certainly three factors that I would highlight in that comparison. On the one hand, the enzyme inhibitor is a small molecule, which also acts intracellularly, where at least part of the N3pE is formed. And we know from many studies on antibodies that intracellular formation in particular cannot be blocked by the antibody; only once the Aβ is released from the cells are the antibodies active. So this is the first discriminative feature, where I would say varoglutamstat has some advantages.
The second thing, and this is probably one of the important features: all the antibodies that we see are trigger factors for inflammation. That's actually the key to their activity. They mark those molecules, and donanemab marks N3pE, but it induces an inflammatory response due to phagocytosis, and this finally also leads to the ARIA events that are observed with donanemab and also with lecanemab. And as we have seen, if you block the formation of N3pE via varoglutamstat, there is no induction of inflammation; it is the opposite that we observe. We also saw some indication of this in the phase II-A study, where we saw an effect on YKL-40 that is in line with a rather anti-inflammatory effect.
This gives us quite some confidence that this is a second discriminative feature, where I would say a strong induction of inflammation is not so good. And the last thing is that pyroGlu formation by QC is upstream, so you inhibit at an earlier stage, intracellularly and extracellularly, and this is certainly the third very prominent discriminative feature. So from the mechanism of action, it is different, and there are some factors that might be advantageous, especially for varoglutamstat.
Just a very quick follow-up question. Looking into the future, I mean, people are already discussing combinations of therapies. Would there be any sense in combining plaque removal with donanemab, perhaps, with inhibition of N3pE formation?
Absolutely. Absolutely. Because then two things come together. On the one hand, varoglutamstat blocks the de novo formation of N3pE, and the preformed N3pE is removed by an antibody such as donanemab. And the pro-inflammatory effect of donanemab can potentially be reduced by the anti-inflammatory effect of varoglutamstat. So all the studies that we have pre-clinically in mouse models, of course, this is not human-
Mm-hmm
... are in favor of a combination. Absolutely.
Yeah. Just one last question on the mode of action and the drug itself. Are there any other ways to inhibit the production of N3pE? Is this a unique molecule, or are there other ways that people have tried to inhibit its production?
So to the best of my knowledge, no. There are only the two ways we had in mind, antibodies and-
Yeah.
varoglutamstat. To the best of my knowledge, no.
Okay. Thank you. Frank, a question to you, I guess. Will varoglutamstat, if proven effective, of course, only help in MCI patients, or is it also foreseeable that we could administer it at a later stage of Alzheimer's disease?
Or earlier, Philip?
Or even earlier. Yeah.
Yeah.
Yeah. The floor is yours.
Yeah, there are thoughts about doing life-cycle management early if the results of VIVIAD are positive. We have a stratification by MMSE, so we look separately into, let us say, MCI patients due to AD, who basically have only minimal cognitive impairment, and those who have mild AD. And of course, these results will guide us as to where to go next. If those who have mild AD, more toward the border of moderate, with an MMSE around 21 or 22 (we have a couple of those), show a good treatment effect, I think, just for ethical reasons, it's good to go there.
But I think the most promising area is moving to the asymptomatic patients, because if you can slow down and stop the disease there, then it is much more likely that Alzheimer's disease, as an individual problem but also as a social and health-economic problem, will disappear. So I think going earlier is probably, from a future perspective, more promising for patients and society. But moderate patients also deserve, of course, to be studied.
Yeah. Thank you very much. Just a question for Dr. Sikkes. You talked about the Amsterdam-ADL. You showed the baseline results of the TORIELL study and also the follow-up results. Are there any other studies at the moment from which we can expect results with the Amsterdam-ADL in the near future? Randomized controlled trials, I mean.
Yeah. Yeah, there are some other studies in which the Amsterdam-ADL is being employed, with results hopefully coming out soon, including this study, of course. But what is nice about the TORIELL study is that they did this head-to-head comparison with the ADCS-ADL. And what is nice there, especially when you look at these earlier stages, is that it is of course very difficult to detect a signal there with the ADCS-ADL, which was specifically developed for later stages. Even if you adapted it a little bit, it would be difficult to detect a signal there. So what was nice was...
That you could see the distinction between prodromal and mild AD somewhat better there. And what was also interesting: what we did previously, in the CATCH-Cog study, was an observational study, and that is a somewhat different population than a clinical trial population, so it's always good to have that confirmation as well.
Yeah.
So we know a little bit more about what kind of signal to expect.
Yeah. Thank you. Very, very clear. John, a question for you again. You made the point very clearly that AD is more than just memory, and MCI as well: you have amnestic MCI, but you also have patients who probably have more of an executive dysfunction. Can you split the MCI group on the basis of either memory or executive function? And how does that relate to the development of, I would say, more dementia later on? That was a question somewhere here that I'm trying to relay.
Yeah. Thank you for that. I'll do my best to address that. So, yes, clinically, if I think about... in fact, you and I were part of a panel three years ago in Kyoto, where we presented a really interesting case that we encountered in the outpatient clinic in Amsterdam: a very high-functioning individual who was performing relatively well on traditional measures, some of the ones we've discussed today, but who reported that she just felt unable to do her job, which was a very demanding job. And when we did a very detailed analysis and really characterized her cognition in depth, we established that her memory wasn't great, but it wasn't very far progressed, whereas her ability to plan and organize was very obviously challenged.
There was a really interesting series of cases reported from the Mayo Clinic in Florida, where the primary presenting cognitive difficulty was one of executive function, not in a complete absence of memory difficulties but certainly accompanying them, and sometimes at a level that was more problematic than the memory deficits that people reported. So I think we have a pretty clear picture. The interesting thing as we delve into tau pathology is also that people tend to present not just with differences in cognitive presentation, but also with preferential areas of biological and pathological change.
So I think we'll learn a good deal more about that. The presumption we would make, based on longitudinal data of the kind Dr. Sikkes mentioned we gathered in CATCH-Cog, is that there's a very high probability of progression across all domains of cognition. When you reach the moderate stages, it is actually quite hard to find an area of cognition where people would be intact. It would be really interesting going forward, in the big prospective studies that we have ongoing, both at Scottish Brain Sciences and elsewhere, where we're picking up people really, really early, to see if an executive function deficit is indeed predictive of a very different pattern of progression. That is data we'd hope to present at a later date.
Yep. Just one follow-up question. Somebody really paid attention to one of your final slides, that attention and executive function can be addressed, or rescued, with marketed drugs.
Mm.
What do you mean by this? Which one did you have in mind? Are there any marketed drugs that actually improve executive function and attention?
Yeah, yeah. I think the best example would be Winblad and colleagues from 2008, which was a study of galantamine in mild cognitive impairment. A very large-scale study, I think about 2,000 patients in total. No evidence of efficacy was captured on the ADAS-Cog, and in fact, in the discussion, the authors make the point that that was always very unlikely, just because of the lack of impairment on that test. But interestingly, a very substantial subcohort was tested on the digit symbol substitution test, which is the means by which we recruit people for VIVIAD, and treatment effects were witnessed on that. So I'd refer people to that as probably the best example.
Yeah. Thank you very much. Frank, I think a very interesting question here: has the FDA given any recent indication that they are still as flexible on efficacy endpoints as they were in 2018, as Dr. Harrison indicated? Since a drug has now been approved using the CDR sum of boxes, does that mean that any next drug has to use the CDR sum of boxes?
I think John was very clear that there is no mandate to use it. But for the VIVIAD study, we also have to see where we are. We are in a phase II-B study. That is not the final confirmatory study, and that study is probably best designed as it is now, to take a holistic view of cognitive function and functional outcome, rather than going to a higher aggregate score like the CDR sum of boxes. In the development strategy, we have addressed that: we have a study with the CDR sum of boxes ongoing, which eventually could serve as a confirmatory study, to be seen.
So from my point of view, we have planned our development program diligently, addressing the depth of the effect of the drug on cognitive function in VIVIAD, and going to a more aggregate level in VIVA-MIND. That should serve the FDA's needs to really assess in depth the benefit part of the benefit-risk ratio, right from the start.
Yeah. Dr. de Haan, just a question for you. I think you explained very clearly that EEG can be of benefit, especially when the drug is targeting synaptic function. How feasible is this in clinical trials? Can you briefly address how this works in practice?
Yeah. We like to think of EEG as an already quite well-known tool: it's available in most hospitals and clinics, mainly used for other purposes like epilepsy or sleep studies, as you already mentioned. But I think what we need for dementia testing is really a fairly routine EEG, so no specific cognitive tasks during the recording are necessary. We have a very strong focus on resting-state data, because there's already a lot to be seen from that. No provocation test; it's just about 20 minutes of sitting with eyes closed, and from that we can get good data.
So our experience from previous years is with several larger multicenter trials, mainly in European and American countries. It does take some communication and coaching of all the sites to harmonize the data and get good-quality data, but this can definitely be done. So I think it is a straightforward enough technique to use for large clinical trials.
Has it been used for any antibody trial, as far as you know?
Yeah. Well, in previous years, we've been working on about six or seven different trials, and some of them are antibody trials. Most of them are actually still running, so I cannot really talk about any results. There will be a few results at the upcoming CTAD. But I expect some more exciting things coming out; it's still a bit too early to tell.
And there was a question: can you actually use the information across different trials, or is it trial-specific? Is there so much inter-trial variability that it would limit the comparison between trials A, B, and C with your measure?
Yeah. Historically, what has unfortunately been lacking is really good, prospectively gathered longitudinal EEG datasets. Fortunately, because of our involvement with different trials, we have been able to build those up, because these are, of course, really interesting, good longitudinal datasets. So from these different studies, we've been looking at the placebo groups in particular, to get a better feeling for the natural course of the activity, and how the theta power is declining or increasing, for example.
So, yeah, I think we are learning more and more about which are the most robust measures, and I definitely think that if you do that correctly, you can make comparisons between trials. But I think we also need to be frank and say that there are still a lot of open questions about how exactly brain activity relates to cognition. So we have chosen a favored set of markers that we think-
Mm.
are now most valuable to pursue, but it's probably not the final word. We hope to develop stronger and better markers, also by looking at the trial data.
Yeah. Thank you. A very exciting field, in a sense. I'm looking at the questions. Let me see. Oh, yeah, I have a question here, I think for Professor Schilling again. Let me read the question as it is stated: Is it understood why Lilly reported that they couldn't find soluble oligomers containing modified Aβ peptides in their characterization of donanemab? This statement has colored the field for a decade. How disputable is the notion that N3pE is found across all Aβ aggregates, given that Lilly believes N3pE is plaque-specific? Is that a question you can answer?
Yeah. Honestly, I'll try to answer it, and I think that's a great debate that we have also had quite often with the Lilly people. Honestly, I cannot give you a rationale for why they don't find pyroGlu oligomers in brain. We do, and we always did. What we agree on is that pyroGlu is usually scarcely detectable in CSF, for instance, among all the other species. There are now descriptions that it is possible with very new techniques, but for a long time it was not. The reason for that is certainly that the hydrophobicity that I have been talking about, which is specific for pyroGlu-Aβ, mediates a very strong interaction with hydrophobic surfaces, especially cellular surfaces and certainly also plaques.
And this is, in my opinion, the rationale for Lilly's claim that it is purely plaque-specific. But this is something that, I would say, contradicts our findings in many respects. It is rather, I would say, a rationale proposed for the activity of donanemab, that plaque should be the target. That is my answer, and my attempt to explain and interpret what they are doing.
Yeah. Yeah. I think you made that point very clearly already. So I have a last question here in the chat for now. Frank, I think you're the best one to answer this: Is there a difference between regulators in the U.S. versus the E.U. in terms of allowing novel endpoints in AD trials? Any experience?
Good question. It's a good question to answer. The FDA has approved drugs; the EMA has not. The EMA has issued clear final guidelines, and they are probably the first to mention the NTB specifically and probably to allow novel neuropsychological tests as endpoints. But this always has to be seen against the background of the effect size. I understand that people think of a particular instrument as being important to achieve marketing authorization. But the truth is that regulators look across primary and secondary endpoints, look at the consistency of primary and secondary endpoints, look at effect size, look at benefit-risk ratios, and then come to a conclusion.
It is not the single primary endpoint where you say, "Oh, I have a P value of 0.049." In my career, I've seen drugs with a P value of 0.049, which was significant, that didn't make it through the finish line. And I've seen, and contributed to, drugs being approved which had even a P value of 0.08 and made it through the finish line. It is not necessarily what the primary endpoint does alone; it has to be seen in the context of all the evidence you provide. Of course, you need to have robust evidence of effectiveness in the U.S. and Europe. You need to have robust data, but then the kind of endpoint probably doesn't really matter.
What we try to do at Vivoryon is really to go for breadth of endpoints: in VIVIAD, for depth and breadth, and with the CDR, more for high-level, aggregate endpoints. And we look at both, because in the end, you don't want to be blind in one eye. You want to give the full breadth of information about your product, and that's what we're aiming at.
What would a successful VIVIAD be in your mind? What would be a successful outcome of the study for you?
I mean, what we hope for, what we planned for together, is that the primary and key secondary endpoints have a significant P value below 0.05.
Yeah.
That is what we want. That is what we designed the study for and what we hope we will see; at least, that is the hypothesis we generated, and now we have to wait until Q1 to see what comes out. That is good, but there is also the patient perspective, and I think that should also be seen. What patients want is disease stability, maybe a minor improvement, for as long as possible. That is another thing we look at in a responder analysis: we want to understand in how many patients we can really impact and stabilize the disease for over two years. This is why we do a two-year study, the maximal duration here, to really see what we can bring to the patient as support and help with the drug.
Thank you. We have run out of questions in the chat. There are a few questions that we will address bilaterally, because they're very specific and not really directed at the panel as a whole. So let me really thank the panel for all the efforts, for the time, for their diligence in answering the questions and sharing their views and their expertise in these very exciting times. Thank you, Frank, for preparing and making yourself available to present the trial. And we're really looking forward, of course, to the first quarter of 2024 for the final results. Let me also thank Vivoryon for making this possible.
I wish you all a very good rest of the day, and again, thank you all. We'll be back in the chat bilaterally for some questions. Thank you.
Thank you. Bye-bye.