Thanks, everyone, for joining. Hi, I'm Ashish Sabadra, and I cover the Business and Information Services sector at RBC. We are excited to host a conversation on Moody's generative AI strategy with Steve, President of Moody's Analytics, and Nick, the Chief Product Officer. Steve and Nick, thanks for giving us this opportunity. I'll jump right into the questions. You've said GenAI is a generational opportunity. I was just wondering if you could provide more color on why you believe that to be so.
Nick, we're gonna have to have, like, a contest to see who goes first here. We'll arm wrestle virtually. I'll take a crack. I certainly said that. I think that we are in a unique spot. You know, the Internet may have been the greatest library in history, and with this GenAI opportunity, we may have the greatest researcher in history, who can sit right next to us when we're doing our jobs every day. I just think it's astounding how quickly this technology has developed and how practical its use can be.
Yeah.
We really do think of it as a generational opportunity for all of us and, and for us to help our customers. It's tremendous.
Just adding to that, Steve. Part of the reason why we're so excited is because I'd say there are as many internal use cases as there are external use cases, and we're kind of exploring all of them. It's potentially, for Moody's particularly, a little more important and advantageous for us to enter into this space. Because we have such a vast array of data assets, research assets, and opinion assets, the ability to access them now has a much lower barrier to entry. We're really excited by the expansion of the people that can interact with our stuff just by being able to do that using natural language. That's in part why we're so excited and why we think this is so generational: because of that expansion of audience.
That's great. We were also very pleasantly surprised by how quickly Moody's has embraced these technologies. Can you just give some color on how you've achieved this?
Yeah. Nick, do you wanna talk about what we've done in terms of just the leadership team here first, or what do you think?
You know what? It's probably worth one small step backwards for some people, which is to say, in relative terms, this is not new. We've been working on machine learning and AI for decades, and we have onboarded some amazing people and some amazing capabilities to be able to inject that in our workflow platforms and in the way in which we kind of manage ourselves. That's a pretty good starting position to be in.
Yeah.
We've also spent a significant amount of time organizing our data assets and our data estate, and so it's kind of indexed and preset, and so these are really good preconditions to be able to kind of launch into this. Then I think what Steve was alluding to is this idea that, as a result of some of that heavy lifting already having been done, we really latched on to this to say: We think this is going to be able to drive a much better and different customer experience, and so, therefore, we just wanna jump in with both feet. The jumping in with both feet is itself a little bit different for us.
The idea is having a much broader and more expansive innovation group: the whole company. We've been very public about the fact that we're using every employee as one of our testers and as one of our experimenters to be able to really explore this technology. That approach, I think, was exciting and new for Moody's, and it is bearing fruit. The number of use cases and ideas and examples and ways in which we can adapt the technology is really being driven by our whole company. If there's a change, that's really the change. It's this idea that we're gonna be kind of jumping in with both feet. Steve, I'll kind of throw back to you for this.
There's an accelerant here, which is the relationship that we've established with Microsoft, to be able to really accelerate and go after these kinds of opportunities in earnest.
There are a couple of embellishments I could offer there, but let's stick with this Microsoft point. We have a tremendous partnership with the folks at Microsoft. We're really pleased to be working with them, and they have been really helpful in lending us some expertise to accelerate the development of especially this Copilot technology, this Copilot capability that we're leveraging in our Research Assistant. You may have seen the video that we've made available out there on the investor relations site of, I'll call it a release, or sort of the preview version, of what we expect will be a new module we'll add to our product array.
Microsoft really helped us with some tremendous folks in terms of accelerating our understanding and, maybe, some of the tricks of the trade, if there are any yet, as it relates to, you know, applying techniques in prompt engineering to really deliver results that are going to be effective for our customers. We're not actually licensing technology from them here. We've actually done a lot of the coding ourselves, but they've been really helpful in terms of speeding our way through the learning curve.
Yeah.
One other quick point, you know, Nick mentioned this before, and we mentioned it in the earnings call, I think in the second quarter; it's worth highlighting. There's a slide out there, if anybody's interested, where we highlighted just three of the products that we have had in market, leveraging what we think of as our AI franchise. We've got literally several hundred, probably over 1,000, customers that are paying us money for tools that are really based on natural language processing or machine learning techniques in order to deliver value for customers. We've been at this for a long time. The first time I think we worked on machine learning algorithms was in the mid-'90s. This is the kind of thing that has been around.
To Nick's point, we now are able to leverage these large language models in an amazingly cost-effective way. We didn't really have this transformer model capability before. That's true. That's only five or six years old. Now the cost of doing this is at a point where it's commercially reasonable for us to actually make all of our content available to produce results for your single prompt. This is pretty appealing for us in terms of the way the barriers to entry are now coming down and are gonna help us deliver some value.
That's great color, Steve and Nick, and we will jump into the Microsoft partnership later because I think that's very important as well. I just wanted to segue into the comments that you, that you made on the Research Assistant.
Yeah.
I was wondering, again, we were very pleasantly surprised by how quickly you were able to launch the Research Assistant. As you said, that's because you've been working on these technologies for so long, and, as Nick highlighted, the proprietary data, and the data being indexed, really helped you provide the right framework. I was wondering if you can just talk more about the Research Assistant product, if it's possible to give a demo or just provide an overview, and also talk about how you were able to launch this product so quickly.
Yeah. Nick, you wanna... I can take a crack, maybe, and then, Nick, you can keep me honest. You know, the first thing I'd say is we had a few people who were pretty experienced with techniques that were similar. You can imagine, we do a lot of work with language. You know, as a producer of content, a curator of news stories, I think there's literally almost 1 million news stories that we process for sentiment. We read the words, determine what each story's about, and then meta-tag each of those stories, every day. The people who are in that group were extremely helpful here in terms of thinking about other ways we could use language models.
We grabbed a few members of our team that were ready and able, and then we said, "Let's go." You know, we've kept a very small team here, a classic kind of software development strategy, with a small team that's effective at getting things to an MVP kind of level of sophistication or capability, and then we can go out and see some customers. I think we're a little bit beyond that. At the moment, we're at the point where we're in preview with customers, so we can start to get feedback. The other thing I'd say, and this I think is really important, is we declared internally, culturally, that this is going to be a big deal. Right? We felt like this was generational.
We felt like this was an innovation that we hadn't seen maybe in my 33 years at Moody's. We actually asked each of the leaders in the organization to join us in a workshop just to culturally get everybody acclimated. We literally talked about how large language models work, how to think about the development of the coefficients, how to think about vectors within the models. We got into the details so that people could see how they might be able to leverage those, and they could understand what they were working with. That takes a lot of the apprehension out of the air and enables people to be much more creative. A small team building some software, and big-picture, kind of cultural elements to try and set the stage for the company, to make sure that innovation is something that we can be driving.
It's worth picking up on that note, Steve, because you've got to remember, the Research Assistant is just the first of what might be multiple assistants that we publish and make available to customers in the market. You know, I won't get too much into the technicalities, but think of the Copilot as the platform that we've built, that we're leveraging both internally and externally to be able to develop generative AI applications, and then the first commercial application is the Research Assistant. You can imagine that there might be a lending assistant, or an underwriting assistant, or a reporting assistant.
This is just the first of the commercial applications, because we've done all of that early hard work in establishing and building the platform that allows us to say: What's the dataset, what's the persona, and what's the group of customers that might be interested in being able to leverage this technology? How do we group that together and make that available as a commercial offering? We think our ability to more regularly do this at the same cadence is increased by the investments that we've made in the platform itself.
Yeah.
Again, Steve and Nick, this was very helpful color, and maybe it's a good segue to jump into, as, Nick, you mentioned, this is just the first of multiple different products to come. One of the questions that we get from investors is: How should we think about the monetization opportunity from GenAI? I was wondering if you could expand further on these products.
Yeah. I mean, the honest answer is, it's an evolving topic. We think about it, I guess, initially this way. We have always tried to offer as much value to our customers as possible. We really kind of price and go to market based on the value that we deliver to customers. We think of this as driving additional value. We think of this as being an opportunity to be able to deliver more value to customers, either because there are different types of people at those customers that will be able to access the content, or because the actual process that they undertake is just gonna be better and easier.
If you're a researcher and you're trying to find out something about a company, or a sector, or a segment, your ability to do that using natural language is an increased value to you. What we're exploring with customers as part of this preview mode is exactly what that pricing strategy and monetization strategy look like, grounded on the idea that it will be about delivering additional value to customers. I know that's not as specific an answer as some people might like, but this is what we're really testing with customers. We're just working out: What do the mechanics look like? How are we going to be able to construct something that allows us to deliver the maximum amount of value-
Yeah.
To be able to recoup economic rent in the process?
Yeah. It starts with value, right? What's useful, what's not. Focus on the stuff that's useful, and make sure we create the feedback loop that we can leverage for commercial purposes. I would envision this as a capability that we could offer to any of the products where we have large user communities. You could think of this as something that would be a module we can add to our CreditView franchise, where we're really delivering all of the research created by the rating agency through our website, moodys.com. We might also make this available through CreditLens, which is a software application we provide to lenders. We literally have... At any one bank, we might have 1,000 users that leverage the CreditLens software application. Sometimes it's more than that.
Those lenders are asking questions like: What would Moody's say in light of this circumstance? They can take their circumstance, deliver that in a prompt, and then get the large language model to help us generate a response that might be useful. Maybe it's just as simple as creating the credit memo for a loan. You could imagine how much more effectively or quickly you can create that memo now. There are several of these products with big, large user communities, and we think this assistant concept is gonna make a lot of sense. I'll say there's a next generation that's too soon to talk about, right? Where, in addition to generating text, we can use these large language models to help us think about what other models we might engage.
You can imagine we have a fantastic credit scoring franchise, for commercial lending especially, and for anybody who's interested in understanding the credit scores on companies, but we could engage those models, depending on what the prompt is looking for, to help evaluate the financial capacity that firm might bring to the table. You know, the beauty of this is that there's tremendous opportunity to take the natural language request that you have and bring in all sorts of capabilities from Moody's in order to address that question. You can imagine: What would be the impact on credit for a company located in Fort Myers if Ida were twice as powerful?
Yeah.
Right? If we can engage our catastrophe risk model, our credit model, and the text generation capability, in light of what Moody's usually would say about something like this, and create responses to those prompts, it'll be extremely helpful.
Yeah. I mean, the cross-sell and upsell dynamic is pretty significant here because the way that the models themselves work and the way that people will interact with them using natural language, again, blurs the boundary between entitled data assets.
Right.
Because people don't care about that. What we're starting to say is, actually, this is an amazing opportunity for us to be able to very seamlessly cross-sell and upsell, on the grounds that we're actually just delivering additional value, because we happen to have all of these proprietary data assets that are leverageable.
I would actually say that slightly differently. We're extremely excited about the possibility of bringing different areas of expertise to the table in order to help our customers analyze that situation. You know, who am I doing business with? What should I know before I do business with them? I can answer that question now with credit, with cyber, with financial crime, with catastrophe risk concepts in mind, with supplier risk concepts in mind. You know, with respect to their supply chain, is there anything I'm missing?
Yeah.
I mean, we can literally ask that question, and you could get a response that would be quite useful. There are also all sorts of limitations here, but in this era of exponential awareness of risk, and we've been using this phrase a little bit lately, right? This era of, gosh, I ought to better understand how these things interrelate, and I need to have a holistic understanding of who I'm doing business with. These large language models are the perfect technology to bring these things together. It's like a rifle shot right at Moody's strategy, and we're very excited about the possibility because of that.
Yeah.
That's fantastic. I think now the comments that Rob made on the call, around GenAI really being the catalyst to propel the risk operating system, make a lot of sense. If I understood it correctly, you obviously have the ability to charge incrementally for these assistant products, which are built in, but also, Nick and Steve, as you mentioned, there's significant opportunity to cross-sell and upsell, as it improves content discovery, and the value of putting all that data together becomes a lot more powerful in solving customers' problems. That's great.
Yes.
Maybe if I can ask another question, and maybe you alluded to it already, but is there also an opportunity that you may now have new use cases with the data that you historically may not have targeted? Is there another way to think about it, or a new customer base, new use cases, made possible by combining these different data assets together?
Nick, you want to shoot, or you want me to go?
It's a preference thing.
Yeah. First of all, I'd just say that the opportunity with the existing use cases is tremendous and very much what we're focused on at the moment. If we can find a way to leverage this technology to help our customers do their work, not just faster, but more effectively, that's tremendous. The cross-selling opportunities are... You know, we have a great customer base, and this is just a great way to make even more of our capabilities available to them. That's definitely the starting point. There's gonna be a day when we're not just leveraging Moody's content, but we're leveraging your content. You know, there may be a use case, Ashish, where you might be one of our potential customers, right?
We could take what would Moody's say, and add it to what would Ashish say, and in light of those two, what do you think the reasonable synthesis is of those two opinions, right? This is the kind of thing where your content and our content come together, because maybe you have a certain approach toward thinking about an issue, but you're interested to weave into the report what Moody's might have said about that economic environment, or that economic scenario, or whatever it might be. Those concepts of going to our customers and maybe finding a way to bring some of their capabilities to leverage the same technology, I think, are something that we're looking forward to as another set of use cases that we really haven't tried to explore before.
There's a pretty interesting dynamic here as well, Ashish, which is that lots of enterprise customers are coming to us because we're Moody's, and because of the way that we would approach the normal processes for risk management and the kind of ethical-use questions: all of these kind of, can we, and how can we, and what are the implications of being able to leverage generative AI technologies? We're having some pretty interesting discussions with those customers about how they can leverage the tools and the techniques that we've used to onboard this technology ourselves. Again, we've been pretty public in the way that we've described allowing our employees to access the technology, and really driving this idea of experimentation. It's because we've built a platform that has a kind of safe and secure boundary.
So to Steve's point, customers are interested in that...
Yeah.
as much as we are to be able to say, "I want to be able to know that my information is not going to be exposed to a public large language model." The way in which we've gone about this is to make sure that that's a kind of sacrosanct approach, too, because we have all of this proprietary information that we ourselves want to make sure is protected from being exposed to a public large language model. The types of conversations we're having are as much about the commercial opportunity as well as what does the platform look like, what does the approach look like, and do we have all of the guardrails and safety and security procedures in place.
Yeah
... to be able to really use this as an enterprise application as opposed to a kind of magical tool that people get impressed by, in the kind of ChatGPT world.
That's great color. Nick, we do want to talk about the privacy and security aspect, the data protection aspect that you mentioned. Before we jump there, maybe, Nick, a quick question for you, because this is more about the ratings. I was wondering if you could talk about how you envision GenAI being used by the MIS analysts and in research, how you see this technology being implemented within the ratings business. Oh, sorry, the MIS business. In particular, because it's a highly regulated business, what do you see as the challenges there?
Sure. I mean, let's start here: we have an all-of-Moody's approach to the ability to leverage the underlying technology. As we've said before, we have equally as many internal use cases as we do external ones. Part of being able to do that was creating this kind of safe and secure environment that allowed people to just start doing that exploration, and that includes people in MIS. So, they are starting themselves to explore where it is going to be possible to start using components or pieces of this technology to just be more efficient and more effective.
We have a kind of overarching philosophy that says we always want to have a human in the loop, because while these tools allow us to speed up, or allow us to have a kind of broader perspective because of the access that we can get to the underlying information, we do think that these are driving the effectiveness of our resources as much as the efficiency question. So, people in MIS are, in earnest, exploring opportunities to leverage this. This is not a replacement strategy. This is not about being able to avoid scrutiny.
There's a pretty regulated, safe, and secure process, and we're working inside those boundaries to be able to work out where are there efficiencies that we can unlock, and how can we get our analyst resources to be able to be more informed, to get access to more information, to be able to drive better insight, to be able to leverage their time, to be able to do that kind of work, rather than necessarily some of the efficiency-type questions that we might have that would also be driven by the technology itself.
You know, just to make it very intuitive: a rating analyst, just like a securities analyst, is gathering information, often summarizing it, and often trying to read as many things as they can to make sure they aren't missing something. You know, integrating different viewpoints in order to come up with the viewpoint we have at the rating committee level. It's very similar to what you literally do for a living, and what many of the people on this call would be doing. We're looking, and, you know, we know for sure we're onto something here. Again, it's not just efficiency, it's effectiveness, too.
That's very helpful color, Steve. Again, going back to the response to the prior question, you're absolutely right. I think the sell side, including a person like me, but also the buy side, would be interested in the rich data set that Moody's provides. Maybe with much easier access.
Looks like I might make a sale while we're on this call here, so.
Yes. I do want to talk about the efficiency piece that, Nick, you mentioned, but before we go there, I do want to talk about something you mentioned earlier on, which is some of the challenges with GenAI. The question around data protection and privacy, that's definitely the number one thing that gets brought up. The second thing is explainability, and again, maybe, Steve, whether you and your team or the users have tested it out, the ability to explain the response, but also, right, in the same vein, one of the challenges with large language models is hallucination, or confabulation, as they call it now.
Yeah.
How do you address all these different challenges? Thanks.
Yeah. Nick, you wanna fire away there?
There are really two parts to the question. There's a safety and security angle to this. Again, very publicly, we've talked about the fact that establishing relationships with Microsoft and with OpenAI and with other providers of large language models has allowed us to build an environment where we're onboarding those models into our environment. Just as an approach, that's allowed us to safely and securely experiment, both for our employees and for potential commercial gain, but also to test the models themselves. There's a kind of growing question about which model is the right model to leverage in the right set of circumstances to be able to efficiently provide an answer.
Inside that environment, we have an ability to onboard all of the different versions of models, test them out, work out what they're good at, work out where they have strengths and weaknesses, and we're able to use that to our advantage inside our platform. I think, just as a starting position, this idea of safety and security and protection of assets has been solved by the establishment of these types of platforms. Again, it's a bit of a pattern that we're noticing is starting to be copied in the market. To the second question, of hallucinations: this is why we're in this game.
We're incredibly interested in the role that we can play in being able to help our customers and ourselves ground their answers and provide citations back to the information that's helping ground the answers themselves. We know that the large language models are unbelievably valuable and important, and they're really good at certain things. What people are starting to understand is, if you want to use this in an enterprise application, there are other elements that are missing in the raw large language models themselves. Essentially, that's the role that we're looking to play: to be able to provide a level of certainty, an ability to go directly to the documents, an ability to ground the answers, while still leveraging the best of the model itself.
It's the coming together of those two things that we actually think is an amazing opportunity here. Again, that's the kind of stuff that our customers are starting to ask us about. That's the kind of thing that they're really interested in when it comes to being able to leverage these tools for enterprise and scaled application inside their organizations.
Sorry. Yeah, we're spending a lot of time focusing on the intent of the prompt. What did you mean when you said, "Provide information on"? Is that information related to credit, or is it related to sanctions that we might be able to provide, or is it something else? Examining and projecting, or understanding, the intent using natural language is really, really helpful here. Then you can bring to the table... I mean, literally, we can talk about the deepest database on companies in the world, right? 470 million companies. There are 35,000 or 40,000 that are listed public companies. There's another, I don't know, 450 million, 470 million out there that we can also talk about.
When you generate results on a company you're interested in evaluating or making a loan to, or analyzing, or investing in, we can bring to the table facts associated with that company that we have indexed historically for decades.
Yeah.
We can definitively tell you that on this date, this thing was true. Rather than search through hundreds of documents to try to establish what the connection might be, we can actually say, "Look, assuming this intent, look at this data," and then, of course, we have to do some user training to help them prevent, I'll call it, a misunderstanding. That's gonna be something that we're trying to create in the prompts as well. There's gonna be a trade-off between creative responses and limited-value responses, and we're trying to test where those boundaries are with our customers, to get their perception and preference sets, so we can set the toggle where we think it ought to be.
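The intent-routing idea Steve describes, where a natural language prompt is first mapped to an intent that selects the right dataset or model, could be sketched very roughly as follows. The intent labels, keywords, and function names here are illustrative assumptions for the sketch, not Moody's actual implementation.

```python
# Hypothetical sketch of routing a prompt by detected intent before retrieval.
# Labels, keywords, and names are illustrative, not a real product's logic.

INTENT_KEYWORDS = {
    "credit": ["rating", "default", "credit", "loan"],
    "sanctions": ["sanction", "watchlist", "embargo"],
    "catastrophe": ["hurricane", "flood", "earthquake"],
}

def classify_intent(prompt: str) -> str:
    """Return the first intent whose keywords appear in the prompt, else 'general'."""
    text = prompt.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return intent
    return "general"

def route(prompt: str) -> str:
    # In a real system, each intent would select a different dataset,
    # scoring model, or catastrophe model to answer the prompt.
    return classify_intent(prompt)

print(route("What is the credit rating outlook for this issuer?"))  # credit
```

A production system would use a language model or a trained classifier rather than keyword matching; the point of the sketch is only that the detected intent decides which proprietary asset handles the request.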
Yeah. Again, we've been pretty public. We're using Retrieval-Augmented Generation, RAG, as a technique, and prompt engineering as a technique, to work out where those sliders sit and where that toggle sits. We have an ability to engineer the answer, leveraging all of the underlying proprietary data assets, to be able to say: Where is absolute certainty required? Where is citation required? Where can we allow a little more of the large language model to kick in to provide-
Right.
you some color commentary?
Right.
How do we get that right in the right circumstance for the right kind of application?
Right.
The Research Assistant, again, is just one of them that says, "Here's the kind of activity that's being undertaken. This is the kind of persona of the person undertaking the activity. Where do we have to write the rules? How do we engineer the prompt? How do we use RAG to our advantage-
Right.
to be able to make sure that you're getting what you want in that search?"
Where do we put the throttle in place to say, "Wait a minute, you've just asked something I don't actually have the ability to answer"? Let's be clear about that: I don't think I can answer this one; why don't you look over here, right? That's another big feature that we're trying to make prominent in the products, right? Make it obvious that, when we don't know, we're gonna tell you.
Yeah. I mean, we're uniquely positioned to be able to do this. We run a regulated business where we're publishing ratings. There's a requirement for that information to be true and accurate and up to date, a regulatory requirement to do so. What we're doing is really working on making sure that that can also be true in the context of generative AI, where we're providing, again, absoluteness and accuracy in some circumstances and allowing the model to take over in others.
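The grounding, citation, and refusal behavior described in this exchange can be illustrated with a minimal retrieval-augmented generation loop. The toy corpus, word-overlap scoring, and thresholds below are assumptions made for the sketch; they stand in for the vector retrieval and model calls a real pipeline would use.

```python
# Minimal, illustrative RAG loop: retrieve documents, ground the answer in
# them with bracketed citations, and refuse when nothing relevant is found.
# The corpus and scoring are toy stand-ins, not a real retrieval system.

CORPUS = {
    "doc-1": "Moody's affirmed the issuer's Baa2 rating on 2023-05-01.",
    "doc-2": "The sector outlook was revised to negative in Q2.",
}

def score(query: str, text: str) -> int:
    """Toy relevance score: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(text.lower().split()))

def retrieve(query: str, k: int = 2, min_score: int = 1):
    """Return up to k (doc_id, text) pairs that clear the relevance threshold."""
    ranked = sorted(CORPUS.items(), key=lambda kv: score(query, kv[1]), reverse=True)
    return [(doc_id, text) for doc_id, text in ranked[:k] if score(query, text) >= min_score]

def answer(query: str) -> str:
    hits = retrieve(query)
    if not hits:
        # The "throttle": be explicit when there's nothing to ground an answer in.
        return "I don't have the information to answer this one."
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in hits)
    # A real system would send `context` plus the query to an LLM here, with
    # instructions to cite the bracketed document ids in its response.
    return f"Answer grounded in:\n{context}"

print(answer("What is the issuer's rating?"))
print(answer("zebra migration patterns"))
```

In a production RAG system, retrieval would use embeddings over an indexed document store, and the toggle between strict citation and freer generation would be controlled in the prompt sent to the model, but the grounded-or-refuse shape of the loop is the same.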
Well, that's great. Again, thanks for sharing some of those underlying details on the RAG. Maybe one of the questions that we get, at a very high level: how should we think about the timelines for some of this? Again, I'm not talking about exact dates or quarters, but is this 3 to 5 quarters? Is this the next couple of years, or is it 3 to 5 years? Any high-level views there, whatever you can share at a very high level.
We're in the market now gathering feedback with the assistant. You can see the assistant on that video I think we have on our IR site. It's a short, 90-second video, just to give you a sense of what's possible. We're out, literally, demoing and starting to trial with a very small set of customers that we know, and that know us, and can deal with the hallucination challenges that you were talking about, right? Let's make sure we understand what we're dealing with. We're gathering feedback, like, literally right now. Productizing that, you know, these things don't happen overnight. There's work to do there. I would say we're in the market now, and we're hoping to have something commercialized by year-end.
I'd say there's a series of assistants that we'll be able to roll out over the course of the next 12 to 18 months, and maybe some other capabilities we hope that we can commercialize as well within that timeframe, just to maybe explore some of those other new use cases as well. Nick, anything else you'd want to offer there?
It's probably worth just talking through the timing of the different unlock opportunities. We're talking about existing customers using existing tools and adding value to that process.
Yeah.
We're also exploring opportunities to be able to inject some of these capabilities into new channels.
Yeah.
We're also interested in whether, as a result of the barrier to accessing our research and content being lower, there's new types of customers that might be interested in being able to access these technologies. In parallel, we're also exploring this idea of a channel strategy or an embedding strategy that will allow people to be able to access this information in other and new ways.
Yeah.
That's fantastic. That's great to hear. Moving on to the next topic, something that, Nick, you mentioned earlier on the efficiency piece. I was wondering, both Nick and Steve, if you, Steve, particularly, can talk about how you think about the benefit that comes from GenAI. I think in a prior business update, there was a reference to one experiment where there were cost savings of as much as 40%. Can you help us understand-
Yeah.
What's driving that savings?
Yeah.
Then also some use cases around it. Thanks.
All right. Maybe I'll take a shot. I mean, for me, the hardest thing to do is confirm that you have a hypothesis that will resonate with customers. We are definitely prioritizing use cases, value propositions, what's most valuable for the people outside. You know, if you're doing those experiments and doing that product development work, and you're also delivering a 10% ARR number with your core, it's hard to see a lot of savings at the same time, 'cause we're ramping up while these other things are still happening. I would say, let's just put that in to create that context and backdrop, but the efficiency possibilities here are real.
I am biased toward leveraging them as productivity enhancers first, because we believe we have great growth opportunities, and we want to make sure we deliver the product that our salespeople can go out and deliver the growth with. You know, we're excited to leverage the productivity enhancements to do more with the same number of people, for example. That itself may be a way of creating some operating leverage or more operating leverage in the business. The thing you were referring to before is really the engineering benefits that we're seeing, and we have done some experiments already internally, where we've been testing, you know, the code that we get if we leverage some of these GPT tools versus doing it ourselves.
We are seeing some good improvements in terms of the time it takes to deliver some unit of value. I think Sergio was the guy who was on that, and he said: "Look, we've seen some 40% numbers." I wouldn't say that has translated to 40% in expense, right? That's 40% of a developer's time on a particular experiment, where we could see that improvement.
That in itself is very exciting, and I would say we're going to continue with our, "How do we grow the best way we can and continue to deliver margin and margin expansion to our stakeholders, and what's the best way to do that, and what's the right mix?" That's the kind of thing we have to learn some more about before we can make declarations. I'd say we're very excited about the opportunity to be more productive and maybe even save some money.
As Steve alluded to, the more complicated problem here is prioritization, not opportunity. It's that, again, we've launched and allowed people to be able to engage in the use of these technologies, and now we're starting to explore on multiple fronts, opportunities for people to be more efficient and more effective. Prioritizing those opportunities, building the base platforms that allow us to be able to unlock those opportunities is our focus.
Yeah.
Then we can start to say, right, where are we going to focus both internally and externally to unlock those opportunities?
Well, probably an important way to say this: we expect we will assemble value propositions often around our core capabilities. You know, a lending assistant or a banking assistant will be made available to our lending customers, undoubtedly in the CreditLens software application. The engineers that work on the CreditLens software application are going to adopt a component that we've built at the platform level that they can then leverage in the banking experience or in the lending experience. You know, this is a growth opportunity first, a productivity enhancer as well, and we may see some operating leverage develop as a result.
Steve, that's great color. The way I understand it, and sorry, Nick, I interrupted you, but.
Yeah.
The way I think about it is, essentially you're shrinking the software development time, which will help you get products to market faster with better monetization. As you mentioned, over the next 12-18 months, you have plans to launch several new assistant products.
Yeah.
Incorporating it as part of your software development process.
Yeah.
That's great to hear. Maybe the follow-up question there would be just around: are there use cases also on the data ingestion and processing front? Software development is what we talked about, but what about... There's a lot of data that's used either in the MIS process or even that comes in for your private company data. Can you also talk about data ingestion and processing and the efficiencies there?
Yeah. Experiments we're running, Nick, maybe just an example or two?
Yeah, I mean, it sounds a bit crazy, but yes is going to be the answer to all of these things.
Yeah.
We think there are opportunities that come up. As part of that ideation process, there are something like 184 individual ideas, some of which relate to data capture, some to data ingestion, some to data processing, and some to data storage. We are noticing an ability to leverage these underlying technologies and base platforms to explore all of those ideas. The models themselves are really good at taking a basic concept, inferring something, and then filling in the blanks. If you think about that in the context of data curation, data creation, and ingestion, there are opportunities to leverage what we already have and what we know to increase our ability to expand quickly...
Yeah.
in the data set that we have. Again, we have the world's largest database of private companies, 470-odd million companies. Our ability to go and say, what are additional attributes that I might be able to attach to one of those 470 million companies, and can I do that more efficiently using these technologies? Those are the kinds of things that we're starting to explore.
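The attribute-enrichment idea described here, using a model to infer missing fields on a company record while keeping inferred values distinguishable from sourced ones, can be sketched like this. The field names and the naive inference rule are hypothetical stand-ins for a real LLM call.

```python
# Sketch of record enrichment with provenance tracking. An actual pipeline
# would call an LLM where `infer` is plugged in; this rule-based stand-in
# only illustrates the shape of the workflow.

def enrich_company(record, infer):
    """Fill missing attributes via an inference function, marking provenance."""
    enriched = dict(record)
    provenance = {k: "sourced" for k, v in record.items() if v is not None}
    for field, value in record.items():
        if value is None:
            guess = infer(field, record)
            if guess is not None:
                enriched[field] = guess
                provenance[field] = "inferred"  # keep inferred data distinguishable
    return enriched, provenance

def naive_sector_inference(field, record):
    """Stand-in for a model call: infer sector from the company name."""
    if field == "sector" and "bank" in record.get("name", "").lower():
        return "Financial Services"
    return None

record = {"name": "Example Community Bank", "country": "US", "sector": None}
enriched, provenance = enrich_company(record, naive_sector_inference)
```

Tracking provenance matters here: inferred attributes can be surfaced, audited, or re-verified separately from sourced ones, which is what makes scaling this to hundreds of millions of records defensible.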
Maybe just to bring it home. The first is an experiment that we're currently running, and we're evaluating the effectiveness of it, in this Copilot capability that we've now made available to everybody in the company, through this, you know, secure environment that we've established with Microsoft, leveraging the OpenAI LLM. We now have the ability to summarize PDFs, right? This is something we can do right now on this call. We can literally say: Do me a favor, take a look at this PDF, summarize it, and if it's, I don't know, something related to that company's reputation, and I'm gonna make something up that's tragic, human trafficking, right. Or labor relations problems in some faraway land.
You could pull that content out of that PDF if it is there, survey this PDF, summarize it, and give me any specific comments on human trafficking, right? That's the kind of thing we're now able to do while we're talking. We may get to the point where we can actually automate it and digitize the data also, but we're already at the point where we are now working on these kinds of things in the throes of our day-to-day work. This is what I... This is when Rob talks about 14,000 innovators, it's 14,000 people who are now able to leverage these new capabilities right now. It's just summarize a PDF, summarize a website. Let's talk about it while we're on this call.
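The "summarize this PDF and flag a specific topic" workflow Steve walks through can be sketched as below. This is an illustrative stand-in only: a real pipeline would use a PDF text extractor and an LLM for the summary, where here simple string handling plays both roles.

```python
# Toy version of targeted document summarization: produce a summary and
# pull out any sentences mentioning a topic of concern.

def targeted_summary(pages, topic):
    """Summarize the document and surface sentences mentioning `topic`."""
    text = " ".join(pages)
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    mentions = [s for s in sentences if topic.lower() in s.lower()]
    # A real system would have an LLM write the summary; we take the
    # first sentence as a crude stand-in.
    summary = sentences[0] if sentences else ""
    return {"summary": summary, "topic": topic, "mentions": mentions}

pages = [
    "The company reported stable revenues for the fiscal year.",
    "An audit noted labor relations problems at an overseas supplier.",
]
result = targeted_summary(pages, "labor relations")
# result["mentions"] contains the audit sentence; result["summary"] is the opener
```

The output shape, a general summary plus targeted mentions, mirrors the two asks in the prompt: "summarize it" and "give me any specific comments on" the topic.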
Yeah.
It's pretty cool. It's pretty cool.
Yeah, the opportunity seems pretty big and, as you said, a generational shift. Maybe just talking about investments, right? All of this requires investment, and you've already taken a lot of steps around the investment process, but how do we think about investments going forward? Do you believe that you have the right engineers within Moody's, or do you need to hire more? What about other investments in technology and infrastructure? Are there other things that we need to be cognizant of at a high level? Thanks.
Can I just take a crack, and then you can tell me where I'm missing on this? One, I mean... 50% of this is orientation. This technology is literally brand new. In December, there was, like, an epiphany moment for me that GPT-3 was a very big improvement over what we had seen prior to that, right? GPT-3.5 is even better, GPT-4 is even better, and then you have the proliferation of all the other models we're aware of. Just being oriented the right way is a big part of this. Gosh, I lost my train of thought here. That's a big part of it. Remind me your question again, Ashish?
The investments, like-
Yeah.
Do you have the right resources in-house?
Having the right orientation. People with some experience with any of these things come in handy. Having your data organized is a big deal, right? Being ready to ask a question of a database, being ready to interpret the prompt to determine that this is relevant for this database, is what we're doing now; that's sort of the innovation. Once you dive into the database, do you have your data structured? Do you have a historical record of everything that's happened in the last 25 or 30 years? It's pre-indexed and pre-ready to go, so you can incorporate it into the analysis. The people's attitude, having some people who actually have some experience with this, and then having the resources sort of ready to go, I think, are three assets that we bring to the table here to make this go faster.
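The "interpret the prompt to determine which database is relevant" step can be sketched with a toy router. The routing table and keyword matching below are illustrative stand-ins for whatever classifier or embedding-based router a production system would use over pre-indexed data.

```python
# Toy prompt-to-database router: decide which pre-indexed store, if any,
# a question should be dispatched to before diving into the data.

ROUTES = {
    "ratings": {"rating", "downgrade", "outlook"},
    "economics": {"gdp", "inflation", "unemployment"},
    "real_estate": {"property", "vacancy", "lease"},
}

def route_prompt(prompt):
    """Pick the database whose keywords best match the prompt, else None."""
    words = set(prompt.lower().split())
    best, best_score = None, 0
    for db, keywords in ROUTES.items():
        score = len(words & keywords)
        if score > best_score:
            best, best_score = db, score
    return best  # None means no database is clearly relevant

route_prompt("What is the latest rating outlook?")  # routes to "ratings"
route_prompt("vacancy lease trends for property")   # routes to "real_estate"
```

Returning `None` when nothing matches connects back to the earlier point about the throttle: routing failure is a signal to decline, not to improvise.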
Nick, anything else you'd, you'd, you'd offer?
Part of our strategy here is to be part of an ecosystem that's growing and developing all of this stuff because it's all pretty brand new.
Yeah.
The capital investment required in doing that, if you're part of an ecosystem, is different than if you're trying to build and manage and maintain all of the different componentry required to be able to unlock the technology. Again, we have established relationships with major players in this space. Again, they're leaning into us as much as we're leaning into them to say, "Actually, let's focus on unlocking and investing in and developing commercial opportunities for the things that we have control over, and where possible, just leverage the large language models that exist, onboard them into our platform." So we're picking and choosing the way in which we're making investments by leveraging the advances that are being made in the industry by being a player in that industry.
Yeah.
by being part of that ecosystem. That's allowed us both to do this really quickly and to reduce the initial capital investment that might be required for lots of this stuff, by leveraging the best of breed of what's available and working with those companies to co-develop it.
Yeah. Probably the biggest investment is just learning-
Yeah.
... along with everybody, to see what we can do here. You know, we have a really good sense that this is a great opportunity. We're not exactly sure which way this tide's going to move. Let's just get in the water and be a part of this.
Yeah.
Let's see if maybe we can help people along the way.
That's great. I just wanted to go back to two things that were mentioned earlier in our conversation. One was obviously the Microsoft partnership, and we want to drill down further on that front. If you want to talk more about the Microsoft partnership, both from a technology perspective but also the channel strategy, right? Nick, you mentioned the channel strategy as part of the go-to-market strategy. Steve, also, if you can elaborate further on your relationship with Microsoft, not only as a technology provider but also as a channel partner. Thanks.
Yeah, yeah, yeah. I mean, I'll take a crack at it. Nick, you and I have done... you've done a lot of work here. Microsoft is obviously a tremendous organization, with tremendous reach into fantastic customers, you know, hundreds of millions of customers. They definitely represent to us an opportunity to tap what I would consider to be very sophisticated individuals who might not be customers of ours. We also work with their largest customers, so we have a great symbiotic relationship here. There are some new channels. For example, the work we're doing to create some apps that would be available on Teams, where we talked about, you know, summarizing a PDF in this meeting.
You know, Teams is one of those places where you can summarize a PDF in this meeting and present it on the screen. These kinds of capabilities resonate out of the box, and they represent opportunities for us to bring our content, in a democratized way, to a new set of consumers that we wouldn't normally have access to. Whereas normally we're selling, you know, subscriptions to large institutions, this would give us the opportunity to help people that aren't necessarily a part of those large institutions, and maybe they, too, would like to generate text that is grounded in facts, right? Just like the big boys. This is the kind of thing I think we're looking forward to. Nick, you could offer a lot more, I'm sure.
Well, it's worth noting that we offer an opportunity to them. That's part of this arrangement. They're trying to establish platforms and canvases that service information workers, and we have an unbelievably valuable asset-
Yeah.
that allows information workers to have more certainty. This is as much about us being able to add value to their offering, particularly inside the Microsoft ecosystem, as anything else. A part of the relationship is about co-development. It's about jointly working out where there's additional value that can be added into applications inside the Microsoft ecosystem. As Steve mentioned, that opens up new opportunities, because there might be types of individuals or customers that we might not necessarily have been able to touch before that can now start to leverage and access our grounded content, because that barrier is a little lower. It's worth just noting one thing, because I know that there has been some confusion in the market.
As part of that relationship and joint development, we both use the phrase Copilot. It's worth being specific that these are different things with different go-to-market strategies. The Moody's Copilot is the platform that we've built, that we're using to build commercial applications like the Research Assistant. It's built on Azure underlying infrastructure and Azure technology. That's part of the relationship with Microsoft. The Microsoft Copilot is their own Copilot, leverageable inside the Office 365 environment, which is different from the GitHub Copilot, which is a specific application for engineers to use. Each of them has a different go-to-market strategy and a different approach. They share some similar philosophies.
We've shared interest in how we're proposing to build them. The idea of a Copilot, the idea that this will help a human be more effective and more efficient, is really at the core of what we're trying to deliver. It kinda depends on whether you use a capital C or a small c when you're describing the copilot. It is just worth noting that the blurring of the lines because of our relationship with Microsoft doesn't exist in those two particular applications.
One thing I mentioned a little earlier, I'll just add: we're very excited about working together with our customers in common. I think we bring to the table some content and some expertise in financial services that we think can add a lot of value to their customers in the financial services space, and they, of course, have some technologies that would add a lot of value, in order for us to bring those things together. We're excited about the potential of working together with those customers that we have in common, especially.
Yeah, that's great. As you mentioned, Steve, there's an opportunity to work together on customers that you have in common, but it also seems like there's a huge opportunity of accessing the SMB customer base...
Yeah.
... through this particular channel, and so that's a significant expansion of the TAM. Obviously, all these assistant products are also expanding the TAM that you already have in place, so I think that's very exciting. Maybe if I can just ask a quick question on the technology itself: you decided to go with OpenAI, obviously the market leader in GenAI, but there are other providers. I was wondering, was there a specific rationale for using OpenAI?
Then just in terms of cloud strategy also, I was wondering if you could provide some color on your hybrid cloud strategy and how that is supporting you. As both of you, Nick and Steve, have talked about, having the right infrastructure and the right data is also helping you propel your strategy faster. Any color on those fronts? Thanks.
Steve, I missed that question. I dropped off there for a minute. Let me know if I can help.
No problem. I'll do the large language part. You can talk about cloud infrastructure and our hybrid cloud approach. As it relates specifically to the models, of course, we're impressed by the power and value of things like the 4.0 version of the OpenAI model. We use the Azure OpenAI model inside the Copilot environment, but we have onboarded into our environment all of the different large language models that exist, and we're experimenting and exploring with the idea of being able to have a really specific strategy around model choice. Again, this is a growing and developing field, but because there's a variable cost associated with your ability to provide an answer that leverages a large language model, again, we're on the early adoption and sophisticated end.
We're starting to now have conversations about: How can we engineer our products so that we can reduce the cost-of-goods-sold element by picking and choosing which model we use to derive which parts of an answer? We might do some pre-processing in a model that uses fewer tokens, for example, to create a prompt that's an input into a more sophisticated model, where we wanna leverage the good parts of the vectors that exist inside that model. More and more, we're starting to explore: Can we have a really specific strategy around model leveraging and model use, based on a deeper understanding of what the customer need is and how we might satisfy it? Again, a growing area of experimentation and exploration inside the company.
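The cost-tiering idea, pre-processing with a cheaper model so the expensive model sees fewer tokens, can be sketched like this. The per-token prices and both "models" below are hypothetical stand-ins; the point is only the shape of the cost arithmetic.

```python
# Toy two-tier model pipeline: a cheap model condenses the question, and
# only the condensed prompt is sent to the expensive model.

CHEAP_COST, EXPENSIVE_COST = 0.001, 0.03  # hypothetical $ per token (word)

def cheap_condense(question):
    """Stand-in for a small model: keep only the substantive words."""
    stopwords = {"the", "a", "an", "of", "for", "please", "me", "tell", "about"}
    return " ".join(w for w in question.split() if w.lower() not in stopwords)

def expensive_answer(prompt):
    """Stand-in for a large model call."""
    return f"Answer({prompt})"

def answer_with_tiering(question):
    """Condense with the cheap tier, answer with the expensive tier."""
    condensed = cheap_condense(question)
    cost = (len(question.split()) * CHEAP_COST
            + len(condensed.split()) * EXPENSIVE_COST)
    return expensive_answer(condensed), round(cost, 5)

question = "Please tell me about the outlook for regional banks"
naive_cost = len(question.split()) * EXPENSIVE_COST  # send everything to the big model
answer, tiered_cost = answer_with_tiering(question)  # tiered_cost < naive_cost
```

Even in this toy version, the tiered path is cheaper than sending the full question to the expensive model, which is the engineering lever being described for cost of goods sold.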
Ashish, I had some connectivity problems there. I don't know if you guys could tell, but I missed the tail end of the question about cloud generally?
Yeah. Yeah, it was just more about, like, models and then cloud also, your hybrid, multi-cloud strategy-
Yeah.
... how is that coming along, and how having that strategy in place has helped propel the GenAI initiatives.
Yeah. I mean, just if I start, strategically speaking, we are investing in a platform element here when we talk about the Moody's Copilot capability that we are leveraging in the Moody's Research Assistant, right? That's a great example of a platform-level element that we can then leverage to assemble products faster wherever, you know, that might make sense, and that's true across the board. We're doing a lot of work sort of underneath the floorboards to make it easier for us to assemble new value propositions for customers. Sometimes that happens at almost the infrastructure level, and sometimes that happens, you know, within the SaaS platform that we've developed for banks, or the SaaS platform we've developed for insurance companies, or maybe the SaaS platform we've developed around KYC and onboarding customers. You know, we're intentional about creating capabilities that are cloud native.
Sometimes they are built on the platform capabilities that are provided by the cloud providers. Essentially, that's how you become cloud native, right? Then we're also thinking about: How do we set ourselves up so that we can be as adaptable as possible for our customers? You know, certain customers have certain preferences. We wanna be able to accommodate them in whatever way makes the most sense. You have a great example: this Copilot is today built in an environment that Microsoft provided with OpenAI. That's a great example of a component where we're benefiting from that cloud relationship. We have some other ones, of course, where we're cloud native thanks to the work at AWS or others, and that's relevant, too.
We're focused on adaptability and customer orientation and doing what makes the most sense for them, because that's the best way for us to be as relevant as we can be. Flexibility and adaptability are pretty important to this strategy over the long term.
That's great color. That was a very helpful conversation. Again, thanks for giving us this opportunity. The way I think about it, as you mentioned, it's a generational opportunity: huge, expands your addressable markets, and creates much more room for innovation and for the pace at which these new products come to market. I was wondering if you had any closing thoughts, both Steve and Nick, anything that we didn't cover or anything that you want investors and us to take away?
There's one thing we didn't quite get to, or we started to address but never really said. The Copilot concept, this Research Assistant concept, is just the first example of us bringing content sets and data sets and expertise to the table that historically were expensive for us to deliver, right? Usually, a customer would buy research from us or asset liability management tools from us. We're at the point where, if you wanted to investigate this particular item in your ledger, in your balance sheet, and understand: how did we get there, right? What's the story behind it? You can employ the Research Assistant to dive in on that in a relatively low-cost kind of way. It's very accessible.
The cross-selling opportunity here, to drive bigger relationships with some of the best customers in the world, is tremendous. You know, this idea of landing with a bank and developing relationships that expand nicely over time, with nice expansion rates within those relationships, has been something that we've relied on literally for decades as our growth engine. We're now at the point where we have a new tool that I think facilitates that even better, because you don't have to be afraid to ask the question. We have a lot of work to do commercially, don't get me wrong, and product-wise, but you don't have to be afraid to ask questions. What would happen if the economic scenario changed in light of this situation or this thing I'm looking at? I can now deploy my economic details on this property.
I can deploy my commercial real estate capabilities. With respect to climate, I can deploy my climate capabilities. I haven't talked about credit yet. I haven't talked about financial crime yet. These are questions that everybody wants to ask, but it was just too hard. The cross-selling and the ability to expand these relationships, we think, is just tremendous. It's a ratification of the kind of land-and-expand approach that we've had, and I think it's going to drive some really good, healthy growth in the future.
Yeah. The thing that I'm most excited by, Ashish, is that sometimes when you have these kinds of brand-new technologies come to market, there's a sparkle, and then the sparkle goes away, and the dust settles, and then people are, I guess, less interested, or there's less opportunity. This is one of those things where, after the sparkle went away, we found that there's more opportunity, more work to do, more ability to provide value to our customers. Even after the current hype cycle goes away, what we're learning is the valuable role that proprietary data plays in your ability to leverage this at scale and for enterprise applications. That is right at the core of our strategy.
We think that part of the reason why we've jumped in with both feet is because we think there's an ability to deliver additional value to customers. What we're finding in practice is that that's absolutely true, and now we've got this field in front of us to work out how we prioritize it and how we get that value into the hands of our customers as quickly as we can.
That's great. Nick, Steve, this was a very helpful conversation, and I'm really excited as well. Thanks for sharing. Thanks for the time.
Thank you very much, Ashish. I hope all is well with everybody on the Webex here. Nice to see everybody. Thanks.
Thank you.