Ladies and gentlemen, please welcome to the stage John Kristoff, Head of EXL Investor Relations.
Thanks, Alison. Welcome and good morning. We're really excited to have you with us here today on this rainy New York morning. We've got a lot to cover, so I'll try to keep my opening comments to a minimum. We are live webcasting this event, and the slides will be posted to our Investor Relations website later today. A replay of the event will also be available. As we're webcasting, we will have no breaks; it's a straight-through webcast. There is coffee in the back of the room, and restrooms are in the hallway behind us. Feel free to get your coffee, make a phone call, what have you during the event, because again, there are no scheduled breaks.
We're holding our questions for the end of the formal presentation, so we'll do a group Q&A up here on the stage after all the speakers have done their formal presentations. Finally, we're going to have lunch in the room over here to my left after the Q&A session for those who want to stay and have lunch with the presenters and the rest of the leaders of EXL. We welcome you to do so. Of course, there are going to be a lot of forward-looking statements made today. That's the whole point of the event, so we caution you on that. Just a quick run-through of the agenda here. Rohit will kick things off.
He will provide an update on our progress on our data and AI strategy over the past year since we last met, and most importantly, on how industry trends are driving AI into the workflow and how EXL can benefit from that. Anita will then cover how we're leveraging our domain, data, and AI to strengthen and sustain our strategic competitive advantage. Vikas will share how we're creating horizontal capabilities and solutions to make AI actually work in the workflow. Andy will then come up and talk about creating our differentiated solutions and our own IP, which is becoming increasingly important. Vivek will then show how we pull it all together and make AI work, not only generating value for clients, but generating value for EXL. Finally, Maurizio will conclude with a review of our financials. Without further ado, please join me in welcoming Rohit.
Oops. All okay?
I think that's two years in a row I've done that.
There you go.
Good morning, everyone. Really glad to have you all here. The supercycle of AI started a couple of years ago. When it started, I think there was a lot of nervousness. There was a lot of anxiety. There was a lot of fear. There was a lot of uncertainty as to what this might do for our business model and for others. I think the picture is now becoming a little bit clearer. I think the forward pathway for EXL is becoming much, much more certain. It is becoming much, much more visible.
What we'd like to share with you today is our view of where we think the market is going, why that is a huge advantage for EXL in terms of the skill sets and the capabilities that we have today, and why we think this is going to be a sustainable advantage for EXL to win in this marketplace. This could not be a more exciting time period for us in terms of the change and the speed at which things are evolving. We want to share with you how we've thought about this. Number one, there's a lot of stuff that is going to happen with the AI supercycle. A lot of new applications, a lot of new technologies, large language models, disruptions taking place in terms of the way things are getting done.
Our focus is fundamentally and predominantly about how we can help our clients embed AI into the workflow and take advantage of it. That's our goal: helping our clients over the next one, three, five, 10 years embed AI into the workflow and take advantage of it. We think there are three critical skills that are necessary for being able to do that. Here's why we think that this change that is taking place is actually playing to our strengths and to our advantage. The market seems to be coming towards us. That's something which EXL is a big beneficiary of. We'll talk to you about our skill sets and capabilities around data and AI. We'll talk to you about what AI in the workflow really means.
We will end that with why we think this is a sustainable and a sustained advantage. First off, we started our business 26 years ago. We started off as a BPO company. We knew that this was an important thing for our clients, but we never knew it was going to be ultra important in the world of AI. If you want to embed AI into the workflow, the very first thing is you need to have deep domain knowledge because that becomes the framework, that becomes the context for being able to ground the AI models and to be able to leverage AI. We have got domain capability deeply rooted in the entire breadth and depth of the company. We started to invest in analytics way back in 2005, 2006.
We started off with the acquisition of a small company called Inductis, and we started to build up our capability around data analytics. Again, for us at that point in time, it was just about building up intelligence along with the operational processing capability. Today, that foundation that we built with data analytics is the bedrock for our deep understanding, knowledge, and mastery of data. You all know that data fuels AI. Particularly when you want to be able to leverage AI in the workflow, this is a critical ingredient. Finally, we made an early pivot towards digital, towards focusing on how data can be used to develop insight, take those insights into actions, and deliver better business outcomes for our clients. We started to play around with advanced analytics.
We started to play around with machine learning. And today, that skill set is a critical component of being able to embed AI into the workflow successfully. We'll talk to you a little bit about where the market stands as far as successful implementation of AI goes and where we stand. And there's a stark difference. In our minds, having these three capabilities is really, really critical. So what do domain knowledge and subject matter expertise really mean in the sense of being applicable and relevant for AI in the workflow? Number one, we manage over 2,000 operating processes and workflows. Just think about that. Just the breadth and depth of capabilities that we have, that's enormous.
I can tell you, when we trained our very first insurance LLM, it was these subject matter experts and these individuals who understood these processes that were critical in training that large language model and getting to a much better outcome. We have a number of client relationships where we already work, and these clients have trusted us to run their operating processes. We run their operations for them. If we are going to be tasked with embedding AI into the workflow, we are a natural, logical partner for our clients to help them modernize their operating stack and take advantage of AI. Finally, we've got 35,000 colleagues who do this work day in and day out. They are the ones who understand the intricacies of each and every decision and each and every part of that process.
That's a critical step in terms of having knowledge, having business context, and being able to put together a framework and having the canvas to be able to paint this AI picture. When it comes to data, we've done a number of different engagements with our clients. We've done over 200 implementations. We know how to use data, how to store data, how to make data available, what kind of model governance needs to be in place, what kind of data management rights need to be in place. We understand data. We've got more than 15,000 analytics and data professionals. That's a huge part of our workforce that is familiar and knowledgeable about data. Finally, we continue to strengthen our capabilities around data management.
Over the last five years, we've made three or four different acquisitions, bringing new capability into the company and strengthening our data management presence. Finally, we've started to invest in building up strong strategic partnerships with folks that are embedded in data. We started off with our partnership with Databricks just about 18 months back. Today, we are a strategic partner for Databricks. The pace at which we are getting our employees trained and certified on Databricks, the way in which we can implement and execute data on Databricks, that's becoming industry-leading. That is something which is giving us a tremendous leg up as far as data is concerned. When it comes to AI, there's a lot that we've done. We've got 4,500 employees who are what we would call ready to work with AI.
These are people who can use AI and help embed it into the workflow. We have created a number of solutions. We have created accelerators. We have created large language models which are specific to industry verticals. We have 16 AI agents already deployed. This is not just creating them and hoping that we will be able to implement them; these are Agentic AI agents that we have deployed. We announced the launch of our EXLerate.ai Platform for developing Agentic AI. What this does for us is allow us to build new solutions and new capabilities very, very rapidly. We can take all of those accelerators that we have, put them into the EXLerate.ai framework, and adopt this very, very quickly. Finally, we continue to strengthen our partnerships. Our partnership with NVIDIA is unique.
It's a partnership that both sides have deeply invested in. From NVIDIA's perspective, they were seeking a partner that has true domain knowledge and that has customer access. From our standpoint, using their framework, NeMo, and using some of their software and technologies allows us to be able to optimally use the compute power of their chips. That is critical when you want to talk about efficiency at scale and when you want to talk about the cost being lowered and you want to talk about speed. For all those three things, that's a very, very critical relationship. Finally, with all the three hyperscalers, we've got strong and deep partnerships.
One of the things which is becoming very, very apparent to the hyperscalers is that a company like EXL, while we might be small relative to the size and scale of their other relationships, is one company that is helping drive usage on the cloud. The critical question for all of these hyperscalers is how to drive usage and consumption in the cloud. That is what we are impacting. That is why we become very, very strategic to these hyperscalers. What does that mean? What that means for EXL is that we've been able to grow our data and AI-led revenue. This is a new metric that we have started to share from this year onwards. We're going to report this out every quarter. This really shows the complexity and the kind of work that we are doing.
It will be a surrogate metric for how we are advancing on the AI journey. That metric, data and AI-led revenue, has gone up from 38% in 2020 to 53% at the end of 2024. We think this number will continue to increase as we go forward. It's not only what we think about ourselves; it's also what the industry analysts and the folks that look at other players in this space think about EXL. We are really, really proud that we are categorized as a leader when it comes to AI, data, digital, and implementation. The final hallmark of success for EXL is how our clients view us. We've always carried a very high NPS score with our clients.
With the adoption of AI, because there was such a high failure rate associated with the implementation of AI, we were worried that that might be something that takes a knock. Actually, the opposite has really happened. We have become the chosen partner by our clients, not only for running and operating their processes, not only for helping them embed analytics into the workflow, but helping them embed AI into their operations and into their business. Last year, we received the highest NPS score from our clients. We have always been above 80%. Last year was way, way above that. That just shows how satisfied our clients are with the kind of work that we are doing. What this has resulted in is something which you already know, but this is what we are proud of.
We've been able to grow our revenues at a very fast clip and grow profitability even faster. Let me spend just a couple of minutes on what AI in the workflow really means. If you think about the start of the supercycle of AI, in the early stage, ChatGPT had just been introduced. LLMs were just coming onto the scene. At that point in time, what was really helpful was creating new content, being able to search, and being able to act as a copilot. Those were some of the initial use cases of early-stage AI. Today, the whole world has gravitated towards Agentic AI. That means a completely different way in which AI is going to be used and adopted in the workflow.
From our standpoint, in the past, the implementation of AI was a lot more static and rule-based. You would implement AI into the workflow, but anytime there was a change, you had to change it all over again. Agentic AI allows us to accommodate that change. The processes can evolve. The processes can be dynamic and keep changing. The Agentic AI will automatically adjust to that. That is very, very powerful. If you think about decision-making, earlier AI provided a lot of support and help. Now you can actually do goal seeking. What that means is you can complete tasks. You need to train your agents so that that particular task can be completed.
As long as the goal is constant, even if the process workflow changes, the agent will adapt and be able to manage it. If you look at how even a company like EXL was using AI a year or 18 months ago, we used to have AI applications on the side, alongside all the platforms, the mainframes, the main applications, and the systems of record that our agents used. Today, it's all integrated. There is only one screen and one workflow that our agents look at.
That is very powerful because when they work on a client's operating process, they do not have to toggle between the client's operating system and go to an AI, seek the help, come back into the workflow, and be able to constantly kind of go back and forth. What they see is an integrated screen, and they are able to work on this seamlessly. That is a big, big shift. For us, this change about AI moving from outside the workflow into the workflow is huge. Of course, what it means is we are no longer doing POCs, and we are no longer trying to establish whether AI works or does not work. Now we are ready for large-scale implementations and doing this at enterprise-wide scale. That is something which provides, obviously, a huge business benefit to our clients and creates a tremendous amount of impact.
I've spoken to you about these three ingredients which are relevant for embedding AI into the workflow. I'll just say, standing alone, these capabilities are great, but the real magic happens when you can integrate all of them. Having deep subject matter expertise in the domain is great, but unless and until you can combine it with data that is AI-ready, and you can iterate with the AI to make it effective and make it work, bringing all of these things together, it's really of little value. The real magic is in being able to integrate these three capabilities. What we are doing is creating a lot of AI assets of our own, which are proprietary and which we can use. At the same time, we've always been a technology-agnostic company.
What that means is we have no issue in terms of using best-of-breed capability that exists outside of EXL or outside of our client's domain as well. It is really the ability to orchestrate these three skill sets around data, domain, and AI, and then be able to use all the accelerators, which might be EXL proprietary, customer proprietary, or third-party proprietary, and bring that together. That is a critical skill set. What this is doing for us has become very, very clear to us. In the past, we used to fear whether our entire TAM would shrink, whether the work that we are doing would get cannibalized and taken away, and whether AI was a threat to our business model.
Last year, when we came and presented to you, we shared why we think that TAM is actually increasing. Not increasing by a small amount, but actually tripling in size. We now think we made a mistake. It's not tripling in size; for us, we think it's going to go up more than four times. Therefore, the TAM opportunity for us is close to about $1.2 trillion. That's a huge market space for us to be playing in. The reason for this is the work that needs to be done to keep the AI effective in the workflow. You have to work on it on a constant and recurring basis. It is not a once-and-done exercise after which you no longer need to work on it.
The reason you need to keep working on it is the data sets change, the process changes, the customer changes, the product changes, the geography changes, the technology that's available in the third-party ecosystem that changes. There are so many things which are changing at a faster and faster pace that you need to be able to help your clients in terms of implementing that and keeping that relevant. We think, actually, the market opportunity for us continues to expand. That's why we believe this is playing to our strengths, and the market is coming towards us. Finally, I'd just like to conclude by talking about how we are taking this flow forward. We are very clear that we are targeted on helping our clients embed AI in the workflow. That is our main goal.
We are very clear that domain data and AI are the key ingredients for making that successful. This needs to be done in an integrated manner. We have actually changed our operating model and the way in which we are organized so that we can do this on a sustained basis, and we can be much, much more effective. What I mean by that is we are creating a lot of horizontal capability. These are capabilities that can be used across industry verticals, across clients, across different ecosystems. We are creating a lot of this capability. Take, for example, the EXLerate.ai Platform that we have, or some of these 16 Agentic AI agents that we've created, or the LLMs that we've created. These are components that are going to be reusable.
Therefore, we are creating a lot of IP and capability horizontally. Finally, we are creating use cases which are specific to our clients in our industry verticals. We are going deep into our industry verticals and into our clients' operating processes and making it really effective as a use case in that client's industry vertical. We are going horizontally wide and vertically deep, which is the shape of a T. That is something which we think will be really, really necessary as we go forward. There are a lot of players that operate on the horizontal stack. There are very few who operate on the vertical stack. I can tell you there are just a handful of players that will be able to manage both the horizontal stack and the vertical stack and bring that together.
What this is going to do is it's going to create three things which we believe are going to create a sustainable advantage. In this whole game of implementing AI into the workflow, we think there will be three things that will be really, really valuable. Number one is going to be proprietary data. Ultimately, those companies and those providers that have access to proprietary data will stand out and will have a moat around their business model. Number two, those companies which are able to create proprietary IP in AI will have a sustainable advantage and will continue to benefit from that. Finally, those companies which are able to orchestrate the horizontal capability and the vertical capability together and do that at speed, that's going to be a sustainable competitive advantage.
We think about proprietary data, proprietary IP that we are creating, and the ability to orchestrate at speed as being the key ingredients for creating sustained differentiation. I'll stop there. With that, I'm going to ask Anita to come in and share with you how we are continuing to deepen this sustained advantage that we have. Thank you.
Thank you, Rohit. Good morning, everyone. Thank you for being here. I'm Anita Mahon. I joined EXL five years ago from IBM Watson. I've spent the last three years leading our healthcare business and have just transitioned to lead our strategy efforts again. I'm quite thrilled to have this opportunity, given all of the opportunity in front of us with AI in the workflow. I will be talking about how we've created a sizable lead by integrating the strengths that Rohit just talked about and what we're doing to really double down on those strengths and create more distance between us and our competition. Three key messages for today. First is that that unique combination of domain, data, and AI has been underlying our growth as we've outpaced the market, and that it will continue to sustain us ahead of the market going forward.
Second is that these strengths have led to leadership in our focused industries. I'll show you some evidence of the impact that has created. Lastly, I'll cover some of the ways that we are enabling the execution of our AI in the workflow strategy, investing to deepen our moat, and how we're realigning to address the market more broadly. OK. It's hard to say more than Rohit did on this point, but I do just want to spend a minute on how we really are set up to win in AI in the workflow. I'll go through the circle twice. First, the inner part is how we are integrating the domain experience and knowledge, the data management and wrangling skills as well as data assets, and the AI solutions IP and orchestration capability to create something unique, truly differentiated client value.
Then going around the outer ring, what this does for us is having that differentiated approach attracts more clients, lets us win more effectively against our competitors, creates a broader base of domain experience, which generates more insights and shows us where to innovate, and also lets us invest more in creating solutions and IP. Just another minute on the domain expertise and how this works down the right side of the slide. We have worked for many years alongside our client experts, and we get visibility across a multitude of processes, data flows, and enterprise systems. That data advantage, where we started with data and analytics far earlier than our competitors, has really given us a profound expertise on the very same data that is fueling the AI solutions today and in the future.
This also gives us that bird's eye view of where the opportunities are to bring in AI and create an impact. We invest and create those unique AI solutions. We are in a position to reimagine workflows again. What I really want to share with you is that the unique ability to integrate these three strengths really sets us apart from our competitors. No consulting company, no tech services firm, and certainly no startup has the depth and breadth of expertise and the ability to bring it together that we have at EXL. This flywheel effect expands our wallet share in our existing strategic clients, and it is also expanding our market share. That is why we think we're extremely well set up to win in AI in the workflow.
Now I'll share with you how this has led to market leadership in our focused industries. In insurance, we believe we are a market leader. We serve nine of the top 10 insurers in the U.S., and we have coverage across the industry in property and casualty, life and annuity, brokers, reinsurers, you name it, and we're there. When you take into account that we are adding clients all the time, a 10-year average client tenure is quite unique in the industry. We have a large base of employees that have built insurance careers here at EXL. We've developed our insurance expertise through academies and have a very impressive base of insurance professionals now. Talking about some examples of the data assets that we've built up, one is around life insurance policies in our proprietary LifePro system.
Another example is our 20 years of property survey data. Most exciting, perhaps, for today's event is that we really have been able to innovate and bring our domain, data, and AI together. We were first to market with an insurance LLM, which you'll hear more about today. Among the 17 insurance AI solutions that we have in use with clients, we have both insurance-specific solutions, like the LLM and our subrogation offering, where you see our $2 billion in value on the right, and horizontal capabilities that we've specialized and adapted for the insurance industry. The revenue growth rate shown is EXL's four-year revenue growth in the insurance industry, which we believe is outpacing the market.
An example of the differentiated value that we create is the $1 billion just from digital transformation work that we've done, bringing domain, data, and AI together for clients over the last three years. I will also just highlight that Rohit shared with you a number of market research analyst recognitions we've received in data and analytics and in AI. We also receive those recognitions in insurance, in life and annuities, and in property and casualty. That is also market recognition of how we have this unique set of strengths. Let me move into our market leadership in healthcare and life sciences, where I've been the last three years. Here we are managing a number of high-value processes for our clients.
Something interesting about our industry professionals here is that a number of them are working in an outcomes-based solution, payment integrity, that we operate on behalf of our clients. They are not only healthcare professionals that we have developed and built, but their ingenuity creates new ways of improving our offerings, which then drives more value for clients as well as for EXL. Among the data assets we have built up, the $3.5 billion in healthcare claims has accumulated across multiple analytic solutions that we have been operating for clients over the last many years. The 44 million patient records are longitudinal patient records and benchmarks that we can create from the data we manage and then use for real-world evidence studies for life sciences companies and others. Here, the healthcare AI solutions are specialized to healthcare and to specific high-value processes within healthcare.
For example, clinical intake for utilization management is one solution. We also have an intake solution for case management, clinical summarization and intake for appeals, and clinical summarization for case management, and we are continuing to build on those solutions. The AI-resolved queries, interestingly, represent work across three different clients in both healthcare and life sciences, including two Fortune 10 healthcare and life sciences companies. These are fully resolved AI queries, meaning something that in the prior world a human would have had to consult a number of systems to answer, a work item or a user inquiry, is being fully cared for in our AI solution. That is bringing real quality improvement and efficiency to our clients.
To give you a sense of just how much opportunity there is here, one of my clients told me that in his claims area they were going through a systems rationalization. I asked, how many systems do you have in your claims operation? 175 different systems running the claims adjudication, review, appeals, and quality process. After they got through a multi-year enterprise systems rationalization effort, they would still have more than 100. It is not like there is one workflow that some SaaS workflow provider is running for our clients deep in their operations. We will be continuing to improve and bring AI into their operations for many years to come. The $2.2 billion in value from payment integrity, again, outperforms the market. Vivek will talk to you about that in much more detail later.
I will keep moving here to banking and capital markets. Our industry leadership here is also strong. We serve eight of the top 10 U.S. banks. Our heritage here is in the data and analytics space. And we have experts across the value chain. It does give us a bit of a lead in being able to understand how to bring AI to these clients. We're one of a few providers in the U.S. that has access to all three credit bureaus. We have the best data available to enable marketing insights and customer engagement. I'm happy to be able to share the $8 billion in delinquent debt because at last year's investor day, we showed the Paymentor solution. It's continued to grow: more traction, more clients. Of that $8 billion of debt, we have recovered $800 million.
These AI-supported consumer interactions are across a number of our banking clients, two of which are in the Fortune 500. These AI solutions and accelerators are a combination: full solutions like Paymentor; broader solutions like Agent Assist, which Vikas will talk about later, that have been specialized for banking; and accelerators in areas like bank statement insights. The $1.35 billion in fraud represents avoided fraudulent and illegal activity across a number of our banking clients and our payment network clients. OK, coming back up to the enterprise level, I'll just talk for a minute about how we're setting up to execute our AI in the workflow strategy. We are building the business in the T-shape, as Rohit described.
What we are investing in is the innovation and IP that supports both capabilities and industry vertical solutions. Capital allocation is a very important process for us. We have recently enhanced our investment process. We have formalized our R&D into new AI technology. We have increased our investments in capabilities by 4x over the last four years. You are going to hear a lot more about the solutions that we are investing in. We are also continuing to invest heavily in our technology talent and in our partner ecosystem. Rohit talked about this a bit, but three ways we are continuing to build up our strengths there are putting our solutions on the partner marketplaces, investing in our go-to-market to jointly pursue key clients together, and continuing to invest in our capabilities and external certifications of those capabilities, which now cover more than 1,000 of our employees.
Also, of course, we have a build-buy-partner approach, and there's the buy piece. We're cultivating, especially in my new role, proactively and thoughtfully, a pipeline of acquisition candidates. We think this will also help us accelerate our implementation of the strategy while, of course, continuing the very strong discipline on strategic fit and value creation. I do want to say one thing about innovation. Hopefully, you're starting to hear that innovation really permeates all that we do. It's everyone's responsibility here at EXL. We had a really fun upleveling of our AI culture last year as our venture team, some of whose leaders are here, ran a program we called Idea Tank, which generated over 1,500 ideas and engaged thousands of our colleagues.
We had a finalist event right here at the Nasdaq site with judges from NVIDIA, a VC incubator, and some advisors. We funded the six finalists, and three of those ideas are already in play, being implemented with clients right now. Lastly, the operating model that we have put into place has really set us up to pursue this opportunity in both the horizontal and the vertical that you saw in the T. The industry market units are bringing us closer to our clients and putting more senior focus into developing C-suite client relationships. This is becoming more and more important as we become the AI partner for our clients. Our strategic growth units are further deepening, transforming, and innovating around our key capabilities.
The way this helps us is in solving more complex client needs across end-to-end workflows. It clearly allows us to broaden our client relationships and capture more wallet share. It also enables us to put together larger, more integrated value propositions, which will continue to increase our large deal opportunity, and it supports our innovation culture. OK, so just wrapping up, we really believe this model cannot be replicated, because all of our domain, data, and AI strengths are strongly intertwined and have taken multiple years to build up. So we believe we have some pretty strong barriers to entry and that we will be able to continue to build on these and develop them further in our pursuit of AI in the workflow. Now, I'm going to welcome Vikas, who will be covering how we're making AI work in the workflow.
Thank you, Anita. And good morning, everyone. My name is Vikas Bhalla. Next week, I will complete 24 years with EXL. It has been a wonderful, wonderful journey. As you can imagine, in those 24 years, starting from literally zero, I have done many roles at EXL. For about 10 years, I ran the insurance business at EXL. As we moved into the new operating model, I have now taken on the responsibility of running AI services and operations horizontally at EXL. Rohit and Anita spoke about AI in the workflow. What I'm going to talk about is how we make AI work in the workflow. We're going to double-click on that and also talk a little bit about how we're building the organizational capabilities, and how the strength and depth of our domain operations and analytics businesses, built over the years, are helping us do that. Three key things.
The first is that we are changing the shape of our clients' operations with AI in the workflow. Second, our strong analytics foundation and business, which is a very large part of EXL's overall revenue base, is helping us accelerate AI capabilities. Third, and very importantly, we now have a very strong engine at EXL to help build new solutions and new capabilities. Getting into a little more detail: when we say that AI is helping us change the shape of operations, let's break that into two parts. One is operational sustainability. That is, how do you take the variability out of operations, particularly the variability related to the human element? The second is, how do you create more value, moving up at disproportionate scale?
The second is our analytics business, which is a very large part of our business and a very strong foundational element in building up our AI capabilities. As new demand for AI emerges, we are able to use our strong analytics capabilities to scale up on AI, both for EXL and for our clients. Finally, like I said, a very strong engine for creating new solutions and services. As I go through these specific elements, I will use specific examples and solutions. Very importantly, we're going to talk about the engine that is actually creating these, because this is all about how we now have a much more enhanced capability for continuing to build these services and offerings.
Let's talk about the first one, which is changing the shape of the operations and focus on sustainability. CX. Now, one of the things I'm sure everyone knows is that CX is one of the areas where there is a lot of focus on what AI can do to change the way that customer service operations work. Now, customer service operations is not a very large part of EXL's business. What's happening is we are able to now infuse AI into customer service operations. We are finding this increased demand from our clients because we can now run these operations for them using AI, which is infused in it. We can also help them transform this. Vivek is going to talk about this in much more detail with a specific banking client example. Let me just take you to the framework that we have created.
Now, what is the big challenge in customer service operations? There are two or three. One, broken, fragmented systems. Anita spoke about it on the claims side for one of the clients; they have 175 systems. What happens is that when a customer is calling in, the agent taking the call does not have access to the data needed to manage that call well, because fragmented systems and broken data flows are a big problem. Second is agent skill variability. To handle a multitude of different kinds of calls, you don't have agents at the same level of expertise. That leads to a big issue. The third one is the inability to arrest failure demand.
When something does not go right in the first place, it leads to multiple calls coming in, and as a result, customer service operations are typically the most volatile operations in the industry. What we have done here is create a framework using AI that helps us totally transform the way customer service operations work. Let's use the example of member calls for a large U.S. healthcare payer. We're going to focus on the center, which is where the operation is running. The calls are coming in. Remember the three issues: data is not available because the systems are fragmented and broken, the data flows are not there, and agent skill variability is a big challenge.
As you go through the infusion of AI, the first question is, can certain calls be handled purely by AI, with no human interface needed? In this case, among the member calls coming in, there's a certain call type: prior authorization. This is where the member is calling in to figure out policy coverage and to get prior approval for a certain procedure the member has to go through. We converted this into a purely AI-driven interface for the clients and their members. Today, 67% of prior authorization calls are actually handled by EXL's Exelia, which is our virtual AI agent. The question now is, are the members happy with it?
The members are super happy with it. As we have fine-tuned this model, we are finding that the adoption rate has gone up to 67%, and the NPS scores are very high, at 95%. The members are happy talking to a virtual AI agent as they go through a prior authorization. Now, let's look at the other calls, which are more complex and go through a multitude of scenarios. What happens there? What we've created is AI agent assist. While the call is actually happening, there is an AI agent listening to the call live, collecting all the information from the multiple broken, fragmented systems, bringing it all together, and also providing nudges to the human agent on how to best handle the call.
What we're doing here is taking the human variability out, because now you have an AI agent listening to the call and giving the nudges. As we have implemented this, we have found a 20% productivity improvement, because AI can collect all the information sitting in those fragmented systems, bring it to the human agent, and also help with administrative tasks like summarization and so on and so forth.
The real value of this is the personalized interaction with the member because now we are able to provide these AI nudges to the agent in terms of assessing vulnerabilities, in terms of recommending the next best step, in terms of making sure that the agent understands what are the policies that need to be managed through this whole interaction. Because all of those nudges now are available to the agent, the call actually goes much better. What we found is that while we have productivity improvement, the biggest benefit is that the customer NPS goes up by 20%. That means better retention and better overall customer satisfaction.
Once you start implementing this framework for CX operations, you find that the CX operations have totally changed shape, because now virtual agents and agent assist are basically managing the complete customer service operation. On the bookends, you're finding ways to make sure that agents come into the operation better prepared and get better feedback for performance improvement. These processes are typically capacity-constrained in organizations. When you can use AI in the workflow for training and for feedback, you can enhance that capacity quite significantly. We are implementing this framework for all the CX operations we are currently running, which, like I said, is not a very large part of EXL.
What is very important is that clients are now coming to us and saying that rather than spending a lot of money on manual operations, we can help them with AI-infused customer service. This is an example of how we have created a framework using AI to fundamentally change the way customer service operations run. Let's move forward to creating value. For that, we are going to start with a demo, a demo on claims.
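To make the CX framework concrete, here is a minimal, purely illustrative sketch of the triage pattern just described: fully automatable call types go straight to a virtual agent, and everything else goes to a human agent backed by AI nudges assembled from fragmented systems. The intent names, data fields, and routing rules are hypothetical, not EXL's actual implementation.

```python
# Hypothetical sketch of AI-infused call triage: route each incoming call
# either to a fully automated virtual agent or to a human agent who receives
# AI "nudges" built from data scattered across fragmented systems.
from dataclasses import dataclass

# Call types assumed simple enough for end-to-end AI handling (illustrative).
AUTOMATABLE_INTENTS = {"prior_authorization_status", "coverage_lookup"}

@dataclass
class Call:
    intent: str  # classified by an upstream intent model in a real system

def route_call(call: Call) -> str:
    """Decide whether AI alone can handle the call or a human is needed."""
    if call.intent in AUTOMATABLE_INTENTS:
        return "virtual_agent"        # fully AI-handled, no human interface
    return "human_with_agent_assist"  # human agent, AI listens and nudges

def agent_assist_nudges(call: Call, member_record: dict) -> list[str]:
    """Pull fragmented-system data together into live nudges for the agent."""
    nudges = [f"Member plan: {member_record.get('plan', 'unknown')}"]
    if member_record.get("open_claims"):
        nudges.append(f"{len(member_record['open_claims'])} open claim(s) on file")
    return nudges
```

In this sketch, a prior-authorization status call routes to the virtual agent, while a complex call keeps its human agent and the assist function surfaces plan and claim context live, the "nudges" the talk describes.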
Customers demand fast, transparent, and empathetic service, while claims professionals wrestle with complexity, bottlenecks, and outdated processes. EXL Claims Assist.ai reimagines the claims journey, bringing speed, personalization, and ease of working for both customers and claims professionals. With EXL Claims Assist.ai, customers and brokers can report claims anytime, anywhere, with real-time contextual guidance, enabling ease and completeness of loss intake, with the option to get support from humans. EXL Claims Assist.ai continues to provide real-time assistance to claims professionals and real-time updates to customers throughout the journey. With EXL Claims Assist.ai, claims handlers benefit from GenAI-powered features, which reduce their workload and allow them to focus on what really matters.
For example, get a quick summary of the claim from multiple sources of structured and unstructured information; gain insights to support fraud and claim payout decisions; reduce effort through automation of routine administrative tasks; and receive AI-guided next-best actions to progress the claim along the most optimal resolution path. The EXL Claims Assist.ai solution continuously monitors to detect any deviation from the planned path and provides recommended actions to effectively progress and resolve the claim. Claims handlers can also use the AI assist feature at any time to ask questions and get contextual responses, making decision-making easier and enabling personalized service to customers. With EXL Claims Assist.ai, customers enjoy faster, more transparent experiences, and insurers benefit from reduced claim cycle times and optimized claims spend.
I was recently having a conversation with the Chief Claims Officer of a large P&C insurance carrier about how true value will be created in a claims operation. While you can look at reducing fraud, improving leakage, claims accuracy, and so on and so forth, the single biggest thing you can do to impact customer outcomes is to speed up the claims adjudication process, because what the policyholder needs is a fast decision. That is extremely difficult to do in today's world for the same set of reasons: data flows are broken, systems are fragmented, information is not available, and a lot of signals need to come in from unstructured data.
For those of you who were here last year, we demonstrated a solution called Smart Data Signals, which is our ability to use all the analytical models we've created to bring live intelligence to a claim. What we have done here is use those Smart Data Signals to create a complete end-to-end Claims Assist. What we're doing is fundamentally changing the workflow. This is not about automating certain steps or optimizing existing processes; it's about fundamentally changing the workflow. That means you look at what outcome is required in a claim, what speed is required, and what data is required at what stage for effective decision-making. Can you bring all of that data together and convert it into an action?
From a customer's viewpoint, what used to be uncertainty, a prolonged process, and an inability to predict how much time would be taken now becomes self-service, faster claims processes, and much more predictability in the operation. From a claims adjuster's perspective, something that used to be complex and manual, with a lot of time going into finding data and sometimes having to make decisions even when the data is not available, becomes much faster adjudication with AI-powered insights, and so on and so forth. These are fundamental changes from using AI in the workflow. Now you get to see productivity improvements: 30% productivity improvement, fraud detection, recovery uplift. All these are great because they show up in the claims ratios and in the P&L.
Like I said, the single biggest impact that you actually get is because now you can speed up the complete claims adjudication process and reduce cycle time by as much as 50%. That is the single most important metric in terms of customer NPS and retention, which is what most claims officers are really tasked with. This is an example of using a framework. Why are we able to do that? Because of our deep domain expertise in claims, our understanding of the data flows, our understanding of the tech ecosystems, our understanding of the analytical models and the signals that can be created at multiple parts of the process. We can totally infuse this with AI.
This was about changing the shape of operations, both from a sustainability perspective, as you saw in the example with the healthcare payer, and from a value-creation perspective, with Claims Assist. Let's switch to the next part, which is how our analytics business and foundation are helping us accelerate on AI. We all know that analytics has been a large, fast-growing part of our business and continues to be. We have about 15,000 people at EXL doing data analytics and AI. It's a pretty sizable chunk of talent, again, very deep in the domain. More importantly, it's what they do. Let me give you an example of the way you can think about it, again using insurance as a framework.
When you look at the value chain of insurance, you look at pricing, new business, policy administration, claims, customer service, and so on and so forth. Then, on the other axis, you look at the value chain for data: data strategies, building the data warehouses, moving them to the cloud, working on data analytics, creating models, and feeding those insights back into operations for policy decision-making, or infusing that intelligence into the operation. Think of this as a grid. If you look at this grid, particularly for insurance, you'll find that our capabilities are very full, which means we have done this work across the value chain of data and across the value chain of insurance pretty much fully. In fact, most insurance companies don't have that.
We have it because we've done it fully. Now you look at that rich capability set we have and see what is it that we can actually do with that to accelerate our AI capabilities. What's the new demand which is actually coming up? Now we're looking at AI-infused workflows. Rohit spoke about it. This is no longer about looking at static analytical capabilities, but how you can actually take that and put that into the workflow. What you need are more adaptive and self-learning techniques. Now we're looking at data which supports the workflow. You need to be comfortable with multimodal and high-velocity data. Why multimodal? Because you have multiple channels of data coming through. You have multiple systems from which the data has to come. The data has to be of different kinds.
It could be structured or unstructured: documents, images, call recordings, et cetera. Multimodal. The ability to manage all of that data, and to manage it with velocity so that it is available at the point of decision-making, because that is how you will infuse it into the workflow, requires a lot of capability. The third is scope expansion: this is no longer about doing standalone work; it is about orchestrating the work so that it becomes part of the workflow. Finally, governance. We all know about the risks associated with AI making decisions and with AI interfacing with your end customer; regulation comes in, hallucination issues, and so on and so forth. You need a robust governance framework.
When you look at all of these demand vectors and the strong capabilities EXL has in its analytics business, it gives us a front-runner advantage in meeting these requirements. We have a large set of professionals, and we have significantly expanded our generative AI talent. When you look at the intellectual property we have created, let me talk about two pieces to give you specific examples. Because we've been doing analytics work for so many years at scale, we already have a platform for using multimodal data. We have the ability to combine all of the multiple sources and types of data, and to use that at the spot where decision-making is required.
That is what the analytics business has created over the last many years, and it is something we can bring to the table. A very important one: knowledge graphs. Again, take the claims example. When a claim is being processed, you have a policyholder and a policy document, but you also need to connect those with the provider of the service, the profile of the customer, the prior claims history, the policy documents, and the annexes in those documents. The ability to make those connections, the web, so that all of that data is connected, is something that comes with the experience of having done this work for multiple years. That is the kind of IP we're bringing to the table to accelerate AI. And finally, success.
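As a purely illustrative aside, the knowledge-graph idea described here can be sketched as a small graph in which every entity linked to a claim, the policyholder, the provider, the policy, prior claims, is reachable at decision time. The entity names and the toy traversal below are hypothetical, a minimal sketch rather than EXL's actual IP.

```python
# Minimal sketch of a claims knowledge graph: undirected links between
# entities, plus a breadth-first traversal that gathers everything
# connected to a given entity at the point of decision-making.
from collections import defaultdict, deque

class KnowledgeGraph:
    def __init__(self):
        self.edges = defaultdict(set)

    def link(self, a: str, b: str) -> None:
        """Connect two entities in both directions."""
        self.edges[a].add(b)
        self.edges[b].add(a)

    def context_for(self, entity: str) -> set[str]:
        """Everything reachable from an entity, via BFS over the web of links."""
        seen, queue = {entity}, deque([entity])
        while queue:
            node = queue.popleft()
            for neighbor in self.edges[node]:
                if neighbor not in seen:
                    seen.add(neighbor)
                    queue.append(neighbor)
        return seen - {entity}

# Hypothetical claim and its connected entities.
kg = KnowledgeGraph()
kg.link("claim:123", "policyholder:jane")
kg.link("claim:123", "provider:acme_clinic")
kg.link("policyholder:jane", "policy:P-99")
kg.link("policyholder:jane", "claim:077")  # prior claims history
```

With this toy graph, asking for the context of `claim:123` surfaces the policyholder, the provider, the policy document, and the prior claim in one traversal, which is the connectedness the talk attributes to knowledge graphs.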
We are building this capability for EXL, but many of our clients who are struggling to scale up their AI capabilities are now working with us. We have over 1,000 people at EXL working in dedicated centers of excellence for AI for our clients. Let me give you an example: a large APAC insurer. About two years ago, they started a journey to build capability in AI, but they did not have the methodologies, the frameworks, the IP, or the talent to do it. In the last year in particular, we have scaled up an AI team focused on this client, infusing AI into the workflow across the value chain of insurance, whether it is operations, claims management, or customer service.
What we are doing is infusing AI intelligence into their operations. It's a combination of two things. One, it is EXL proprietary IP, for example, Smart Audit and Claims Assist, which you just saw. It's our IP that we're embedding into their operations, which is great because once you embed your IP into a client's operations, you get an annuity revenue stream along with it. Two, we are also building specific solutions for them, built to their particular requirements. This is creating a totally new demand vector for us: creating these large centers of excellence for AI for our clients. Now let's get to the engine, which to me is perhaps the most important thing.
How do these things come together and help us create these solutions and services? Given that we've been talking about claims, let's stick with claims and give you an example. If you look at our claims capabilities, you saw in the earlier presentation, we have 25,000 people in EXL who work in insurance. About 10,000 people work on claims. What do they do? We have a large operational footprint. These are people who are working on claims across product lines, personal lines, commercial lines, workers' compensation, general liability, across different market segments, small markets, large markets, commercial, personal, so on and so forth, across different product lines, auto, home, et cetera, et cetera, and both for P&C as well as for life and annuities.
We have a very strong understanding of operations, the underlying technology ecosystem, the platforms, the interactions, the nuances, and so on and so forth. We also have a large analytics team that has been doing claims analytics for the last 20 years. These are people who've been working on fraud analytics, customer analytics, leakage analytics, NPS, catastrophe modeling, and so on and so forth. They have built a tremendous repository of knowledge, not only in the minds of people but in IP they have actually built over the years. Now let's talk about a specific business area within claims that is a very small part of our business, probably 2%-3% of our overall claims business, just to show you the power of this: our medical record summarization business.
This is where we take medical records and summarize them. It's a 10-year business for us, and it's been evolving. What do we have there? By the way, we run it off our own technology platform. We get medical records, do the summarization using certain technologies, create those summaries, and feed them back to the claims adjudicator for decision-making. As a result, we have strong expertise: a team that has been doing medical record summarization for over 10 years, 80 million medical records and their summaries, we know what good looks like, and 300 million data points on those medical summaries. So we have three things: claims operations, claims analytics, and a specific area, medical record summarization.
Again, about 10,000 people in total work in claims. This combination of access to data, deep domain expertise, understanding of the analytical models, and a repository of analytical models is helping us create new solutions at speed and scale. Let me give you three examples of solutions. The first one is an LLM. Last year, we launched our first LLM, the insurance LLM. How did we create it? We have a deep understanding of claims operations. We understand the tech ecosystems. More importantly, in this case, we had access to medical record data: we had the medical records, we had the summaries, and we had the subject matter experts who were able to help us fine-tune the model because they had the expertise of what good looks like.
We were able to use this data and a generic model to create our first claims LLM, which started with medical record summarization. As we built that out, it became an engine, a new solution offering. First, we can give this LLM to our clients and say, use it, pay per use. Second, we tell them, you want to build your own LLM? We can build it for you, because now we have the reference architecture; we know how to do it. Third, if you allow us to do your claims processing, we will use our LLM, and we'll be far more efficient and productive. That is the first category of solution we created, the LLM. Now we are creating an underwriting LLM, an F&A LLM, a CX LLM, and so on and so forth.
The genesis of that was the combination of three things: a large operational footprint, analytical capabilities across the value chain of claims, and a specific area that had data. Let's look at the second one, Agentic AI. Agentic AI is basically moving from decisions to actions. Now that you have the ability to use AI for decision-making, you can take that and move to the next step: acting on what comes next in claims processing. That becomes a new solution set for us, creating Agentic AI. Last, creating industry solutions, solutions that can be translated across multiple clients. They can be point solutions. For example, using this claims LLM, one of the specific solutions we are creating is negotiation guidance, for when an arbitration is going on.
A negotiation is happening, and now we have an AI helping us steer through that process, because it uses the same LLM and tells us how the negotiation can be handled. That's a point solution. You also have end-to-end solutions like Claims Assist.ai, which I just spoke about. Think about it: this engine is very powerful. I could have taken you through the same case using an underwriting example or an F&A example. It's the combination of these three things coming together. You take your domain and data expertise and heritage, and you add the AI in the workflow framework. That becomes a very, very powerful engine for solution creation. I spoke about Claims Assist, which is an end-to-end solution.
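The "decisions to actions" idea behind Agentic AI can be illustrated with a minimal, hypothetical sketch: a decision step (standing in for a model or analytics signal) produces a label, and the agent maps it to a concrete workflow action, with a human-in-the-loop gate on high-stakes steps. All names, rules, and thresholds below are invented for illustration; this is not a description of any EXL platform.

```python
# Hedged sketch of an agentic decision-to-action loop for claims processing:
# a decision (normally produced by a model) is mapped to a concrete next
# action, with human sign-off required before money moves.
from typing import Callable

# Registry of workflow actions the agent may execute (illustrative).
ACTIONS: dict[str, Callable[[dict], str]] = {
    "approve":      lambda claim: f"payout scheduled for {claim['id']}",
    "escalate":     lambda claim: f"{claim['id']} routed to senior adjuster",
    "request_docs": lambda claim: f"document request sent for {claim['id']}",
}

def decide(claim: dict) -> str:
    """Stand-in for a model's decision; a real system would call an LLM or
    analytical model here rather than hard-coded rules."""
    if claim.get("fraud_score", 0) > 0.8:
        return "escalate"
    if claim.get("docs_complete"):
        return "approve"
    return "request_docs"

def act(claim: dict, require_human_signoff: bool = True) -> str:
    """Move from decision to action, gating approvals behind a human."""
    decision = decide(claim)
    if decision == "approve" and require_human_signoff:
        return f"awaiting human sign-off to approve {claim['id']}"
    return ACTIONS[decision](claim)
```

The design point the sketch tries to capture is that the agent does not merely recommend; it executes the next step, while governance (here, the sign-off gate) stays in the loop for consequential decisions.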
Andy is going to talk about Agentic AI, including the platform we're using to create it. Vivek is going to talk a little more about LLMs. We'll make sure we cover all three solution categories that have emerged from this engine we now have at EXL. My last slide: what's the success rate? If you look at the industry benchmarks for AI success, that number is 30%; seventy percent of implementations fail. Why do they fail? They fail because people try to deploy horizontal solutions without deep domain expertise: the context is missing, and the ability to understand where it will and won't work is not there, nor is deep understanding of the workflows, the data flows, the tech ecosystem that lies under them, the ability to integrate into the workflow, or even the ability to drive change management.
These are the reasons these things fail. With what EXL brings to the table, this combination of domain strength, data capabilities, and now AI infusion, we've been able to deliver a 90% success rate. This is the reason our clients are coming to us. Our clients are telling us, you guys know how to make this a success. We are doing it across the value chain of solutions: standalone AI solutions, like the Exelia virtual agent deployed in a healthcare payer operation; reimagined end-to-end workflows, like the Claims Assist and underwriting assist we spoke about, which are totally changing the way those operations work; and new native solutions like Paymentor, which is a digital collections agent.
Across the value chain, we are able to do it with high success rate and at speed. Overall, we think that this combination of domain plus data plus AI and this thinking of AI in the workflow is helping us create new solutions at speed, at scale, and take them to market successfully. Now, if you just look forward, what are the foundational elements for this? The two foundational elements are the tech for AI and the data for AI. I'm going to hand it over to Andy Logani to talk about those two elements. Thank you.
Thanks, Vikas. All right. Good morning. Thank you very much for coming and spending time with us. My name is Andy Logani, and I'm also a lifer at EXL; I've been here for 24 years. Currently, I'm responsible for data engineering and the creation of IP. I'll try not to repeat what you've heard. I want to focus on three things. First, as Vikas alluded to, data for AI and AI for data. There are nuances to both, so I'll spend some time on that. Second, we will demonstrate EXLerate.ai. The important thing there: Rohit mentioned two words, orchestration and speed. Hopefully those will come to life with that demonstration. Third, you've heard a lot about the intellectual property we are creating. I'm going to give you a few examples of both horizontal and vertical IP we are creating: what's the source of differentiation, and why do we think we can sustain it? That's the focus for the next 15 minutes. Firstly, Vikas talked about why enterprise AI deployments fail.
Still, the number one priority and the number one reason is the data. If you don't have good data, there is no good AI. We are helping clients in three strategic ways. But first, two reminders of context. One, as Rohit alluded to, over the last 10-15 years, the acquisitions of Clairvoyant, ITI, DataSource, and RPM have allowed us to build complete end-to-end data capabilities. That has become a position of strength. Two, the infusion of domain and proprietary data is giving us more differentiation. I wanted to set that context as to why we've been able to create these market-leading end-to-end capabilities. The first strategic way we are helping clients is migrating data to newer platforms. You heard about these partnerships: Snowflake, Databricks, Microsoft Fabric.
We are helping clients move data to modern platforms, but we also realize that it's not just the modern platform, and it's not just the same old textual data. It's visual, it's audio, it's synthetic data. New types of data have emerged and, quite frankly, new ways of handling them. That's one area that has been a big advantage for us. This is where the stats become very interesting: 125 programs delivered, 80-plus clients. Rohit talked about the 500-plus clients we have; think about the opportunity just this piece has for us. What's the differentiation? I will double-click on Code Harbor and walk you through what we've created for migrating data and migrating programming languages, which we don't believe any competitor offers. Number two, this is actually very interesting.
Everybody talks about data for AI, but AI is also completely reimagining how data gets managed. For example, think about extraction, ingestion, quality, master data management: agentic AI is going to reimagine how all of this gets handled. What we've been able to do, and I strongly believe this is really resonating with our clients, is create new offerings: annotations, lineage, anonymization. In the AI world, you will need all of these things. Think about a world where a lot of new purpose-built models will emerge; you will need these capabilities. Think about understanding lineage across different sources: if you don't, you don't truly understand where the different data sources are and what their true value is. That's really helping us bring these new offerings.
Secondly, I'll also walk you through another piece of intellectual property we've created in this space, called Xtracto. I'll double-click on that. This is another thing which is really helping our speed to market and is a true source of differentiation with customers. Thirdly, you're going to migrate, you're going to manage, and you're going to run data. This is where you'll have long-term relationships. Again, look at the opportunity set here: 40-50 clients of that large cohort, tremendous opportunity for growth. We wanted to call out why data standalone is so exciting for us and what we've been doing in this space. Let me get to IP. Now, you've heard the word accelerator. You've heard the word agent. You've heard the word application or solution. You've heard the word domain LLM. Right? There are lots of words floating around.
Let me just try to give you a little bit of context as to what these mean for us, because they can be open to interpretation. Accelerators, simply put, are code packages, tools, programs, blueprints. Every time I'm deploying a solution, I want to use these to bring speed. If I built a prompt, I want to put it in a prompt library. If I have anonymized data for healthcare records, I want to store that so I can reuse it every single time and bring speed. 100-plus have been created, and almost 100-120 customers are using accelerators. Like any tech-forward company, we've made these all API-enabled. Secondly, domain AI agents. Why be in the business of creating domain AI agents when you have a Salesforce, a ServiceNow? Like Rohit mentioned earlier, we're going to stay agnostic and partner, but there are places where we are different.
What we did, and Rohit mentioned the 2,000 workflows, is go through the enterprise, and we saw there are 16 things, 16 types of workflows, which we actually do repeatedly. We have deep proficiency. We understand the data sources, and we understand the tech integrations. For those, it is best to configure domain AI agents that actually perform toward the goal, either with a human in the loop or completely autonomously. This has been built with our own UI, but it is also in partnership and collaboration with Salesforce, ServiceNow, and other solutions, because if a client chooses a particular platform, we just want to make sure that we are there with them to meet the promise. Thirdly, domain LLMs. A lot has been talked about them. Again, pre-trained models are evolving really, really fast. There is reasoning. The math is working better. You are seeing distillations.
I'm not going to get into all the technicalities, but these are evolving fast. Why do we choose to play? Complex workflows require high-precision AI. You will not compromise on accuracy for a decision on an underwriting, a claim, or somebody's medical condition. For those, if we can bring private data and make the accuracy higher, the speed better, and the cost lower, sure, let's do it. If that differentiation is not there, we're just going to use a pre-trained model. Right? Again, the orchestrator. We are always thinking, how am I going to orchestrate for one thing: value and impact? If that's not there, let's not recreate stuff that's already out there. Lastly, this is what you heard from both Anita as well as Vikas. These solutions are our data, proprietary algorithms, and deep domain knowledge stitched together. They are consumed on outcome-based or SaaS-based models.
Vivek will walk you through an example of payment integrity. Last year, we walked you through PayMentor. There is lending. So we've created many such solutions which are resonating in the marketplace and, like I said, can be consumed on transaction-based or outcome-based pricing models. I was just going to pick these two examples in the interest of time, because other things have already been covered. Xtracto. As you can see, we are trying to focus on both horizontal IP as well as vertical. Vertical is always when domain gets ingested. Horizontal is building blocks that I can keep reusing and that cut across multiple use cases. By the way, 80% of the data in regulated industries is unstructured or multimodal. There is no way you can solve an AI problem if you don't harness the data. That's exactly what Xtracto does.
It brings the data out, allows you to query it, allows you to understand the data very, very fast. You consume it from our platform. In four to six weeks, you can get that information and query that data. Twenty-five customers are already using it. What was the source of IP? We went fast, got three patents, and used some cutting-edge techniques to be the first. Sometimes, as you all know, speed just gives you that flywheel and the customer trust that we have. We are able to deploy it faster. Code Harbor is another absolutely fascinating thing going on for us at the moment. We are working with fifteen customers. Code migration and data migration are a massive industry problem, as we all know. Very costly, very time-consuming. What we did is create a machine learning-based architecture, a complete agentic platform. For example, I can do chunking. I can create synthetic data.
I can do iterative debugging. I can bring all that code over 50% faster, 50% cheaper. This is playing out: three customers are already live, and twelve others are in different phases. This is another solution which is resonating really well. Both are examples of stuff that we built because of our expertise, got patents for very quickly, innovated at speed, and then applied a domain lens and multiplied. That was just a quick look at two pieces of IP. Accelerators, solutions, domain LLMs, and specific agents: remember these words, because I am going to bring them to life when we get to the demonstration. I do not want to get into too much complexity. If you go into ChatGPT or Google, ten different architectures will come up. I am just going to highlight two important things. Look, clients still have to make choices here.
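The migrate-test-debug loop just described can be sketched at a toy level. Everything below is illustrative: `translate` is a stub standing in for an LLM call, and the legacy source and tests are invented. Code Harbor's actual architecture is not public; this only shows the feedback-loop idea of translating a chunk, running its tests, and feeding failures back until the tests pass.

```python
# Hypothetical sketch of an iterative code-migration loop: translate a
# chunk of legacy code, run tests against the candidate, and feed the
# failure message back to the translator until the tests go green.

def translate(legacy_code: str, feedback: str = "") -> str:
    """Stub translator: a real system would prompt an LLM here."""
    # Toy behavior: the first attempt misses integer-division semantics;
    # once the test feedback mentions it, the "fix" is applied.
    if "use //" in feedback:
        return "def average(xs):\n    return sum(xs) // len(xs)"
    return "def average(xs):\n    return sum(xs) / len(xs)"

def run_tests(code: str) -> str:
    """Execute the candidate and return an error string ('' means pass)."""
    ns = {}
    exec(code, ns)
    try:
        assert ns["average"]([1, 2, 4]) == 2, "expected floor division: use //"
        return ""
    except AssertionError as e:
        return str(e)

def migrate(legacy_code: str, max_rounds: int = 3) -> str:
    feedback = ""
    for _ in range(max_rounds):
        candidate = translate(legacy_code, feedback)
        feedback = run_tests(candidate)
        if not feedback:          # all tests green: migration done
            return candidate
    raise RuntimeError("migration failed: " + feedback)

migrated = migrate("AVERAGE: PROC ...")  # legacy source elided
print("tests pass" if run_tests(migrated) == "" else "still failing")
```

The first round fails the behavioral test, the error message flows back as feedback, and the second round converges; that closed loop is what makes agentic migration faster than one-shot translation.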
The role that EXL plays is to allow clients to orchestrate this at speed, combine the best tech, whether it belongs to us or the client or a third party, the best foundational models or best fine-tuned models, and then actually leverage that, stitch it together to deliver impact. Right? That is what we will be doing. In several cases, obviously, you still have a human in the loop just to make sure that you have the right confidence. The platform really allows us to stitch it together, bring it all together, and deliver value. What I will do now is a quick demonstration of EXLerate.ai to make it real. What you are going to see in this platform is solutions, accelerators, and the building of agents all coming together. Vikas alluded to that claims example.
I'll show you an example of multiple agents coming together, the interplay of data between the agents, and decision-making, so that ultimately you are rendering the right decision. You're going to see that. It is a solution architect persona, or a junior engineer persona, that you're going to see in what we are demonstrating, because Vivek later will show you some customer examples to bring it to life. Okay. We'll just get started with the demo. At the home page, what you're going to essentially see is different agents, job queues, and the different models that are available. This is your landing page after you've signed in. Once you've done that, let's pick a model. Like I mentioned, all pre-trained models and EXL fine-tuned models are available here.
The important thing sometimes for a developer is, what is the model doing? What's the summarization accuracy? What's the Q&A accuracy? What's the comparison? Cost, speed, accuracy. We give all that information, input-output tokens, so that the solution architect knows exactly which model to use for which need. This is where the R&D team keeps infusing this data on a real-time basis. That's the model. From the model, let's go to the agents. Firstly, all proprietary domain EXL agents are available here. Now, let's say you want to create a new agent. Let's pick Vikas's example. I'm going to pick an insurance claims FNOL agent. It's in the insurance area. Let's hit next. These are all the leading agentic frameworks: LangGraph, LangChain, AutoGen, CrewAI, whatever you want to use. Let's pick one. Next.
I'm going to attach this agent to EXL's LLM because I like the performance and what I really want to do. Remember the tools. I want to pick scraping, search, PDF parser in this case. Let me do that. All right. This is my design ready for the agent. Now, let me create a solution. I'm going to stitch these agents. I'm going to stitch the accelerators, and I'm going to actually make a multi-agent architecture. Right? When I create it, two are available. Let me do that. Okay. Do I want my own UI? No. It's available. Ignore it. I want to pick the new agent that I created and the two that exist. Do I want the data and intelligence to flow? Yes, I want. Okay. Let's create a multi-agent collaboration. Pick the three agents toward there, one I created.
For the supervisor, I want the routing rights. Go next. These are the data sources: Amazon S3, a vector database, an EXL-created knowledge graph. For all three agents, what data source do I want to attach? Next. Now, RAG. Do I need to solve for recency? Is the language model sufficient? How do I want to configure that? Do I want to chunk the data differently or use a default standard? Next. What's my prompt length? It's a claims use case; I want a maximum prompt template. In this situation, I probably want a 40-day history configuration, because that has relevance in the claims context. Now I want to pick all the accelerators. I want to observe the LLM. I want data extraction. Got it. Let me just go ahead and deploy the solution. Once I do that, this is so important: AI Ops. I want to monitor the models.
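The supervisor-with-routing-rights pattern described in the demo can be sketched framework-agnostically. This is a toy illustration, not EXLerate.ai internals: the agent names, routing rules, and shared-state mechanics are invented stand-ins for how a supervisor dispatches a claim to specialist agents while data flows between them.

```python
# Minimal sketch of a supervisor-routed multi-agent workflow for an
# insurance claim. Each agent reads and writes a shared state dict;
# the supervisor alone decides which agent runs next.

def fnol_agent(state):
    # First Notice of Loss: extract the basics from the raw report.
    state["claim"] = {"type": "auto", "severity": "minor"}
    return state

def triage_agent(state):
    # Decide whether the claim can be fast-tracked.
    state["fast_track"] = state["claim"]["severity"] == "minor"
    return state

def settlement_agent(state):
    state["decision"] = "auto-settle" if state["fast_track"] else "adjuster"
    return state

AGENTS = {"fnol": fnol_agent, "triage": triage_agent, "settle": settlement_agent}

def supervisor(state):
    """Routing rights live here: pick the next agent from current state."""
    if "claim" not in state:
        return "fnol"
    if "fast_track" not in state:
        return "triage"
    if "decision" not in state:
        return "settle"
    return None  # workflow complete

def run(report: str):
    state = {"report": report}
    while (nxt := supervisor(state)) is not None:
        state = AGENTS[nxt](state)  # data and intelligence flow via state
    return state

result = run("Rear-ended at low speed, no injuries.")
print(result["decision"])
```

In a real deployment the same shape is what frameworks like LangGraph or CrewAI provide, with LLM-backed agents, attached data sources, and human-in-the-loop checkpoints replacing these stubs.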
I want to monitor the dashboard. I want to monitor the prompts, because there will always be a better mousetrap to solve the problem in a more accurate, economical, and expeditious manner. So that's what EXLerate.ai does. We are putting all our solutions on this. We are deploying through this. It is 50% faster than if companies try to do it on their own; a fully custom development can be 70% slower. This is truly becoming a differentiator for us. With that, I'll turn it over to Vivek to share some customer examples. Thank you for your time.
Thank you, Andy. Good morning, all. Now, compared to Rohit and the last two presenters, I'm the new guy in the room. I'm going to complete 19 years at EXL this summer. I joined through that tiny little company that Rohit alluded to, Inductus. I privately believe that I, or Inductus, am the reason why EXL did not do another acquisition for the next five years. Here we are. I've played many roles at EXL in my tenure here. My longest tenure was as the head of our data and analytics business, which I held for a long period of time. In my current role, I lead our new IMU verticals: insurance, healthcare, and life sciences. Now, Rohit talked to you about the AI supercycle. What does the AI supercycle mean?
How is it that EXL is really monetizing that and helping our customers through it? My focus is going to be on how we are driving enterprise adoption of AI in the workflow. How is it that we are pulling together our domain, our data advantage, our AI toolkit, and our solutions for our customers? These are not just any solutions. What we are trying to do is tackle some of the biggest industry problems using AI and using our data, then take the solutions for those industry problems, deploy them in the workflow, and produce meaningful outcomes for our clients. As a consequence of that, we drive a tremendous amount of value for EXL. What I'm going to do throughout this presentation is show you examples.
These are all real client examples. These are instances where we are actually implementing this work for clients, or we are running it for them. What I'll do is a call out of saying, here's the value that we are producing. Here's why EXL does it best. Then here's the value that accrues to the client. Here's what accrues to EXL. You'll get a feel of how AI is getting, well, becoming real for our customers and how the value gets generated for them. Let's start off with the first example. This is one of our largest clients at this point in time. It's a very large life and annuities customer, insurance customer, that's actually migrated a large part of their policy administration work to us. In the past, this would have been a massive FTE implementation. It still is.
This is more than 1,000 FTEs that are working on this particular problem right now. What we've been able to do is leverage multiple parts of our IP, multiple parts of our solutions. We've taken the knowledge that we've accrued from the data that you just heard about, those 80 million claims records, and brought it to bear in terms of saying, what should the workflows be? What should the decision-making be at each step of the workflow? We've taken our own EXL proprietary solutions, LifePro, which is the platform that we run for policy administration, Xtracto, which you heard from Andy about, which takes out unstructured data, Exelia, which helps them kind of process the voice calls better and pushes guidance to the agents. We've put all of those together, and we've created an end-to-end policy management system for them that's powered by EXL AI.
Now, the outcomes of this for them: first of all, they've been able to make this transition. About 5 million-plus policies have now been moved over to this particular solution. And we've been able to dramatically improve agent productivity, by about 30%. That benefit accrues to the customer as a dramatically lower cost to serve. So this is just an instance of saying, look, these are not just small installations. We are running some of this stuff at scale. This is probably one of the largest installations that you would see in terms of the number of policies that are going through a particular solution. Now, throughout this talk, what I'm going to do is give you examples of some of these use cases and some of these solutions across the various industries.
As it happens, I've picked one each from healthcare, from insurance, and from banking to give you a sense of the spread of the work that we do. Let's start off with payment integrity, and I'll show you how we've been able to drive enormous value from it. Just as a backdrop, as we all know, healthcare is one of the largest components of the U.S. economy right now, almost 20% of GDP. On an annual basis, about $3.6 trillion in claims pass between payers and providers in the U.S. Approximately 5% of those claims, according to CMS, are incorrectly paid. The errors can happen for any number of reasons. They can happen because there are billing errors; this is really complex in terms of the coding.
There are billing errors by the providers. There are sometimes payment processing errors or contractual errors on the payer side. And sometimes there are bad actors; there is intentional fraud, waste, and abuse. Now, it all adds up to an enormous amount of money. At 5%, it's $180 billion each year of incorrect payments. And if you get some of these incorrect payments identified and pull them back in, then that amount goes directly to the bottom line. So for our clients, for the payers, this is an enormous, enormous lever for fixing their overall profitability equation. Now, what did EXL do? We've taken the capability that we had on payment integrity and brought it up to the next level. We've created what we are calling our smart platform, which is a cloud-based, cloud-native platform.
Today, it's taking in data from about 30,000 providers all across the U.S., covering about 155 million U.S. customers or members across these plans. That data comes onto a modern data stack, which we've implemented on Databricks. It's something that optimizes for both the way data is handled and the way data is operationalized. On top of it comes the secret sauce. Once the data is in the stack, we are running about 8,000 different algorithms. These are effectively things that allow us to find patterns within the claims data: patterns to identify overpayment, patterns to identify outliers. It's really the ability to mine it at a very, very minute level and say, can I identify things that I suspect are overpaid?
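One of the simplest pattern checks of the kind just described can be sketched in a few lines. This is purely illustrative, not one of EXL's 8,000 algorithms: it flags claims whose billed amount is a statistical outlier versus peers billing the same procedure code, which is one classic overpayment signal.

```python
# Illustrative outlier flag for claims: within each procedure code,
# flag billed amounts far above the peer mean. Real systems work on
# millions of claims with robust statistics; this is a toy version.
from collections import defaultdict
from statistics import mean, pstdev

claims = [
    {"id": 1, "code": "99213", "amount": 120.0},
    {"id": 2, "code": "99213", "amount": 115.0},
    {"id": 3, "code": "99213", "amount": 125.0},
    {"id": 4, "code": "99213", "amount": 118.0},
    {"id": 5, "code": "99213", "amount": 480.0},  # suspicious overbilling
]

def flag_outliers(claims, z_cutoff=1.5):
    # Note: with tiny peer groups a single outlier inflates the stdev,
    # so the cutoff here is deliberately modest.
    by_code = defaultdict(list)
    for c in claims:
        by_code[c["code"]].append(c)
    flagged = []
    for group in by_code.values():
        amounts = [c["amount"] for c in group]
        mu, sigma = mean(amounts), pstdev(amounts)
        if sigma == 0:
            continue
        for c in group:
            z = (c["amount"] - mu) / sigma
            if z > z_cutoff:           # one-sided: only overpayment
                flagged.append(c["id"])  # route to a human auditor
    return flagged

print(flag_outliers(claims))
```

Flagged claims would then flow to the smart auditors described next, where AI-assisted review decides whether a recovery claim is warranted.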
Now, once that's done, the data, the flags, effectively the claims that have been flagged, now go to our set of smart auditors. That is where the AI comes in. The auditors are now getting assisted by AI, which is now mining through the claims associated with these triggers, bringing out effectively the specific areas, the specific part that is actually getting the flag. That is then getting used into saying, okay, I need to start preparing the recovery claim. This is something that gets sent back to the providers. The whole process right now is optimized for speed, for precision, and for effectively reducing the amount of pushback that you get from providers while optimizing for the throughput or the benefits. We've already told you a little about this.
This system, our data platform today, is running about 3.8 billion claims that are passing through it on an annual basis. We've become incredibly precise in the way we identify these patterns. Only about 1% of these claims are actually getting identified as things that are fraudulent or things that require recoveries. The system's producing about $2.2 billion of annual savings for our customers. Now, you can imagine what that does to the profitability of one of our clients because this is now going directly into their bottom line. It's kind of boosting the overall profitability. There are other players in the space, for sure. I mean, there's players like Cotiviti. There's players like Optum. Our customers have internal teams that do this. There's a reason why they choose us.
It's because we are actually outperforming the entire market in terms of how effective we are in actually identifying those claims and creating those recoveries. The metric we look at is for every billion dollars of claims that passes through our system, we are at this point in time identifying and recovering $6.2 million in terms of recoveries. It's about 40% higher than the industry average. This is basically the reason why our clients choose us because what we are doing for them is giving them that precision while optimizing and actually really getting a huge bang for the buck in terms of the recoveries. Now, we've been classified, as you can imagine, as a leader in the space. We've got the star rating. We are in the top right quadrant, as you would imagine.
But really, the actual proof of the pudding is that our revenue from payment integrity has tripled in the last four years. That just shows you how much payers, both large as well as small, are trusting us and actually pushing their volume with us. I'm going to just walk you through a tiny aspect of that smart data platform. Now, it's not possible for me to show you the 8,000 algorithms running. We'll be here for a very long time if that happens. Let me tell you about the one area that I'm going to show you. Once the algorithms have run and we've flagged something, the biggest impediment to payment integrity is what's called provider abrasion. It's basically a hospital system or a doctor saying, you're just nickel-and-diming me. You're pushing back on claims that are incorrectly flagged.
It's causing a long delay in the system. The key to this is to actually make sure that you're actually reducing your incorrect flags, that you're reducing false positives. That's where we are using AI. What we're doing is we're taking an auditor who's basically looking at all of these flags and churning through them. We are optimizing that whole process by using AI. I'm going to show you a little example of that. Just starting off, what you're going to see is a little bit of a vignette of the system. What the system does is it pulls together the medical claims across all of these different providers. Those claims come into our central database. We standardize the data. We transform it. We enrich it by adding other data sources.
That is when we start running the algorithms on it to start effectively looking for outliers. Once those outliers have been identified, they then go into our audit system. In the audit system, you are supposed to actually then check for whether something is real or not. I am going to fast forward and show you a little bit of how the audit works. In the audit, once something has been flagged, we are now reading in a medical form. These can be between 100 pages to sometimes 2,000 pages of dense, dense unstructured data, reports, lab reports, tests, what have you. What we have done here is shown you an instance where what we are trying to do is read that entire data.
The auditor is looking for symptoms, is looking for patterns, and is basically getting an AI-generated summary of the entire case, saying exactly what was the mental status, what were the symptoms, what's the response. We are also getting effectively a summary for saying what was the patient use case. This is something that then starts getting used. The auditor has the ability of asking questions and is effectively now getting responses and saying what was a clinical workup done, what was flagged, what was not. You have the ability of asking for effectively certain conditions and identifying whether there were proof points for that or not. In this particular case, what you've seen is that the AI has gone through the entire form and basically presented to the auditor, here were the conditions that were flagged correctly. Here were the ones that were not.
That is basically the judgment call that they make. Now, that tool that I have just shown you has done wonders for us. Because when we put it into our workflow and deploy it correctly, it has taken the productivity of these auditors, who were doing between eight and eight and a half audits a day, up to 17 audits a day in certain cases. That shows you the power of this particular toolkit, because it can now push through that kind of volume at a still elevated level of accuracy. The accuracy dividend for our client is creating a huge impact. Because of what you have seen, it is not just about the productivity. The accuracy of this has taken false positives down by 35%.
Now they have a lot more confidence in the ability to say, I can pump volume at EXL. They can churn through it very productively without sacrificing accuracy. I'll be able to manage my throughput without threatening the quality of my relationships with the providers. That's part of the secret sauce of why this business is getting the revenue boost that it is. All right. Let's move on to another example. Now, throughout our presentation today, you've heard about the insurance LLM. It goes without saying we are incredibly proud of what we've built. Because when the whole AI boom started, the first thing we said was we'll do a lot of things, but we won't build LLMs. Here we are, a year and a half later, and we've got seven of those already.
With this one, I wanted to show you how it actually works and what value it provides to our customers. Let's get into it. The overall global insurance market is massive: about $7 trillion in size. About 5% of that market is spent on the administration of claims and underwriting. That's one part of what we impact. We also impact a little bit of the claims portion itself, the rest of the $7 trillion. Most of the work that we do is on the administration side. This is how the LLM really performs against that. What our clients are asking us today is, can you show me how I can start getting unstructured data into my claims processing?
What is it that I can do to take the multimodal data that I get, be it accident photographs, be it medical forms, be it reports? How can I start ingesting the insights from that into my decision-making? That is where our LLM comes in. Now, what you've got here is a chart; let me explain it a little bit. What we did was test our LLM against the market-leading foundational LLMs that are available to all of us and look at how well we compare for this particular use case. Our use case was very simple. It was basically taking these claims forms that we are processing. You've heard a lot about them, the 80 million claims that we have within our system.
Looking at those claims forms, getting an expert to summarize a claim form for a medical injury, and then getting a doctor to validate it and say, okay, this injury is correct; I validate that the extraction was done correctly. We created an entire repository of claims summaries done by these experts and validated by doctors. We put our LLM to the test to say, okay, how does the LLM do in terms of extracting that same information? The EXL LLM got to about an 81% accuracy level. That is important because at that point, we were outperforming an SME. You are taking someone who has got years of experience, and you are saying that the LLM does a better job of extracting, in a very concise, very precise manner, the important details about a particular accident.
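The kind of scoring behind a golden-truth comparison like this can be sketched with a simple token-level F1 metric, where an overly verbose answer is penalized on precision and a missing detail on recall. The metric, example sentences, and scores below are invented for illustration; EXL's actual benchmark methodology is not public.

```python
# Toy accuracy scoring against an expert-written "golden truth":
# token-level F1 between a model's extraction and the reference.

def token_f1(prediction: str, truth: str) -> float:
    pred = prediction.lower().split()
    gold = truth.lower().split()
    common, remaining = 0, gold.copy()
    for tok in pred:
        if tok in remaining:      # count each gold token at most once
            remaining.remove(tok)
            common += 1
    if common == 0:
        return 0.0
    precision = common / len(pred)
    recall = common / len(gold)
    return 2 * precision * recall / (precision + recall)

golden = "hit from behind and t-boned another car"
concise = "hit from behind t-boned another car"       # precise summary
verbose = ("the vehicle in question was involved in an incident in which "
           "it was struck and subsequently collided with another car")

print(round(token_f1(concise, golden), 2))  # high overlap
print(round(token_f1(verbose, golden), 2))  # verbosity hurts precision
```

This is why, in the side-by-side demo shown later, the concise EXL-style response scores well while the verbose generic response, despite being longer, misses or buries the key facts.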
The interesting thing, though, was how the other foundational LLMs did. Take a look at the comparison. Some of them we've outperformed by a 2:1 ratio. It was very important for us to have this, then start looking at it and deploying it into the workflow. This was one of the big milestones for us. We've replicated this across several different use cases, but this was the original one. As for the outcomes that that differentiation, the 81% versus the 50% or 60%, drives: first of all, once you deploy this in the system, you have the ability to reduce the claim settlement cycle time by about 30%. You heard Vikas talk about how important that cycle time is for our clients. This becomes a very big differentiator for them.
For our own employees, productivity goes up by about 30%. This is the ability to say, I can now have the same person take on a lot more in terms of claims volume and process it. For the entire system, the volume that it can process goes up dramatically. This is something that we've already seen in our medical summarization use case. Finally, because of the accuracy of what you're doing and the insights that you're getting from the unstructured data, you can take that indemnity ratio, the loss ratio, and reduce it by about 3-5 basis points. That does not seem like much, but when you multiply it by $7 trillion, that's a huge value. For EXL, what this has done is create a brand new revenue stream. You heard from Vikas about the negotiation guidance.
This is something we are now taking to customers and saying, our LLM is going to create the negotiation guidance, and this is the fee structure associated with it. The productivity leads to a much higher margin, particularly because some of the work here is being done on a transaction-based price. Overall, it's creating a huge amount of customer loyalty and stickiness for us. Let me now move on to showing you an example of how this works. What we've done here is shown EXL's performance on the left-hand side. We've taken the same claims and are passing them through the insurance LLM. We put in an accident report, so unstructured data. In this particular case, the unstructured data described a T-bone accident.
First of all, like I told you earlier, we got an expert to put together the entire use case and extract from it what we are calling the golden truth. On the left-hand side, you'll see the insurance LLM; on the right-hand side, one of the best-in-class generic LLMs. This is the response that was expected: hit another car from behind, T-boned another car, head-on collision. As you can imagine, that drives the claims associated with it. Look at the response on the right and the verbosity of it. Look at the precision of the EXL response. Straightforward: it identifies that the car was hit from behind and identifies the fact that it was T-boned. In fact, the leading LLM missed some of those key details. It didn't really identify the full thing correctly. Now, the same test with another LLM.
In this particular case, we're looking for a summary of the entire accident. Now, the precise golden-truth response that we expected was that the passenger was restrained, there was minor damage, and there was no loss of consciousness. The claims were supposed to be low. Now, take a look at the response. Look at everything on the right-hand side. In all of that, it does not pick up the fact that there was a restraint involved, and it talks about the windshield wiper and so on. EXL's response is incredibly precise. Now, what we've done is we've taken this particular LLM and embedded it into the workflow for our customers. You now have the ability to get that precision, that conciseness, run on every single claim. The output of it is either reviewed by a human or processed straight through into the system.
That is the power of what we do. Let me end with a third example. Let me now talk to you about one of the other things we have talked about, which is how EXL is taking our capabilities and using it to modernize the customer experience. This is actually a brand new space for us. It is an area where we are actually creating pretty much a new market. The CX transformation spend overall is about $130 billion. This is effectively companies saying, what I want to do is create an omnichannel experience across all of the interactions that I have with my customers and bring it all together. I want to be able to use my data and create hyper-personalized experiences for my customers to move to a segment of one and move to a modern data and a technology stack that reduces the cost of ongoing data operations.
What we've done is created our own modern CX stack. This is a transformation program that we are running at multiple customers at this point. First of all, what we do is bring together the interactions and the data for customers across all different channels. The ability to actually treat a customer who's coming to you on a web chat transfers to a call, and you can do that in between. Here's the secret sauce. What we built is some very specialized EXL Agentic AI systems or AI agents. In this particular use case for the wealth manager, we've created an advisor agent that's basically talking to you about what you can or cannot do. There's a redemption-specific agent, which basically handles redemption queries. There are specialized agents for tax advisory, specialized agents that process the payment.
The workflow between those, all of those agents, is actually getting governed by the data that we've been able to kind of bring together. All of that work that we've done for years with banks, the understanding of how do you create a next best action algorithm, how do you actually extract the data from a customer segment, what do you do with segment one versus two versus three, all of those insights are encapsulated in our intelligence layer. They are now determining what happens in the workflow. Should you be going to the tax agent first? Should you be going to the payment agent first? It's now happening in real time because the real-time data is going through that intelligence layer. That's what's kind of taking care of the workflow.
The other half of the story for this customer is this kind of inversion of the pyramid. This was a very, very complex client because they're doing some wealth management conversations, very, very high value and very sophisticated. They were at a base where most of their transactions were human-led, 90-odd percent. The 10% that do self-service go straight through to the web and help themselves. The pyramid we are shifting them to actually inverts that. A majority of the transactions, 60% now, are going to be effectively resolved by the customers themselves using our AI and using data. Even the 40% that remain are actually still going to be getting the AI insights. The AI insight is now going to a human. It is completely changing the picture around for them.
As you can imagine, it's going to create a pretty large cost savings, about 50-odd percent for them. More importantly, it takes that CSAT up dramatically. It takes their agent experience up dramatically. Importantly for EXL, it's a brand new space. It's a new revenue stream. It makes us a player in the CX modernization and transformation space. We also have the ability of actually reducing the overall cost for doing some of these transactions. In summary, I wanted to talk to you about what we've shown you today. We've shown you multiple examples. In all of those examples, it was our work of AI in the workflow. Now, how do you kind of push this forward?
The more AI in the workflow implementations that we do, the more access we start getting to data, the more we have data usage rights, the more proprietary data that we start creating in terms of these workflow decisions. The more data you have, the better quality data that you have, it's going to directly go into improving the accuracy of our AI solutions. All those things that you saw in terms of the uplift, the things that you saw on the payment integrity side, we can keep building up that advantage and taking the accuracy up a few notches. The greater accuracy that we have, if it's in the workflow, it'll keep resulting in better outcomes for our clients.
Effectively, because better outcomes for clients will keep happening, it's going to translate into better outcomes for us, higher market share, more revenue, higher margins. What we believe is that AI in the workflow is actually going to be creating a start of a virtuous cycle for us. As this enterprise adoption keeps moving up, this cycle will start to keep playing out. It'll keep kind of increasing the advantage that we have. That's part of the reason why we're so excited. Now, I'm going to have Maurizio come in and talk to you a little bit about how all of this now translates into what we think of the future in terms of our performance. Maurizio.
All right. Good morning, everyone. Thank you for coming this morning, spending time with us. You've heard a lot about our AI in the workflow and how we're really pushing forward there and a lot about our data and AI strategy. I'm going to talk to you a little bit about how we performed over the last four years in terms of our financial performance, but then also talk to you a bit about our confidence in our financial performance now going forward beyond 2025. When you look at our data and AI pivot, it's really driving our growth, our consistent growth going forward, both in 2025. It'll continue to drive our growth beyond 2025. It's really leading us to industry-leading performance. When you compare us to our competitors, we are really performing extremely well with our peer set at the end of the day. We're very, very proud of that.
In doing so, we've also created a very strong balance sheet. We have a very well-defined capital allocation strategy that we're executing on. I'll talk a bit about that also. If you look at our performance over the last four years, we have performed very well. We pivoted to our data-led strategy back in 2020. Now we've pivoted to our data and AI strategy going forward. It's really proven to really push our top-line growth very well. If you look at our revenue growth over the last four years, we've almost doubled revenues during that period with some very small acquisitions that we've done over that four-year period. We've almost doubled our revenues in terms of dollars over that four-year period. We've grown on a CAGR basis 18% and grown 16% organically over that period also.
We've performed extremely well in the mid-teens in terms of our top-line growth. In doing so, we have driven our percentage of data and AI piece of our business to 53%, up from 38%, which really shows that we're really pushing more value-add services selling to our client base that should drive, at the end of the day, some higher profitability. You're seeing that over that four-year period. We've driven our adjusted operating profit margins up 350 basis points during that period, from 15.9% to 19.4% in the most recent calendar year. Very steady, consistent execution over the last four years. If you look at just our top-line growth, and I talked a little bit about our data and AI piece of our business, it's really grown very well. On a CAGR basis, it's grown 28% over that four-year period.
We've gone from $366 million to almost $1 billion during that period, virtually tripling that revenue number over that period. What you've also seen, though, is healthy growth on a digital operations side of our business also, which is so closely aligned to that data and AI piece of our business. That data and AI piece of our business will grow with new sales because almost every opportunity has some component of data and AI in it. We're also embedding more AI into our current operations. You see some of the revenue from our digital operations business become part of our data and AI piece of our business and convert over the period and then also going forward. What this has all resulted in is just really driving profitability. You've seen our gross margins grow 270 basis points during that four-year period.
That has really helped grow our operating margins, our adjusted operating margins, by 350 basis points during that four-year period. All of that flows into growing adjusted EPS. If you look at adjusted EPS, we have more than doubled adjusted EPS and grown that at 24% on a CAGR basis over that period. We really focus on top-line growth. We also focus on growing EPS faster than revenues. When you compare our growth in EPS of 24% overall, that compares to 18% on the top-line growth. We have healthily grown EPS faster than revenues during that four-year period. What has really helped us and really sustains our business is our resilient business model, which really helps us in periods of uncertainty. We are really able to continue to grow year after year, no matter the economic environment that we are in.
If you look at our overall business, more than 75% of our revenue is very annuity-like business, which is really recurring business, which really gives us a very steady, stable revenue base to really grow off of going forward. We have much less of an uncertain or volatile top line compared to some of our peers. We really have been able to build a business that's really steady annuity revenue growth year after year. What we've also been able to do is really diversify our client base. More than half our revenue is from the Fortune 1000 companies overall.
Being able to really diversify that client base, really have a significant portion of our revenue being annuity-like revenue, really gives us the stability no matter what the uncertainty is in the economic environment and continue to be able to drive our AI in the workflow going forward and really drive top-line growth off that base. If you look at how we've done in terms of our industry verticals and also our economic expansion, we've done very well over that four-year period. We've grown each one of our industry verticals during that period between 16% and 21% overall, really showing that diversified growth in each one of our industry verticals. If you look at also our overall geographical expansion, we've done very well in growing our international piece of our business. We've grown it from 14.3% to almost 18% over that four-year period.
Keep in mind, we have almost doubled our revenue during that period. In doing so, we have grown the international side even faster than that overall, which gives us just a better, solid, resilient revenue base to really grow off of. When you look at also how we have been operating overall within our clients, we've done very well in getting embedded into our clients, creating more services and solutions for our clients, and getting deeper into our client base. When you look at the growth within our client base, we are really building larger and larger clients overall. If you look at just clients that are $50 million or more in total revenue, we've grown from having two clients back in 2020 to five clients with an average revenue of around $85 million in 2024.
Each one of these buckets has grown during that period. It's really our ability to create more and more solutions and more services that are more impactful for our clients to get deeper within their organizations. Now, with our pivot to data and AI, and particularly embedding more AI into the workflow, we have to invest more. Anita touched on this a bit earlier. Over the last four years, we've increased our investments almost 4x, going from $20 million to $74 million during that period. What really has funded that is our expansion in gross margin. As we sell more value-add services, particularly in data and AI, that should drive a higher price and drive a higher gross margin.
During that four-year period, we grew our gross margins 270 basis points during that period. If you just look at the first quarter, which we just released last week, our gross margin was 38.6%. We are continually driving gross margin because we have to fund investment now going forward to really drive our strategy of embedding AI into the workflow. You will see more of that going forward overall. In building just the overall solid business and the growth within our business, we are now starting to really drive a significant amount of free cash flow in our business. We are solidly now driving over $200 million a year in free cash flow overall. If you look at 2024, our ratio of free cash flow to net income was well over 100% during that period.
Now, as a larger entity, as we are growing significantly, we have more capital to deploy. We will continue deploying that capital to really the three buckets that I have always talked about. It is M&A, which is really small to medium-sized acquisitions, opportunities in the market. It is allocating some capital to reducing debt overall, although we do not have a significant debt balance on our balance sheet. Then lastly, allocating capital to share repurchases, which we have done significantly over the last four years. We have reduced our share count by 5% during the last four years. We have plenty of authorization to continue doing so. I will remind you that we were authorized to repurchase shares back in March of 2024 by our board. They allocated $500 million to share repurchase. We still have around $300 million to allocate still to share repurchase based on that authorization.
We will continue to be opportunistic in buying back our shares going forward. It just shows that we have very healthy free cash flow in our business that we can really deploy now going forward. What this has really driven overall in our performance is our return on invested capital. If you look at our ROIC back in 2020, we were hovering right around 8.9%. It has virtually doubled during that four-year period to 17.7%. That has really been driven by many different things. It is really us optimizing or driving our P&L, profitability, and just the overall growth in our overall business. It is us effectively managing our total asset base overall. It is also being helped by us having a meaningful stock buyback program. All of these levers really have helped us grow that ROIC significantly over the four-year period.
Lastly, it is really our confidence in our financial performance going forward. Now, we've talked a bit about our guidance for 2025. Our guidance for revenue is between 11%-13% organic constant currency growth for the year. We're very confident in terms of being able to achieve that. I talked a little bit about driving gross margins higher to really fund investments overall. You saw that in the first quarter to a large extent. We had a much higher gross margin while we're also going to be investing pretty significantly during the year to continue to drive that top-line growth. When you think about AOPM or adjusted operating profit margins, you should be thinking it's slightly higher than 2024 for the year. All of that should flow down to driving EPS or adjusted EPS at or above our top-line growth overall.
That was a little bit of what you saw in the first quarter also. You will continue to see that during the year. You will see during the year us investing fairly significantly overall and coming right in line with those AOPM and EPS projections. Going forward, when we think about our medium-term targets, it is very similar to how we think about the year. When we think about medium-term, we think about 2025 and 2026. When you look at our pipeline, our pipeline is very robust right now. We have so much opportunity. Rohit talked about a lot of that when he talked about our TAM being at $1.2 trillion. There is a significant opportunity for us right now. Our pipeline is reflecting that also.
We are confident that we can continue to grow our revenue base at double digits going forward, particularly into 2026 and over the next 24 months. We do think that we can continue to make incremental improvement on AOPM. Lastly, we are confident that we can continue to drive EPS higher than our top-line growth, which is really important to us. It is something that we really focus on overall within the company, and it has the attention of all our senior management group. Overall, when you think about our guidance going forward, we do believe that we can continue this momentum in the medium term and achieve that double-digit revenue growth while growing EPS faster than revenues. I think from there, we will take a slight pause and we will set up for a Q&A.
Okay. That concludes the formal presentations. We just need a second here to get the chairs up on the stage for the Q&A. We hope you found the content to be insightful and valuable. We tried to go very deep on AI in the workflow. We're sure that'll generate some questions here. Just give us one second and we will proceed to the Q&A. Oh, one other thing, because we are still live webcasting, please wait till you have a microphone to ask the question. Otherwise, people online won't be able to hear what you're asking. Thanks.
Thank you. Jared Levine here for Bryan Bergin. To start, Maurizio, can you talk about how we should think about target gross margins here in terms of the potential across both the data and AI business as well as the digital operations over the medium term?
Yeah, sure. So when you think about gross margins, we ended the year last year right around 37.6%. And we do project that we will be growing those gross margins every year. Now, when you look at the breakdown between data and AI and the digital operations business, over time, you should see a higher gross margin on the data and AI side of our business going forward. That should trend slightly higher overall. For this year, you should see our gross margin being in the low 38% range. Now, we were a bit higher in the first quarter, like I talked about, at 38.6%, but our AOPM was also higher during that period. When you think of gross margins, at least for 2025, they would be in the low 38% range.
Got it. One follow-up here. As you continue to shift more towards higher value mix of business, how may that impact your future geographic or operating footprint here?
Sure. So when you think about our geographic footprint as we go forward, there is a significant opportunity for us overseas, particularly on the international side. We have grown international faster than the overall EXL revenue base for the last three, four years. We do project that that will continue now going forward as we continue to expand, particularly in Europe and in Australia.
Hey, it's Puneet from JPMorgan. Great presentation. I wanted to ask about change management involved in when you move from people-based delivery model to more AI-infused, both at your client side as well as on EXL side. If you can talk about that, has that been a constraint as you try and sell some of these solutions to your customers?
Sure, Puneet. Firstly, I should have said you hear a lot from me and Maurizio as such. Today, our goal was to make sure that Vivek, Vikas, Anita, and Andy address many of your questions. Both of us will step in as and when needed, but we are going to pass it on to these folks to address the questions. Anita and Vikas, do you want to kind of address the change management question that Puneet raised?
Sure. Change management has become more and more part of our end-to-end solution offerings. Our clients do see the challenge that they face when they're bringing in new AI capabilities. We're kind of using our training, our consulting, the change management skills that we've built up through doing transitions for our clients. We use that now to transition them into solutions that typically involve both AI and human support. I think internally, I talked briefly about our new operating structure. What that is really designed to do is to unlock our capabilities across markets and across our capability and expertise areas. We moved to that model quickly. I think what we're using is kind of a unifying strategy of AI in the workflow to keep us all pulling in the same direction. Vikas, what else would you add?
Yeah, sure. Let's break this into two parts. One is the change management challenges on the client side and how we're addressing that, and also the change management that we're driving within EXL as we scale this up. I think on the client side, there are two kinds of challenges in change management. One is when we're trying to drive AI in the workflow, it does conflict with broader technology change agendas. What you'll find is that the business actually has great buy-in into what we're trying to create. But because there's an already existing technology change agenda, working out how to drive alongside it is something that we have progressively become better at over time. The idea is to quickly move into a POC, demonstrate the outcomes. Once the results start showing, you can start accelerating the implementation.
The focus is on AI in the workflow and how quickly can you start. Andy spoke about it. Speed becomes more important than thoroughness because once you deliver results, then you can actually make it thorough. The second part about change management on the client side is more nuanced in the practices and many times culture. Let me give you an example. We actually implemented a virtual agent and a virtual Agent Assist at a client side. Even though it was showing great results, we found adoption was very difficult to go up. Because we understand operations, we realized that the reason adoption actually is not going up fast is because it was increasing the pressure on the agents, the human agents, because they were not getting any relief because now AI was doing the work, which used to be their low-intensity work.
We had to adjust the capacity model such that the relief is still there, but AI still gets the chance to do the work, which took the adoption up. In terms of EXL, I think this is something that we realized about two years ago. We realized that there are two kinds of colleagues we have in EXL. One are people who are working on AI. These are the people who are actually creating solutions, doing configurations, prompt engineering, the data scientists, the data engineers, the AI solution architects. This is all about creating that capacity and keeping that capacity concurrent with what is happening in the world today. It is all about investing and making sure that you are building up that capacity and making it more and more conversant with the latest technologies.
We have people who will be working with AI because they do not have the skill set to be able to design AI solutions, but they still need to be conversant with AI technologies because now they will be working with AI, which is sometimes taken for granted. That is one of the big hindrances towards change. We are investing a lot of our focus on helping those colleagues become conversant with AI, become more comfortable with how to work with AI. Those two motions of building up capacity on people working on AI and sort of upskilling and better informing people who will be working with AI, I think those are the two things that we have with respect to change management within EXL.
I just want to add one very quick thing to this. See, the technology is evolving so fast that even for people who are equipped and very well-versed, things are going to change pretty quickly. What we have done, one, is we have made a sandbox available to our teams. Any new technology that comes, we port it pretty quickly. When DeepSeek or Manus came out, we made sure that it is on the stack so that people can experiment with it quickly and get comfortable. Secondly, it is becoming increasingly difficult for organizations to think about how you really manage this evolution. Our L&D team, our learning and development team, is constantly looking for ways and means to engage.
We also have to make sure that we keep pushing because it's very hard for any organization to keep up and design programs; you're going to do something today, and it's just going to change tomorrow. Sandbox is one method. Culture is another. Great question, Puneet, because it's the least talked about topic, but it actually is sometimes the biggest impediment to getting the AI mandate through.
Thank you.
Surinder Thind, Jefferies. Can you maybe talk about how limiting current technology is with respect to the client's ability to adopt some of the solutions that you have and what that client journey looks like from the point of where you have that conversation to you ultimately implementing a solution?
Sure. Vivek and Andy, maybe you guys want to cover that.
Sure. Surinder, thanks for the question. I'll give you a perspective of how the client's thinking has changed even over the last year or so. Last year, when we were having some of these conversations, the conversation, particularly on AI, was still about POCs. It was still about looking at different use cases and testing things out. Experimentation was the key theme. I think today that conversation has changed quite dramatically because it's not about, "I'm doing 50 use cases or 100 use cases." It's about, "Here are the four things that I'm doing to transform using AI." "Here are the four things that I'm doing that are going to drive real tangible results." That's at the expectation level and where they want to engage.
I think when you go a little bit underneath it, we're looking at a meaningful change, because the biggest change with Agentic AI is that when you start now thinking about the workflow, the conversation has become about, "Well, AI will determine the workflow, the decisions." They are no longer expecting us to be conversant with workflow technologies and so on because that's getting a little replaced. We do not need to worry that much about systems of record or enterprise systems, data systems, because now what you focus on is just the APIs for extracting that and then bringing it into the AI. You have the focus on looking at all of the unstructured data, which was never part of the enterprise systems of record.
I think the conversation has now changed to saying, "Tell me what are your skill sets within the AI stack." That is very different from some of the conversations that we were having even a year ago. Fundamentally, I think that has changed the playing field for us because the questions that we are answering right now suit us a lot better. We are kind of completely aligned with our strengths. Whereas if I look back to three years or four years ago, it was more about technology. Now it is about the AI.
I'll just add maybe just a couple of quick points here. Firstly, I'll stack clients maybe in two or three buckets. One that are mature, have got seven or eight solutions already deployed. There you're seeing a flywheel of a culture change, adoption, and let's keep expanding. They also realized that some of the early investments perhaps got wasted, but it helped them build the foundation because newer things evolve. There you'll see, exactly like Vivek mentioned, the conversation has changed. Data, domain, and AI truly help because you're going to solve for a value, and then you're going to work backwards to, "Do you have the data? Do you have the deep understanding of what problem you're solving for?" and then what AI technologies will make the most sense. If you have that conversation, it really helps. Three, breaking the deployment into chunks is always better.
Twenty-four weeks and thirty weeks is not going to fly. Break it down into, "Okay, large problem, but in eight weeks, six weeks, you're going to see sprints of deployments." Like Vikas was saying, domain application becomes important. I can open up all five things that AI can do, but the human's not going to adopt the change. You have to work with them with value and outcome. Without that, these things fail because they're still expensive, even though the costs have come down significantly. Break it into chunks and then help them understand what the reimagination will be, but also what's the path to reimagination. That eases it out. I'll still say some are ahead, some are getting comfortable, and there are some that are still coming up the curve, and that's what sort of makes the opportunity very exciting.
I actually have a follow-up with that. Just more broadly, as you think about the consulting versus the digital operations side of the business, on the consulting side, how is the revenue model changing in the sense of if you were to use Agent Assist within a CX example? Obviously, in digital ops, you own the whole process. You own the technology, and it is part of the value proposition. Can you talk a little bit about that and your ability to charge for the models and things that you build and how clients are evolving?
Sure. I'll take that same CX example, Surinder, that we showed you in the presentation. That's work that we're doing for a very large financial services company. We stepped into it absolutely de novo, where we basically went in and said, "We are going to advise you on what is the journey that you need to take in terms of modernizing your entire CX stack. What is it that you need in terms of the data capabilities? What are the interventions that you need in terms of the work, the analytics or the intelligence layer that you saw? What are the capabilities that you will need to make this transition happen, the frictionless transition of creating the omnichannel experience?" We started that off as an advisory consulting dialogue.
Over a period of time, it's now transitioned into us actually getting the mandate for running the operations as well. This is effectively the tip of the spear strategy for us, where you have the ability of actually going in, playing that advisory role, playing that consulting role, charging for it. The monetization of it is not just the project. It's also the flow-through revenue that you get and the opportunity that it unlocks for you to do a lot more. It's become one of our kind of largest modern CX implementations now. It started off as a consulting project.
If I can just add to that, for us, true north is getting an integrated deal, which is we run the operations. We set the data structures and the data flows under that. We drive the AI in the workflow in that solution. We get into an outcome-based kind of a commercial construct so that we can actually create value for the client and retain some of that value. For us, that is a true north. Vivek, in fact, showcased one of the examples on that today.
But it doesn't necessarily start at that point with clients because clients would want to either start with, "Okay, run my operation and then later on bring in these elements," or, "I want you to first think about transforming my operation using AI," or, "I need help on the data side." Irrespective of where we start, our objective is to eventually get to the integrated part of the deal. There's one more important element in terms of the strategic value of these things. Let's break this into three parts. One could be a purely consulting kind of an engagement, which is let's take that LLM example. The client says, "I want an LLM. Can you build it for me?" In that case, it becomes a consulting kind of an engagement where we go and build an LLM on their side of the technology.
The second one is a solution wherein we have actually created an LLM, but then we're saying, "You can now use it and pay per use." The third one is we say, "I can actually do the claims processing for you. When I'm doing it, I'm going to use the LLM. And because I'm using the LLM and the Agentic AI as an outcome of that, my claim processing is going to be much better." Our eventual objective is to move towards that latter, the last part, which is, "I'm going to run your claims operation.
It's going to come with an LLM, and it's going to be much, much better than everything that you're doing right now. The reason consulting engagements are better for us, not only because sometimes they're the tip of the spear, but they're the best places for us to learn because eventually what we'd like to do is to create our own intellectual property. You create your best intellectual property as you are in that services and consulting business because that's what you then use to be able to then create your own solution. That's important for us, not because it's many times tip of the spear, but that's the best grounds on which you basically start creating your intellectual property.
Thank you. Two more client questions here. Rohit, can you provide any color or comment on potential succession planning? Obviously, you have a very talented, deep bench here, but any comments you can provide?
You can see the bench. It's very good. Look, I think we take succession planning very, very seriously, both for my position as well as for the broad leadership of the organization. The process is very robust. We discuss that with the board on an annual basis. We do follow-ups continuously. I think what you will see is at EXL, we've got a lot of longevity and tenure of talent. That means we are grooming a lot of leaders within the company that can take up higher responsibility as we move along. I think what we've got, again, within EXL is a very deep and a broad bench strength of talent. That's something which we feel very good and comfortable about. I think as things are progressing, there are different roles and different responsibilities that people are playing.
You would have noticed that with our new operating model change that we made, we also switched some of the roles and responsibilities. Vikas, who used to handle the insurance business earlier, now runs the strategic growth units and is also responsible for analytics. He is also responsible for payment integrity. He is also responsible for the running of the domain operations and the platforms. That is giving him exposure across a wide set of capabilities that we have. Vivek, who used to run all of data analytics, has switched over and is running our entire business on insurance and healthcare and going deep into these industry verticals. That is something which is broadening out. Kinney, who used to run our emerging business unit, has taken up responsibility for banking and capital markets and the diversified industry group. That gives him broad exposure on that.
Vishal, who is our growth officer, has taken up responsibility for our growth in the international markets. Each one of these changes is designed to give leaders within the organization the ability to take on new functions, new areas of responsibility, and to elevate their leadership status. I think that's a very intentional and deliberate process that we run around succession planning.
Great. One from Maurizio here. Can you talk about expectations surrounding stock-based comp as a percent of revenue over the medium term here?
Sure. Our stock-based comp has been a little higher over the last couple of years. We are very focused on that going forward. I think you'll see it start to flatline a bit in 2025 and 2026, and be much flatter as an overall percentage of revenue. We did a lot over the last two or three years to ensure that we had the right level of stock compensation allocated to our senior management group. I think we've done a good job there, which now gives us the ability to flatten that out.
Hey, team. Thank you so much. Way over here on the left. Thanks so much for the presentation today. This is Brendan Biles from Puneet's team. Could you elaborate a little bit on the customer experience, sort of omnichannel business? I thought that was really exciting, but also potentially just sort of like a fraught area for clients where they might be a little more worried about AI and whether they're going to make the right choices. How big is that operation for you guys today? How big do you think it's going to be, and what are clients most worried about on the consumer experience side? Thank you.
Sure. I'll take that one on. First of all, as you probably know already about EXL, the call center work or call operations work is a very small percentage of our revenue. Historically, we've always been more into doing the high-end work. Call center has been a very small percentage. What we are doing right now with CX modernization is actually a brand new area because it allows us to kind of open up a new market. We feel very strongly about this particular space because what we are seeing is all of our clients, be they ones working with internal call centers or with other third parties, they all today have almost a split personality. They've got different channels for treating customers through digital channels, through the web, and completely different with call centers. It's not kind of coming together.
Our value proposition, and we started this a couple of years ago, was to bring that together into an omnichannel experience. We focused on creating the digital engineering capabilities for bringing that together. Then we went a step further and said, okay, what we now need to do is bring in our data science and analytics capabilities to become the intelligence layer: once we've got the plumbing, how do you actually start making decisions? What's the next decision you need to take? What we've tacked on to that is agentic AI, which is now actually making those decisions possible, but is also handling the routing and solving for outcomes. We've built this up incrementally, and today you've got the full suite.
I showed you the example for wealth management. We have just won a very large mandate for a healthcare customer, and we are taking it to some of our other customers within insurance. It is really opening up a completely new space for us. On your question about whether customers are reluctant to use the AI: as long as you establish the guardrails and keep the human in the loop, customers right now are very open to saying, I see how this experience is so much better, therefore I will switch over to it. The NPS numbers they get from the CSATs are really the clinching argument, because you see that 20-point jump in NPS and you say, I've got to have it.
Vivek, I'll just chime in with another point. See, one other dimension you have to think about is that we handle very complex work. Sometimes the source of friction can be that input point, which could be a call. From that lens, these walls between front and back office are sort of falling apart. It is more like an experience that you give end to end. The other important element is that data that you capture and the insight that you capture has a downstream impact on value. It is not necessarily that we are going after these simple call centers. These are more end-to-end experiences in which you're trying to get the insight. You're trying to get that data, and you're removing that friction so that you can deliver a unified end-to-end experience. That is where customers in the past have not been able to solve this.
That's perhaps the tailwind for us.
You know, one of the things that you have to always consider is that we are very selective in the customer experience work we do. We do not do commoditized, simple, price-sensitive work at all. In fact, most of the work that we do, which requires a customer interaction on the phone, is very deep in the domain. Just sticking with the claims example, we do not do pure claims call center work. We do claims processing work. As part of that, it requires customer interaction. It could be a lodgement, or it could be claims inquiries. It could be medical-related discussions. It is very deep in the domain. These do not have the size and bulk of large commoditized contact centers, like, for example, commoditized selling. We do not do that work at all.
Even with AI infusion into CX, which is the value proposition that we bring to the table, we are still very selective: we do work that is very deep in the domain. We also differentiate from what you would call the contact center providers. Think about the funnel through which we think. First, can we eliminate the call from happening in the first place? That draws on our understanding of how the operations and the data flows work. If we cannot do that, can we change the mode? Can we offer a self-service module, or a digital mode of interaction? If we cannot do that, can we have an AI agent take the call, like the prior authentication example?
If we can't do that, can we have the human agent take the call, augmented by an AI assist? To bring that funnel to every CX opportunity, you need the domain strength and an understanding of how the data flows work. Our value proposition is very differentiated, and we're very selective in the kind of CX work that we take on.
Yeah. Maybe I'll add one more thing to that, which is our clients are seeing this now as their best strategy for coping with both ongoing cost pressures, which you see in cost of medical claims, and expectations from their end user. How often do you want to make a phone call to a health insurer? I'd say never. Yeah. One of our large clients has an objective of eliminating the need for any member calls whatsoever. That's why we're taking the approach Vikas just described. Thank you.
A follow-up question about the decision to go down the path of building your own LLMs. In the presentation, I think you mentioned a couple of years ago the idea was that you were not going to go down that path. What prompted the decision to go down that path? Obviously, you guys have seen really good success. Can you maybe provide some color on how you think about maintaining that success versus the next generation of frontier models as things are coming out at an ever faster changing pace?
I'm happy to chime in on that. That's a great question because you're right, the pre-trained models are evolving really, really fast. We always stay abreast of what's happening. Like I mentioned, one significant investment we made is a very focused R&D team, which evaluates every new model and every new development every week so that we are staying on top of things. However, what we have seen is that complex and intelligent workflows require very high-precision AI; your accuracy standards are very high. Take the example Vivek chose, and I'll just use the insurance LLM here: we started working on claims adjudication with a pre-trained model. What we found was, one, the summary was very verbose, and two, it was missing important elements. The client adjuster completely dropped using the pre-trained model.
They said, we're not going to use it. I cannot let the indemnity decision go wrong, and if I have to check everything, I just can't trust it. The big moat there is, one, we had the private data: claim records that we've been handling for over 12 years, the understanding of the Q&A pairing, what response generates what output, the tagging. Two, we had the domain knowledge to do that. Yes, it first started with experimentation, but when we did that, we immediately saw results, and it was leading to much better accuracy. We took a smaller open-source model, trained it, and added 2 billion tokens. One, it was much faster. Two, it was significantly more accurate. Three, obviously, it also cost less. You saw that model monitoring in the accelerator demo.
We always keep monitoring it. In some cases, it's actually going to happen that maybe a pre-trained model will get better in some ways. In that case, you might discontinue. In most cases, the data, private data, the domain knowledge, and the need for that high precision would be the only reason that will drive us to just create our own LLMs. That will be the differentiation. Otherwise, if a pre-trained model is going to solve a problem, we'll just use the pre-trained model.
Just a quick perspective to add to that, Surinder. This is an example of why we think that the market is coming to us. If you take a look at today where the AI growth rates are happening, vertical AI, which is AI aligned to those particular use cases, trained for those use cases, and very, very specific to a certain domain, is actually growing at a much faster pace, orders of magnitude faster than horizontal AI. Now, what does that mean? It means that that premium for the precision, the accuracy, and being trained on that data is there not just for us, but for other vertical AI providers as well. What we did is we saw the opportunity, we recognized it, and we've kind of doubled down on it right away.
Because the data that we have, all that data exhaust from all of those decisions in our workflow over the years, that data has now become training data for those particular use cases. All of those algorithms, the project work that we did in the analytics, that has become now the intelligence layer for how do you make decisions on the workflow. And the domain expertise for saying, how do you stitch all this together, that's been there in the company 20-plus years. It is the ability of actually saying, well, the market's moved to us because this has become important. It is identifying all those skill sets that we have and then moving very quickly to say, let's double down on this and make use of it. That is what you are seeing in our journey today.
Sometimes an experiment actually reveals the opportunity that you're sitting on. When we worked on the first insurance LLM, we didn't know what kind of results to expect. What we realized was that the generic LLMs weren't performing as well. We also realized that techniques like RAG were getting us accuracy, but were adding cost and latency. We said, why don't we try this out? For the first time, I think we really realized how powerful the data assets we have built over the years are. I mean, we gave you an example of medical records, but I can extend that to underwriting, claims on the medical side, the payment integrity databases.
If you look at all of that data, the property databases, workers' comp databases, if you look at all of that data and the domain knowledge we have supporting that, each of those have tremendous opportunity of actually creating these fine-tuned solutions. This is an experiment which really opened our eyes in terms of the opportunity set we are actually sitting on.
Just as a final follow-up here, it sounds like your access to private data was one of the key points of differentiation here in building those models. Does that mean in general, with most data being private, whether it's banks, insurance companies, retail companies, whatever it might be, that we should see an arms race or proliferation to building as many industry-specific LLMs here? How do you go about determining how many to fund, how many to build? Because it sounds like there's an opportunity to invest as much as you could possibly want. How do you guys make that trade-off?
I think it.
Yeah, yeah, sure.
I think it's really going to come down to the investment and the use case. But yes, let's address the data issue first. Do I believe that proprietary data is going to be a massive differentiator? Absolutely, yes. And what we've been doing, thanks to the work that Ajay has been leading, is securing the data usage rights across all of these different assets and saying, let's understand that, let's build on it, let's keep getting more and more of it. We're building up that dry powder. That will create the advantage. The second part of it is that you start the experimentation. You start asking, can I use that data to drive certain use cases? What do I build out of it?
The proof is in the pudding when you start seeing the outcomes there and you start seeing the comparison of how you're doing it vis-à-vis others. The final piece of this is actually the orchestration, the bringing it into a client use case of saying, is it adding value to the client? As a consequence, is it adding value to us, to EXL? That is where the decision gets made of saying, do I want to use the internal EXL LLM, like Andy showed, or do I want to use what's available out there in the foundation layer? Can I use something else? We have all of those choices available to us. That decision really gets made on the ROI. What is providing the biggest lift in terms of the outcomes for our clients?
See, we're working on both things. If you saw the EXLerate.ai Platform, we have a motion where we're actually creating Agentic AI solutions, which is about orchestrating these agents and creating solutions. We've spoken enough about claims. Let's talk about underwriting. We are looking at underwriting assist. We are creating that. In parallel, we're working on underwriting LLMs. Now, just like medical records, we actually have an underwriting platform, which is state-of-the-art digital, highly data-based platform that we actually use for launching new products in the market for the life and annuity providers. That's a platform. It gives us an underwriting rule set, and it gives us massive amounts of data on the underwriting side, which you need for underwriting decision-making. What we're doing is we are basically creating these underwriting Agentic AI solutions, and we are creating these underwriting LLMs.
Now, there is no sequence here because we can use this agentic AI solution, which is orchestration with generic LLMs. The moment we find that we actually have matured an underwriting LLM for a specific use case, let's say it's individual life in a certain market segment, we can then just replace the generic LLM with this thing as long as we're getting an uplift. There are two different motions, but our ability to then move things around, because one is about processing and the other one is about decision-making. Both of these things are actually working in parallel, and I think we will actually keep evolving that.
Anita, do you want to address the investment question?
Sure. Just responding to your question, Surinder, about the proliferation of LLMs. There may well be something of a proliferation of LLMs, but what you just heard Vivek and Vikas describe is that anyone can make a medical record LLM; you can get access to data and go do that. What is really unique here is that we are fine-tuning those LLMs for the business process, for the complex questions that our clients need us to address with that AI. That is not something you can do with the data alone. Imagine you can query a medical record: did the patient have diabetes? That is not going to help you with a payment integrity claims audit. It is not going to help you with an insurance summarization of an accident.
You have to combine both the core data and the understanding of the output of the process and use the process output data and bring that together. You will see more LLM specialized by process. As far as the investment goes, we are looking at, exactly as Vivek said, what is the value that can be generated through advancing our capabilities in a particular area. As Vikas was describing, looking more favorably at those areas where we can create an LLM or another capability that is going to be deployable across multiple use cases or multiple business processes. We are looking ahead, doing typical ROI analysis before we invest in a new solution. Even before that, when I mentioned R&D, what we are doing is making funding available to do the kind of experimentation that Vikas talked about as well.
Surinder, we will need to make more and more investments in terms of creating this capability. We'll need to charge our clients appropriately for creating that capability and making sure that we can take up our gross margin and then fund more and more of this development. For us, that virtuous cycle is going to be something which is how do we go towards the higher end of the value spectrum and we deliver more value to the client, get our fair share from the client in terms of the value being delivered, and with those extra dollars, reinvest that in terms of creating this capability.
Okay. I'm going to have to cut it short here. I'll stay off the stage so as not to hurt myself. That concludes the live webcast. Thank you all for joining us today. For those in the room, we can continue the Q&A right over there for lunch. These guys are all going to be at different tables. Please feel free to remain, join us for lunch, and we can continue the conversation there. Thank you.
Thank you.
Thank you.