Good afternoon, everyone. Welcome to TCS's Analyst Day 2025. I welcome you all on behalf of the entire TCS management. I also welcome everyone listening to our webcast live from our website. As you are aware, we don't provide any specific revenue or earnings guidance, and anything said in today's session that could be construed as a forward-looking statement must be viewed in conjunction with the risks the company faces.
You can view some of those risks here. Please note that we will not be taking any Q&A during the speaker sessions. Kindly hold on to your questions for some time. We will have an executive Q&A session at the end of all speaker sessions. We are also recording these sessions, and post-event, a recording of all the sessions and a transcript will be available on the website. Without taking too much time, I would now like to invite our CEO and MD, Mr. K. Krithivasan, over to you, Krithi.
Thank you, Nehal. Once again, a very warm welcome to all of you for joining us here. Can we have the first slide? Over the next hour or so, what we want to do is show you what we are doing, why we are doing it, and why we believe this is the right strategy. All of us have seen multiple technology cycles pass through, and the important aspect of each of these cycles is that every time a new one arrives, it puts technology right at the center of the business, and every business ends up spending more on technology because the value they get from it keeps going higher and higher.
And the value TCS has been able to derive comes from mastering each of these transitions, whether from mainframes to personal computers, from personal computers to the internet and the web, or from the web to digital. TCS has been able to use every one of these transitions as a growth accelerator and grow further on each technology change. And of course, what we are seeing today is a new technology in generative AI. It is really a misnomer to call it just a technology; this is a fundamental shift. It is very different from the previous technology disruptions we have had, because of the scale with which it is going to impact, the speed with which it is going to impact, and the benefits we can deliver from it.
In fact, for this reason, our chairman called it out as a civilizational shift. It is a very important change and transition, and we want to talk to you about how we are going to navigate it and what we mean by this shift. First, what does a digital enterprise look like? On the left-hand side, you can see some of the characteristics we laid out for a digital enterprise some time ago. Now, how do you characterize an AI enterprise? I think it is important for us to first characterize what an AI enterprise is, and then talk about why we have this strategy and how we justify it. One of the most important aspects is that a digital enterprise was data-aware.
It had access to all the data. But an AI enterprise is what we call context-aware: it has complete knowledge of the context in which the data is available, and decisions are made within that context, not without it. That is what differentiates an AI enterprise. Similarly, a digital enterprise made on-time reporting possible every day; an AI enterprise adds advanced analytics and reasoning, explaining why the report says what it says and what smart choices and actions should be taken based on the reporting and the reasoning available. Third, in the digital enterprise we talked a great deal about automation; if you looked at our layers, you would see a data layer, an enterprise layer, and an experience layer, and the focus was on straight-through processing: how do you automate everything end-to-end?
But in an AI enterprise, the focus is going to be on autonomy: how systems take decisions on their own, based on reasoning and judgment, supported by agents and supervised by humans. That is the nature of an AI enterprise. And the software you see here is not rule-based software; it is software that can learn, adapt, and take decisions. Essentially, in a digital enterprise it was about data informing humans; here, AI becomes a decision coach. In fact, we did joint research with MIT, and they came up with an overall architecture they call an intelligent choice architecture.
A future AI enterprise would have a choice architecture that puts the choices in front of you, not just the data points; it presents the choices, helps you take the decision, and creates a feedback loop so that the next time you have to decide, that past experience is taken into consideration. So this is broadly our view of what an AI enterprise should be. Please keep this definition of an AI enterprise in mind; it will come in quite handy later when we describe what we are doing. We have been doing many projects. Since 2023, when the ChatGPT moment happened, we have been working with our customers, helping them do POCs and explore and experiment. So many things have been happening.
By now, we have done more than 5,500 projects. As all of you saw in the immersion sessions, some of the basic engagements we get into are about how we help customers accelerate AI adoption. They ask us to come and see how we can help them change their culture and adopt AI at scale, but the interesting projects are also about how we help them anchor AI to their strategy and business value, because AI is a lever that has to be embedded in a strategy; otherwise, it will not deliver value. Second, we are helping our customers make technology choices in the context of their business strategy, and more importantly, we are actually helping them design for change.
This is a very important aspect, because these technology cycles are happening more frequently than before, which means what you buy or build today may not be fit for purpose two to five years down the line, and that window is shrinking. Even within AI, the technology is evolving so fast that we need to build and choose technologies that can adapt and change very quickly, so designing for change is very important. That is what we are helping our customers with, along with getting the right operating model and, more importantly, the organizational culture. Culture is a critical aspect as you adopt AI, because the roles of people in the organization are going to change.
For instance, even in an IT services company, the mix of how many programmers, coaches, and trainers we have will keep changing, and new roles will keep evolving. An organization should be dynamic and adaptable; that is what we are helping with. The next one is assurance: how do you establish the right guardrails? Without guardrails, many of these programs will fail on day one, because if the AI does not give the results you expect, or says something it is not supposed to say, it is a huge setback. That assurance, being ethical and responsible, is very important. And the last is ROI certainty. We can do a few projects as experiments, a few POCs, but eventually enterprises have to get the ROI benefit.
The kinds of projects we are doing with our customers broadly fall into these categories. The important aspect is that, based on many years of deep customer experience, the kind of projects we are doing is also moving us up the value chain. In the past, we helped customers by building systems: somebody designed what needed to be done, and we went and implemented it. But this technology shift, together with the deep customer connect and context we have, is helping us move up the value chain now. And the number of projects we have done gives us the experience and the confidence to aspire to be in a league of our own.
And we are on a journey to build a future-ready TCS, built on the vision we have put forward: to be the world's largest AI-led tech services company. We believe that with the customer context we have, the deep customer relationships we have, the experience we have built, the strategic investments we are going to make, and the strategy we have, we are really poised, and in fact we feel we are destined, to be there. We are taking every step to achieve that. Driving this AI-led technology services company ambition are these five pillars; essentially, we have defined five broad pillars on which we will be working.
While Aarthi will come later and double-click on each of these five pillars, I just want to take a minute to explain them. First is achieving the internal transformation: essentially, we want to be our own customer zero. If we want to tell our customers that they need an AI-first culture, a culture where they can change themselves readily, we want to do it for ourselves first. In fact, a massive internal transformation exercise is going on, and we are encouraging all our associates to adopt an AI-first culture. What does an AI-first culture mean for us? Every time we do a project, every time we engage with our customers, the first question we ask is: what can AI do here?
Can AI do something better than what we are doing, even if it is going to cannibalize our revenue? That is how we define an AI-first culture: giving AI the first right of refusal before we consider any other option. And of course, we are building our own AI solutions and scaling with AI; I will let Aarthi talk about each of these. We are redefining each of our services. We have a new leader who takes care of AI and service integration, and we are redefining every one of our services in a structured way. And we are relooking at our talent model: how do we train our people, what do we train them in, what should the structure of a team be, how does a project get delivered? We are relooking at every piece of this talent puzzle.
And fourth, while all of us know AI can deliver productivity in software engineering, we believe the best value from AI will come when there is business change: when we deliver business change, introduce new products and services, or give a better customer experience. So we are reimagining the customer's value chain. For example, for one customer we reimagined their claims process without displacing the people working there. The call center agent continued to work, but could now focus on being empathetic to the customer during the conversation while the data was collected at the same time, closing the claim in a very short period of time. There are a number of such examples where we reimagine the customer's business value chain.
We believe this is where the maximum value release will happen, and our customers are very keen to work with us because of the deep contextual and domain knowledge TCS has. The last pillar: if we want to broaden our play, we need to build more partnerships, be more acquisitive, and get into new ventures. We have spoken about some of them, and I will talk about what we are doing. The most important reason for this is, as I said, the speed at which this technology change is happening. To address that speed, we cannot build everything on our own; we need to partner with others and acquire capability wherever required.
That is why we have chosen the ecosystem play as the most important pillar in this journey. So where is the money being spent in this overall AI transition? We looked at it as a stack of five layers. Money is being spent on building new infrastructure and developing new hardware. Customers are building, leveraging, or fine-tuning models and creating SLMs. Once you have the models, you need the frameworks and platforms in which they can be leveraged. On top of that, you need the agents that can be built on these platforms to deliver value. And last is how you show this intelligence in action, which is physical AI.
You will probably see a quadruped today, where we have integrated physical AI and digital AI. Of course, we also provide integration services and conversational AI using digital AI. What we are doing is playing across the entire stack. It is important for us to play in each of these layers so that we provide end-to-end value to all our customers; only by playing in the entire ecosystem do you give customers the best value. We see many of our customers engaging with us first in creating models, then in using a platform on top of them, then in building agents, and then in conversational AI.
When a customer engages with us across the entire value chain, there is more and more value for that customer, and that is why you see us playing right from the infrastructure layer. We announced the creation of Cyber Vault. We work with many of the chipmakers today and help them in chip design. We work in the devices and platform area. We integrate models for our customers, build SLMs, and fine-tune LLMs for our customers' domain needs. We have created platforms like ignio, among a number of platforms. CodePlus is a very interesting platform that helps in migrating from one technology to another; it can handle any-to-any technology transformations, reducing tech debt. And of course, we have built agents on top of these for many domain use cases.
Of course, I talked about the quadruped and other intelligence services. As I said, while we build most of this internally, we will also be partnering: developing deeper partnerships with our customers and with partners like all the hyperscalers and AI-related companies, and also acquiring. We talked about the ListEngage and Coastal Cloud acquisitions; we are being more acquisitive. There is another angle to why we do this: by participating end-to-end, our partnerships across the ecosystem also become 360-degree, which creates a very virtuous cycle for us. A hyperscaler is our go-to-market partner; we consume hyperscaler services; and we provide services to the hyperscaler.
And while that is true for a hyperscaler, the same applies to an industrial company from whom we buy switchgear, or to an AI company; playing end-to-end gives us multiple opportunities to integrate ourselves into the AI ecosystem. Our end-to-end strategy is very deeply thought out; it creates stickiness, the ability to deliver a lot of value to every stakeholder, and a very strong play in this ecosystem. Why do we believe we will succeed with the strategy and investments we have laid out? Look at our clients: we currently have about 60 clients with whom we do $600 million a year in revenue, and of these 60 clients, 54 are engaged with us on AI work. As I said, we have done more than 5,000 engagements.
These engagements have resulted in a satisfaction score of close to 95%, and these are AI engagements alone. So there is very strong customer endorsement and satisfaction. Second, we went ahead and trained 100% of our customer-facing teams in AI technology so that they can articulate the value a customer will get by deploying AI in a particular way. We also trained more than 180,000 associates in higher-order AI skills. And we conducted the largest hackathon in the world to prove the point that TCS can scale and help our customers scale: 280,000 associates first ideated and then built, with close to 175,000 builds done within a short period of about three to four weeks. All of this has been recognized by the market.
Fourteen of our engagements in the last year have been recognized as best-in-class, and we appear in the leaders' quadrant in eight of the eight reports that have been published. I talked about platforms such as ignio, WisdomNext, and CapDot AI; collectively, more than 200 platform implementations have happened. So what we are seeing at our customers gives us confidence, and is the first proof point that we are on the right track. The second is from a revenue perspective: our AI-related services have garnered total revenue of $1.5 billion annualized. As I said, 54 of our top 60 clients use TCS for AI, and 85% of all clients greater than $20 million leverage TCS for their AI work.
Based on the success we have had in the market with our customers, our quarter-on-quarter growth in AI alone is 16.3%, and that is reflected in every service line: in each one, AI revenue is growing significantly. Whether it is BFSI or life sciences, you see strong growth across the board. I said we are reimagining every service we deliver, and that reimagination is creating growth acceleration for every service line. Today, what we can call the non-traditional or new-age services, taking ADM, testing, and BPS out, constitute almost $11 billion of our revenue, and all of them are growing at a rate higher than the TCS average.
All of this put together is another proof point that gives us confidence we are on the right path. We are reimagining the services, and our customers are acknowledging what we are doing; TCS continues to be very well recognized and accepted by its customers directly. Take the Whitelane survey, which is conducted by an independent agency: for the 12th year in a row in Europe, TCS came in as number one in customer satisfaction. Our own internal customer satisfaction scores are above 94% and keep increasing. Here is another very interesting statistic: recently, Newsweek in the U.S. published a list of the most reliable companies, and TCS was globally the number one tech services company on the list, ahead of many peers and competitors.
And of course, we continue to be a top employer with one of the best retention rates in the industry, and Fortune ranked us as a most admired company. So, as I said, our customers trust us; we see that validated in the revenues and in service line and industry growth; and our employees continue to be delighted by the investments we make in them. The last aspect is why we are uniquely positioned to go through this transition: because, as I said, this transition is going to require a lot of discipline, and it is based on the investments we are committing. For a long period, as all of you know, we have been a benchmark on margin.
Our cash conversion has consistently been over 100%, with the best return on equity, and Samir has $6.3 billion to spend as well. With that, I just want to conclude and let Aarthi, Samir, and Mangesh talk about the different aspects of our strategy, after a high-level summary. First, the shift from digital to AI is a huge opportunity for enterprises, and particularly for TCS, given the deep partnerships we have with our customers and the deep investments we are making. We have a very well-differentiated strategy, we are committing our investments, and that strategy, based on these five pillars, is, we believe, well validated by many of our customers.
Our other enduring partnerships are also very important, because the contextual knowledge gained with these partners over the years helps us build solutions and make AI real for our customers; that enduring partnership is essential in converting an idea or a technology into a business value proposition. And of course, with strong, robust financials and execution discipline, we all very strongly believe TCS is best positioned to gain leadership in AI. I will be happy to take questions after Aarthi, Samir, and Mangesh's presentations. Once again, I want to thank you all for being here. Thank you.
Thank you, Krithi, for that very insightful keynote. Next on the agenda, I would like to invite our COO and Executive Director, Aarthi Subramanian. Over to you, Aarthi. Thank you, Nehal.
Thank you, Krithi, for setting out our aspiration and the five pillars that are driving our transformation in the company. As part of my session today, I will cover those five pillars, starting with the first. As Krithi said, the first pillar is TCS's own internal transformation, which we have codenamed TCS to the power of AI. As part of this transformation, what we are looking at is: how do we make every TCSer, senior or junior, in every role, an AI practitioner? How do we create an AI-first mindset, as Krithi alluded to earlier, in every employee in the company? That is the focus of this internal transformation initiative, and to achieve it, we have made some big investments.
We have been working on this, as Krithi said, since the ChatGPT moment, initially making AI available to select employees. But in the last six to eight months we have scaled up, and today we have what I believe is one of the largest AI infrastructures available to employees: 600,000 TCSers have AI at their fingertips. This includes access to all the models, to the coding assistants and other tools, and to all the hyperscaler AI tooling with which we actually build solutions for our customers. These tools are available in a safe, secure, on-demand fashion to all 600,000 employees. Having made this infrastructure available, the next question was: how do we get all these employees engaged and put it to use to the maximum extent?
So we ran what we call the world's largest hackathon, the TCS to the power of AI hackathon. It is the largest because more than 280,000 associates engaged during the hackathon. We ran it in two phases from August to September: ideate with AI for four weeks, and build with AI for another four weeks. Those 280,000 employees created more than 500,000 submissions: about 330,000 ideas and more than 170,000 builds. In four weeks, people actually took an idea, brought it to life, and built a solution, all on the platform powering the TCS to the power of AI hackathon. It was also the most inclusive hackathon because, as I said, we want to make every TCSer an AI practitioner: all of us, including the senior leaders in the room, and HR, finance, every function, participated.
Obviously, all the engineers and the development community were very much part of it, but we wanted to make sure that people, senior and junior, in functions like finance, who had never coded in their lives, really got hands-on with AI. It became the most inclusive hackathon from that point of view. Another thing we are very proud of is that it was also the most innovative hackathon. Evaluating 500,000 submissions with humans would have taken 90 people 12 months; instead, we put AI to the task and built a model that evaluated the entries, and all 500,000 submissions were evaluated in three weeks. We then had a big recognition event where the employees who made outstanding contributions and generated great ideas, for our customers and for internal consumption, were recognized and felicitated by our CEO.
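Purely as an illustration of the kind of pipeline described above (TCS's actual evaluation model is not public, and the rubric dimensions, keyword heuristic, and function names here are invented stand-ins for an LLM judge), an AI-assisted triage over a large pool of submissions might be sketched like this:

```python
# Illustrative sketch only: ranking hackathon submissions against a rubric.
# score_dimension is a hypothetical keyword heuristic standing in for a call
# to an LLM judge; the rubric dimensions are also invented for the example.
from dataclasses import dataclass

RUBRIC = ("novelty", "business_value", "feasibility")

# Hypothetical signal words per rubric dimension (placeholder for model scoring).
KEYWORDS = {
    "novelty": ("new", "novel", "first"),
    "business_value": ("revenue", "cost", "customer"),
    "feasibility": ("prototype", "built", "deployed"),
}

@dataclass
class Submission:
    sid: str
    text: str

def score_dimension(text: str, dimension: str) -> float:
    """Fraction of the dimension's signal words present (stand-in for an LLM score)."""
    words = KEYWORDS[dimension]
    return sum(1 for w in words if w in text.lower()) / len(words)

def evaluate(sub: Submission) -> float:
    """Average the per-dimension scores into a single ranking score in [0, 1]."""
    return sum(score_dimension(sub.text, d) for d in RUBRIC) / len(RUBRIC)

def shortlist(subs: list[Submission], top_n: int) -> list[Submission]:
    """Rank all submissions and keep only the top N for human review."""
    return sorted(subs, key=evaluate, reverse=True)[:top_n]
```

The point of such a design is that the model compresses half a million entries into a shortlist small enough for human judges, which is what turns a 90-person-year task into a three-week one.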
The hackathon we ran in August and September was a large one, but hackathons are continuing on a regular basis: different themes and different functions keep running them as an ongoing activity. We also wanted to bring teamwork into the hackathons, because a lot of the time you could be sitting at your desk ideating and building alone. So we launched something called TCS AI Fridays, for which we have created AI Friday labs: physical infrastructure in every delivery center. We have brought the best AI mentors in each location together to run AI Fridays, which help our teams solve problems and come up with ideas, but most importantly discover the power of this technology, and their own capabilities, by building solutions. Every Friday there is a four- to six-hour gamified AI Friday hackathon, and it is competitive: people ideate from Monday through Thursday, then come together as self-organized teams, build, and present to a jury, and on the spot they are awarded for their creativity and innovation. And the teams include not just the AI-native talent.
We actually bring senior people with 15, 20, or 30 years of experience, some still hands-on, some not, together with the AI-native engineers, some of whom you met when you went through the immersion today. What this is doing is blurring and bridging the gap between seniors and juniors, between expert and novice, and that is what we are trying to do internally. This is a movement that will continue, because the technology is changing every day. The second part of TCS to the power of AI is building AI-first solutions, making TCS the best showcase of AI internally, for the industry, and in front of our customers. We are driving this across two areas. The first is IT, TCS's own internal IT.
How do we disrupt how we develop software? How do we disrupt how we run our service desk and our IT operations, whether infrastructure operations or application operations? We have set a very ambitious goal to disrupt how we use AI in our IT. 97% of our developers and engineers have access to coding assistants, and we are well on our way to driving productivity improvements; there are portfolios where we are already seeing 20% to 30% gains. We are also looking beyond coding at how we can disrupt the entire software engineering lifecycle: testing, again with high levels of productivity, and generating Figma designs and UX with AI, areas where higher productivity is already being seen. We are creating our own showcase, and we are also learning from the work we do for our customers.
The second aspect is how we put AI to use for our own business functions and business users: in HR, how do we disrupt learning and hiring; in finance, procurement, and every other function in TCS, how do we disrupt it with AI? Jana took over as our Chief Information Officer in July, and he is driving the entire internal transformation within TCS. Let me show you an example of how we are creating disruption in learning. As all of you know, over the decades, over the last 20 or 30 years, TCS has always invested in learning for our employees, and over the last many years we have put together a very strong learning infrastructure that provides on-demand learning anytime, anywhere, on any device.
Top-notch content is available to our employees, with clear pathways for taking your capability to the next level. But we did not want to stop there. With this new form of AI, generative and agentic AI with reasoning available, we have disrupted learning within TCS. What we have built is an N-equals-one personalized coach: a learning coach available to every TCSer. What does it do? In the video playing out there, you see a TCS employee interacting with the AI avatar, the learning coach, to understand how containerization works in the context of microservices. We have taken all the learning content we have and made it available to the AI model that the avatar is consuming.
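The core of such N-equals-one coaching is a gap analysis between where an employee is and where they want to be. As a toy illustration only (this is not TCS's implementation; the skill names and the proficiency scale are invented for the example):

```python
# Toy illustration of N-equals-one gap analysis: compare an employee's
# current skill levels against their learning goals and order the gaps so
# the coach addresses the biggest gap first. Skill names and the 0-5
# proficiency scale are invented for this example.
def learning_plan(current: dict[str, int], goals: dict[str, int]) -> list[tuple[str, int]]:
    """Return (skill, levels_to_close) pairs for unmet goals, largest gap first."""
    gaps = [(skill, target - current.get(skill, 0)) for skill, target in goals.items()]
    return sorted((g for g in gaps if g[1] > 0), key=lambda g: -g[1])
```

In a real coach, the plan would then be matched against the learning catalog and refreshed as the employee's assessed levels change, closing the feedback loop described earlier.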
So in some ways, the AI avatar is an all-in-one SME: it understands all domains and all technologies, and can also coach our associates on soft skills. The key differentiation is that it has a complete understanding of the employee's current skill level and of their aspirations and learning goals, and then works out how to bridge the gap. That is where the N-equals-one coaching comes into play, and traditional AI, generative AI, and agentic AI have all been put to use. This is in deployment, and we are seeing great results. Let me now move on to the next pillar, which is a very important one. AI today is a hugely promising and powerful technology, and it is disrupting every industry.
It is a huge opportunity, but it is also creating disruption in our own industry. So we are looking within and asking: how do we use AI to disrupt every service that we deliver to our customers? This is where we have appointed Amit Kapoor, who is here, as the AI and Service Transformation Officer. We are looking at all service lines and asking how we disrupt each of them for the new world of autonomy. Across all services, not just in the last few years but over decades, automation has always been central to how we deliver; with AI, we are shifting from automation to autonomy, and that is where we are looking at the human-plus-AI autonomy model.
To give you a very quick view: when you look at our services, whether infrastructure services, application services, or all the services around business, across all these service lines there is a huge opportunity to disrupt how we deliver to our customers. To do this at TCS scale, and to do it consistently, we asked: how do we deploy autonomy at scale? The first thing that came to mind when thinking about autonomy was ADAS. All of us are familiar with autonomous cars, so we drew inspiration from the five levels of autonomy defined for them: there are level three and level four cars internationally.
We have level two cars here in India. So we asked, why not create our own TCS services autonomy model, inspired by the autonomy levels we see in vehicles? This is the model we have created: a five-level services autonomy framework, which we are consistently deploying across every service line. What you see here is the framework, and it has been instantiated for every service: development, testing, production support, an SAP, Salesforce, or ServiceNow implementation, or an autonomous GBS. What this gives us is a blueprint to think with and deploy consistently; more importantly, it becomes an assessment framework when we engage with our customers.
How do we find the customer's current maturity on this model, for any service line? This is becoming a cookbook for us to go to a customer and say: using this framework, you are at level three, or you are at level one, and this is how we can take you to level two. On development, on coding assistant usage, you are at level two; we can take you to level three, and this is what it takes. So let me give you an example of how we have put it to use in the application development and software engineering context. As you can see, just like cars, a level five car is not on the street. So even here, the level five and level four technology capability may not be available yet.
But we have created a blueprint of what the art of the possible is. And for levels one, two, and three, where the technology is available and those levels of autonomy can be put into execution in our own internal context and in customer contexts, that's exactly what we are working on. The first level, as you can see in a development context, is coding using a general-purpose LLM: lower accuracy, lower context. When you move to level two, you generate better productivity because you're using purpose-built coding assistants, whether it's Windsurf, GHCP, or any of those tools. And level three is where the customer context starts flowing in: the entire enterprise knowledge of the customer comes into play, and you start building agents that understand the customer's code base and knowledge.
That's when you get the next level of autonomy and, obviously, a higher level of productivity, where AI can start autonomously executing some of the software engineering tasks. Just to bring this to life with a few examples: for a large customer in Asia-Pacific, the customer already had very good maturity, was at level two, and had made significant investments. We worked with the customer, used this framework to take them from level two to level three, and helped generate higher-order benefits: 30% productivity improvement. For a global consulting firm, we again took them from level two to level three, with 25% productivity benefits. The third one is a very interesting example where we were already executing the project: a large application development portfolio that we deliver for a large aerospace OEM.
We proactively went in and disrupted it ourselves; Krithi talked about doing the right thing, an AI-first culture, cannibalizing our own revenue. We went ahead and proactively deployed coding assistants and moved them from level one to level two, delivering a 20% benefit. We are now on our way to taking the customer to higher levels of autonomy. What you see here is a quote from this customer, which shows how proactively we went in and delivered savings. Let me move to the third pillar of our transformation agenda, which is building a talent model that is future-ready. Here we are looking at three pillars. First is building future-ready skills, AI fluency at scale: how do we train every employee to work alongside and with AI? Over the last two to three years, we have made significant investments, starting once GenAI came onto the scene.
As of today, pretty much all employees, 580,000 people, are AI aware. But AI awareness is not enough; it's necessary but not sufficient. So over the last two years, we have been driving higher-order AI skill development, and today 180,000 TCSers possess higher-order AI skills. This number was 80,000 at the end of last year, so we have almost doubled our capability building when it comes to higher-order skills. The other element is AI-native fresh graduates. This is a very exciting time because these fresh graduates are truly AI-native; they know how to use AI very organically. And we have actually doubled our intake of fresh trainees from universities. What we are finding, and you met some of these trainees today, is that these people really know how to treat AI as a teammate.
That's a big learning for us. That's where the seniors and the juniors coming together, the fusion that I talked about, is really helping create that AI-first culture. The second element of our transformation of the talent model is role transformation. As I was telling some of you before we started today, every role in the company is changing because of AI. Every customer conversation is an AI conversation today. How do we drive AI-centric customer conversations, whether it's from sales to advisory or from solution to delivery? Every role in the company has to become AI-first, AI-centric. We have a program called AI Dojo, which we have rolled out to all our sales, solution, advisory, and delivery teams. This has been done at scale.
This is the starting point, and it is something where we need to continuously engage and keep honing the skills further. Another important thing here is that AI is also introducing new roles. We have Rapid Build engineers and Rapid Build leads. So we are also looking at new roles that are becoming very relevant for the future. The third pillar is future-ready hiring. Here, we are looking at securing top talent. Over the last year, we have doubled down on advisory and consulting talent across the big bets, and I'll talk about big bets in a little bit. These are areas like cybersecurity, Enterprise Solutions, and cloud. We are hiring a lot more advisory and consulting talent and positioning them closer to customers. And if you look at our experienced hires today, more than 50% of them are coming in with next-gen skill sets.
So that's the talent transformation that we are driving, and we are well on our way to making TCS ready for the talent transformation required for our AI future. The fourth pillar, an extremely important one, is making AI real for our customers. I talked about how we are driving our own transformation with AI as part of the first pillar; it's a very similar transformation that we need to drive for our customers. Most importantly, it's about helping them deliver value with AI deployment across their industry-specific needs and their common cross-industry needs, whether it's HR, procurement, finance, or customer service, and, above all, helping them scale with AI. And when you look at AI adoption in our customer base, if I look back at the last three years, 2023 was a year of experimentation. GenAI had just become available.
People wanted to know what the promise and the power of the technology were. In 2024, we started to see a lot more scaled experimentation, with some projects going live. But 2025 has been different. Generative AI, along with agentic AI and the reasoning models that came into play in early 2025, is really creating the tailwind for us to deliver solutions that can reason, solutions with more intelligence baked in, solutions that can make better decisions. So let me talk through how we are working with enterprises to make AI real and take AI close to customers. When I reflect on our conversations with customers, there are dual priorities, and they are not sequential; they are parallel.
The first is that customers want to get ready for AI, and the second is that customers want to lead with AI. If you look back, the digital transformation started sometime in the early 2010s with cloud, mobile, enterprise systems transformation, cyber, and everything else. But the technology debt that exists in infrastructure, enterprise core systems, and data foundations is still there across enterprises, so there is a good amount of work that we continue to do. If you look at cloud adoption, it's still at 35% across the globe. So there is significant work across cloud, data, S/4HANA transformation, Salesforce, and ServiceNow. Across each of these enterprise solution implementations, there are many transformational opportunities that we are engaged with our customers on. Cybersecurity, again, is a top area of focus across the board.
So a lot of our service lines, what Krithi referred to as our next-gen services, are focused on driving that customer transformation to get them AI-ready. On top of this, we are working on helping our customers lead with AI. How do we work with our customers to help them realize value and shift from use cases, pilots, and experiments to real projects that deliver ROI at scale? That's what we are working on. And how are we doing this? Across three areas. First, we engage with our customers to help them innovate with AI, build with AI, and scale with AI. And here, it's all about how you create value chain impact. Value chain impact could be vertical-specific or horizontal, like I said earlier.
But the whole focus is to drive adoption across the value chain and create value chain impact. Let me talk about each of these. This morning, in the immersion session, you experienced what innovate with AI really means. With many of our customers, whether they are visiting our offshore delivery centers, we are meeting them at their premises, we are hosting them at our AI experience zones, the Pace Ports, or even when we are responding to an RFP, we are bringing rapid builds into every customer interaction. We engage customers to help them identify which challenges are worth solving and how they can be solved differently with AI, completely shifting the narrative from presentations to a real build experience.
This is something we are doing across the board, and it is resonating extremely well because it helps customers understand how they can use AI in their own context. The second area is where you shift from ideas to execution. We have created a rapid build methodology, which is all about taking a problem and solving it: this is the metric you want to move the needle on, and working outcome-backward, you solve the problem with AI in a short period, anywhere from 8 to 16 weeks. This is what we call the rapid build approach. What it typically entails is bringing together a rapid build squad of AI-native trainees and people with contextual knowledge from our customer accounts. That's a great combination.
The contextual knowledge and the AI skills together are the secret sauce for really doing that build. One important thing to understand is that while you can do the rapid build in a day or two, what takes time is integrating the solutions with the data and systems within the customer's environment. And once we deliver the rapid builds, have proof points in production, and the customer sees value, the next obvious question is: how do I scale AI within the enterprise? This is where you will hear from one of our customers later today about how we work with customers to put the AI architecture in place, make the technology selections, and stand up the AI platforms so that they can build and go live at scale. And here, we have made our own investments.
Some of the platforms that Krithi talked about are the AI platforms that we have built and are taking to our customers. On top of those platforms, we are creating TCS-owned, industry-specific agent marketplaces across different vertical domains. For example, in manufacturing and BFSI, as two cases in point, we have identified 100-plus agents that we are creating; more than 30 to 40 of them are already live, and some are already being deployed in our customer engagements. The other important aspect is safe and secure AI, and AI that delivers value. The AI office is about setting up program management to put responsible AI in place and to track ROI from the AI projects that we deliver. The last one, again, is a very important part of scale.
This is about creating AI labs, which are essentially rapid build factories. After a successful build with AI, many customers want us to go ahead and set up a factory, where the multiple problems we want to solve, the multiple projects that have been jointly identified, become the pipeline for the factory, and multiple rapid build squads come together to deliver value. But beyond the AI labs, we are also deploying rapid build teams into our existing customer projects. So embedding rapid build teams as well as setting up AI labs: it's a dual approach that we are taking to help customers scale AI. Now, very quickly, you're going to see many of these examples when you visit our executive briefing center later today.
The first one, on innovate with AI, I think Ashok already covered, but it is a very unique one because we engaged with the CEO and his direct reports and ran an AI innovation day, generating high-impact ideas that had the endorsement of senior management. This led to a subsequent AI innovation day that ran for two days with 200 people from the customer service organization, and eight high-impact ideas have been put into motion. The second example I would like to share is on build with AI. We are seeing a lot of opportunities to create value in industry-specific value chains. But another big area where we are seeing AI come in is modernization. GenAI has this unique capability to really understand the old and generate the new.
So tech modernization projects that were earlier put aside because they were capital-intensive and would take a very long time are now becoming big opportunities for us. You can see here how we delivered an integration modernization from TIBCO to Java: a rapid build, three weeks, delivered, proof points shown, which then led to scaled modernization projects. The other example is an electronics OEM, where we built a site inspection solution to expedite construction. This is a great example of physical AI, which our teams will talk about. The first MVP that we were able to put into production was done in eight weeks. The last one, on scaling with AI, is the UK high street bank; that customer is going to be talking to all of you today.
So I'll not cover that here, but it's a great example of how we have partnered with this customer to scale AI across their enterprise. And the last one is, again, an interesting example of scaling AI for modernization, where we are working with a payments tech provider on mainframe modernization. This is a huge amount of work: 50 million lines of mainframe code being modernized to a completely new microservices stack. It's a great example of humans and AI coming together, with TCS bringing deep contextual knowledge of mainframes and of the modern tech stack. The last pillar is the AI ecosystem play. I will cover the partnerships, and then Mangesh will come in later and talk about M&A and how we are setting up new ventures.
Partnerships, as Krithi said, are an integral part of the business that we operate in, and we are doubling down on partnerships. Let me explain what I mean by that. We are looking at partnerships across four areas. The first two are enterprise partnerships and domain partnerships. These are long-standing partnerships we have had for decades, but what is new is that all these partners are infusing AI into their products. So now, when we are solutioning or upgrading, we need to leverage the AI features and functionality that are coming out of the box, and we are working with these partners to expand the centers of excellence and really build AI competencies in these products. That's our focus on the first two pillars. We are also expanding our partnership ecosystem to cover deep tech partnerships: Anthropic, OpenAI, NVIDIA, Mistral.
All of these are new partnerships that we have set up, and we are working closely with them to build competencies. The fourth pillar is also an important element: AI-native partnerships. There are many AI-first solutions available that can be plugged into the overall solutioning we do for customers, so this is, again, an area where we are investing in building partnerships. In summary, partnerships are pivotal to how we work with our customers. We are doubling down and building the required competencies for the future with our partners, in terms of capabilities, certifications, and everything else. And I would like to leave you with two examples.
Take NVIDIA: our manufacturing teams have worked with NVIDIA, and we have built 12 industry solutions. These are all innovative solutions that we are already taking to market with our customers. And in our meetings with NVIDIA, what we have heard is that TCS is the gold standard in manufacturing NVIDIA solution development. We have built platforms. So really pioneering work by our manufacturing teams.
The other example is Google. Even before their launch this year of Google Gemini Enterprise, we started working with them ahead of time on their A2A agent-to-agent protocol. TCS teams have done really pioneering work with Google, starting quite early, and we continue to partner with these companies. They are an essential part of the solution and integration work that we do for our customers. So with this, I'll wrap up and hand it back to Nehal. Thank you.
For those of you wondering why it isn't Nehal up here, my name is Balaji, and I work with Krithi in the CEO's office. We have a very special session today. You heard our CEO talk about building multi-decadal partnerships with our customers and the deep trust that they have in TCS. You also heard from Aarthi in detail about how we make AI real for our clients. We have a special guest joining us today from London, Mr. Ranil Boteju.
He is the Chief AI Officer for Lloyds Bank. Lloyds Bank, as all of you would know, is a premier bank globally, and especially in the U.K., and has been a pioneer in embracing technologies well ahead of their time. TCS has a very strong partnership with them, right from the early digital days through the cloud transformation and now with AI. May I request the live feed from London, please. Hi, Ranil.
Hey, hi, how are you? Can you hear me okay?
Yeah, we can hear you okay. And we have our senior management and our guests here in front of you. I would like to do a quick introduction, Ranil, if that's okay with you, and then hand it over to you.
No worries.
Thank you. Ranil has over 25 years of global experience. He spearheads the bank's data and AI strategy, overseeing cloud data platforms, machine learning products, ethical AI frameworks, and data literacy initiatives. Prior to joining Lloyds, Ranil held significant senior positions at major institutions such as HSBC, Standard Chartered, Vodafone, and the Commonwealth Bank of Australia. Additionally, Ranil contributes to the U.K.'s Information Commissioner's Office as a non-executive director, where he plays a crucial role in data privacy and transparency governance. Ranil's leadership is pivotal to Lloyds' transformation into a data-driven organization with a strong emphasis on operational efficiency, customer experience, and responsible innovation. Without further ado, Ranil, the floor is yours.
Thank you very much. So look, I would really like to share my experience with TCS in terms of the data and AI transformation I've been delivering at Lloyds Banking Group. I've been at Lloyds Banking Group for about four years now. When I started, I embarked on a very comprehensive transformation of data and AI, and I want to share with you how TCS has been a partner along that entire journey. Obviously, prior to my starting, TCS already had a very long relationship with Lloyds Banking Group. In the context of my role, I'll share with you the data and AI story.
Really, the first step coming into Lloyds Banking Group was a real focus for me on building much more modern, up-to-date foundational data capabilities. That work was predominantly about migrating a significant number of on-premise data sources, Cloudera Hadoop and Teradata, very significant amounts of data. We partnered with TCS to migrate that to public cloud. Really, our relationship was very much about leveraging two things. Firstly, proven experience from TCS.
In terms of the skills and experience they had from similar migrations to public cloud, we were able to tap into experts. And then, similarly, really building out the team. We had to scale very quickly; there was no way we could have built up our own team to do this work internally ourselves. So the partnership was very much about bringing us the skills, the IP, the thinking, and the know-how to supplement my own team, as well as the really skilled engineers to do the work. That was a very difficult program of work, but we managed to get through it over 2022 and 2023. And once we had a solid foundation of data on the cloud, my next focus was very much on scaling the new thing at the time, which was generative AI.
One of the things I did was set up an AI center of excellence and a whole set of AI platforms. And again, we wanted to leverage specific skills from TCS. So we had a very senior-level engagement, almost CEO to CEO, where we said, "Look, we want to get access to your best people." And we got access to skilled engineers and skilled AI developers from TCS, who helped me form my AI team whilst I built my own. More importantly, they helped build out our initial generative AI workbench, which we call Cortex. That's now up and running. We set some really strong goals for ourselves: by the end of 2025, we wanted to deliver at least 50 generative AI use cases in production. We've actually overachieved on that.
This year will end with 57 use cases in production. TCS has been very much part of that journey, right? Essentially, it's the access to skilled resources and your knowledge of the products we work with. We work with both Google and Microsoft, and per the previous presenter, you have strong, proven links with both of these hyperscalers. We're able to tap into that knowledge, and secondly, you've really helped us build the team, so that's been tremendously successful. Where we are now, though, is a very significant pivot to agentic AI. We are really scaling up our agentic AI capabilities, and again, we've had to lean very heavily on TCS, because agentic AI is new for everyone. We've had to access TCS skills on the engineering side, AI developers, but also in areas like responsible AI. I have a responsible AI team.
It's stacked full of PhDs, but I still need access to outside thinking on what others are doing, and again, we've been able to secure some very skilled colleagues from TCS to help us do that work. So look, in summary, over the last four years of our data and AI transformation at Lloyds, TCS has been pretty critical to building the foundational capabilities on public cloud. Secondly, when we pivoted to generative AI, standing up our AI center of excellence and the AI platforms, we really partnered, and I would call it a true partnership, with TCS. That was very successful; we've ended the year with more than 50 use cases live in production. And in our latest pivot towards agentic AI, again, we're going to lean very heavily on TCS, so it's very exciting. What I would say, though, is I'm really excited about our partnership with TCS.
And that goes for myself, all of my colleagues at Lloyds, all of the CIOs, even our CEO. I've talked a lot about the skilled resources you have and the ability to spin up teams very quickly; that is definitely something we value. But more importantly, we see the culture at TCS as the thing we like the most. Whenever I'm in India, I will spend as much time with the TCS team as with my own Lloyds Banking Group team. They are very much part of the team; we treat them like the team. There are no real boundaries as far as I'm concerned. And just that culture of coming to us with new ideas, suggesting things, has been incredible for me personally, but also for our AI progress at Lloyds Banking Group. So that's kind of the summary.
We're really excited about the continued innovation with TCS. Some of my colleagues were in India recently, and we've seen a lot of the new off-the-shelf agentic AI solutions, so again, we're very keen to explore those as we continue to build out our own capabilities. So look, I'm really pleased about the direction of travel with TCS. We've achieved a lot together, and we're very excited about continuing to work together as we transform Lloyds Banking Group with agentic AI. Those are the main things I really wanted to cover.
Thank you, Ranil. You've been very kind and generous with your time today. With that, I would like to thank you and hand it back to Nehal. Nehal, on stage, please. Thank you very much, Ranil.
No worries.
Thank you, Balaji. I would now like to invite our Chief Strategy Officer, Mangesh Sathe, on the floor. Over to you, Mangesh.
Yeah, good afternoon, everyone. Before I start, just an interesting observation. I tried chatting with an LLM and asked whether I should wear a tie for this event. I told it that it's an analyst day, and it said I absolutely need to wear a tie. But if I look at the room, I think the LLM really requires more training, right?
Context.
Yeah, I think if you see what we have covered till now, we have covered our aspiration, right? Krithi has laid out what we are all set to achieve. We have talked about the five pillars. Today, I'm going to spend a few minutes on the fifth pillar, the AI ecosystem and the approach we are taking to build that ecosystem. If you see the infrastructure to intelligence stack, the AI stack that we have outlined, there are multiple layers. Krithi covered that there are a lot of investments happening across all these layers.
A lot of changes are coming in these layers. Today, customers look to TCS, or a partner like TCS, and say, "Look, I really need someone who can make sense of all the layers of the stack. I don't need a partner who comes and only talks about a particular layer. I want somebody who can really understand the entire stack, provide services across it, and navigate it so that we can really create some impact."
We talked about Lloyds Bank and how we have created impact there. Aarthi talked about autonomous cars. So if, let's say, a car company comes to us and says, "Look, we want to build an autonomous product," then as TCS, if I have to help the company end to end, I'll have to start with the chip: what does the chip need to look like, what should its spec be? Then I need to cover the AI infrastructure, because every car is going to generate a lot of data. How do you manage that data? How do you deal with the latencies required for a fully autonomous product? And then we get into the models, the applications, and the whole digital layer of interaction.
So as a company, if I have to add value in a case like this, my ability to understand this stack fully is very, very critical. One of the core elements of our strategy is therefore to have a meaningful presence across all these layers. Now, to do that, we are of course doing a number of things organically on our own: we are building products and AI platforms, and we are building and transforming services. But in addition, to augment all of this, we feel we need to do what we are calling build, partner, and acquire, because that is going to help us accelerate this journey. This whole aspiration has to be achieved soon, so we really need to use these levers as well.
So I'm just going to spend a minute on each of these. The first one is the build part. Now, this is one example of a build; it's not the only build we'll do. Building can also be around capabilities and services. We have taken the example of a venture that we are building: the AI data center that we announced a few weeks back. The rationale is, of course, the huge requirement that you see on the left. India currently has only around 1.7 gigawatts of capacity, while expected demand is 10-12 gigawatts. More importantly, if you see the middle column, the type of customers we are targeting for this AI data center are hyperscalers and AI companies. These are our primary targets.
And then, of course, we will also work with public sector and private sector customers. Now, the reason we felt the need to address this requirement is that the requirements are very unique. Hyperscalers tell us that if they take an AI data center, it also needs to work with all the other non-AI data centers they may have in the region, so latency becomes very important. The kind of performance they expect from this data center is very different from what exists in the country today. Similarly, with AI companies, while the default expectation is to look at India as a destination for inferencing, we are also working with them to ask the question, "Can I also look at India as a base for training?
What can it do?" In addition to whatever they are doing in the U.S. and other geographies, can we look at India as a geography for training? For the public sector, of course, it's sovereignty; a lot has been said about that. Sovereign AI is a big requirement, along with AI governance: how do you govern the infrastructure? And the private sector, as in the example I gave, wants somebody who can come in and do the full stack.
But more importantly, apart from just providing services across the stack, you also need to provide them at a low TCO. Today, in the initial stages, maybe people are not asking so much about the spend, but this is going to become very critical soon: once billions and trillions of tokens are being generated, cost becomes a very important factor. And lastly, the whole AI-led transformation: how do you deal with all these layers of the stack and deliver a very meaningful transformation? So for us, the rationale was simple. We feel there is a unique opportunity to position ourselves as a one-stop shop for all AI services. We also feel it is a great opportunity to deepen our partnership with hyperscalers, AI companies, and the ecosystem per se.
We also want to leverage all our deep domain expertise. The examples given earlier showcase a lot of depth in the industry-specific and function-specific domain expertise that we have; how can we translate that? And lastly, of course, this is a high-growth segment, so whatever expertise we build in building our own data centers, we can start offering to other data center companies across the globe. So that's one example of what we are building. The plan is to build at gigawatt scale. We have announced a partnership with TPG, and we are going to fund it through equity and debt. The second part, after build, is partner. Aarthi has already covered a lot of it, so I'll highlight the middle portion first: when we say a 360-degree partnership, what do we really mean?
Look at these three parameters. First, the whole aspect of mutual services: I offer some services to the partner; we take some services from the partner. That's one element that really creates a very tight relationship between the two entities. But more important than that, how do we work together in driving growth? You can start with the regular go-to-market kind of initiatives. But here, with some of the logos that we have put up, we are also working with them to define unique initiatives where we can penetrate a certain industry segment or a certain use case or a certain function. How do we really take that to the market in a very unique fashion? And lastly, the innovation and the co-development that we can do with these partners. So a lot of examples. NVIDIA, Aarthi spoke about.
Similarly, with Google, we are now defining how we can build industry-specific workflows. So there are a number of things that we are doing with each of these partners, which can help us build a tighter coupling with the partner as well as take something unique to the market, yeah? Lastly, on the acquisition part, the core focus of M&A is centered around capabilities. We really want to ask the question strategically: what capabilities can I build, what capabilities do I need to acquire, and in some cases, build and acquire together? Some of the examples we have put up: in the big bet areas, there are certain capabilities we want to acquire. And across the service lines, advisory is a big area that we want to build and, in a way, acquire capabilities for.
Similarly, if I look at deep domain or service line capabilities, all the big bet areas that were talked about become focus areas for us. Those are the growth areas for us in the future. And then market access: as you start seeing the overall technology market, you'll have to start going geography by geography, and within a geography maybe sector by sector, to really ask the question, "Where are the penetration opportunities, and how do I now go after them aggressively?" One case in point here is the two recent acquisitions we have announced in the Salesforce space, ListEngage and Coastal Cloud. The reason we did these back to back and kind of did them together: one, it helps us really bolster capabilities across advisory, implementation, and managed services. It helps me complete the entire capability set.
Advisory services give me multi-cloud. All the modules within Salesforce get covered. It helps me deepen my partnership with Salesforce, because the companies that we have acquired have partner advisory board positions and are Summit partners, so they have a very deep connect with the Salesforce ecosystem. It helps me cover market segments. And lastly, of course, it gets me 500-plus talented individuals in this particular space. If you look at the map, the top line is essentially the combined entity where we put all three entities together. And below that, you see how TCS, ListEngage, and Coastal Cloud individually fare on the capability map across the various modules within Salesforce, right?
So from services all the way to industry clouds, you will see that we now have the full set covered. And it helps us strengthen the entire platform to pursue a larger aspiration in this particular space. Also, market coverage: we have covered both the large enterprise and the mid-market, as well as the sectors. So the coverage aspect is addressed very well. And this is the approach that we will be using for M&A, and even for partnering: a unique combination of acquiring and building. I think we have just started. We'll look at other areas as well. Yeah, thank you.
Thank you very much, Mangesh. I would now like to invite our Chief Financial Officer, Samir Seksaria. Samir, the floor is all yours.
Thank you, Nehal. And good evening, everyone. Through the day, through the immersion sessions which we had, when we heard Krithi, Aarthi, and Mangesh speak, and probably from the demos also which we are going to see at the EBC, you could relate to the fact that we are doubling down on our aspiration to become the world's largest AI-led tech services company. And towards that, we are focusing on two key things: one is execution rigor, that is, delivering real customer outcomes, and the second is talent transformation, which will ensure we stay ahead of the curve. And we have called out our five-pillar strategy, the framework which will help us tap this opportunity.
And as you can see, we have been breaking new ground over the past couple of months. We are shifting gears quite rapidly in this fast-paced era. If you look at the tech cycles, they have been compressing; Krithi also alluded to it in his opening keynote. And in that context, gaining an early-mover advantage is important and a competitive advantage. Unlike in the past, when competitive advantage came only through acquiring capabilities, in the current day and age it also depends on how fast you are able to take those capabilities to the customers and how rapidly and efficiently you are able to scale them. Our investment approach, keeping that in mind, is towards balancing innovation speed with scalable, profitable growth.
And that is what should position TCS as a market leader in AI-driven services. I'm not going to cover the investments, because Aarthi and Mangesh covered a lot on the investments part. But putting it in the framework, our overall strategy is built on three growth engines: build, acquire, and partner. Traditionally, TCS has looked at investing primarily on the organic side. But as we heard through the conversations, it is important that we spread our investment approach across all three of them: build, partner, and acquire.
If you look at build, which is basically driving innovation from within, we have been investing in talent, in intellectual property, in infrastructure, and in democratizing AI across TCS. If you look at our annual spend just on the build part, we spend about $1 billion annually on learning and development, on targeted research and innovation, and on specialized infrastructure for the new services which we have been talking about. That is the existing investment which we currently make, and it doesn't include the business case which Mangesh talked about on the Cyber Vault side. From an acquisition strategy, our focus is on building capabilities, acquiring synergies, and driving high-quality revenue, which will help us build multi-year cross-sell opportunities. And from a partnership perspective, this was covered in detail. To a great extent, the benefits which we get are faster integration in client environments, lower upfront build cost, an opportunity to have a shared investment model, and most importantly, like Mangesh also said, early access to innovation and the ability to build cross-sell pipelines. This was covered in detail in terms of how we want to invest at scale across the stack, from infra to intelligence, and this is where we believe we will be able to deliver exponential value to our clients.
Looking at all the investments which are coming, I'm sure the question which comes to mind is: how are we going to manage the return metrics and the margins? As I talked about, we already invest about $1 billion. Our focus would be to fund the incremental investments partly through repurposing some of the spends, specifically the learning and development part, into the newer-age spends which are required. Second is leveraging our balance sheet. Lastly, there would be areas which will require an offset, and we will offset those through operational efficiencies. Towards that, we'll continue our focus on operational levers, the ones which we have called out over the period: utilization, pyramid, SG&A, etc. We'll also look at prioritizing tools, platforms, and reusable components.
We'll also look at shifting the revenue mix towards higher-value, higher-margin services and focus on delivery productivity. Our intent would still be to make all the investments and, over a period of time, shift towards our aspirational band, the guided band of 26%-28%. Currently, as of last quarter, we were at 25.2%. The next thing which was talked about when we called out some of our investments was: how does it impact our ROE? If you look at the last five years, our ROE has improved from 38% to 51%. And if you look at our peer set of six competitors in our closest range, their average ROE is less than 25%, at 23.6%, and the next best is at 30%. So we are at more than 2x the average of our peer set.
If you look at the data center business case, the way we have been able to structure it, not just the debt part of it but also the variable return which the private equity partner we have announced is going to get, ensures that we are able to participate in and lock in higher returns. That should give us better IRRs as well. With the data points which are available now, the impact of the investments which we'll be making on the TCS balance sheet will be very marginal: roughly not more than $1 billion over a period of seven years, with the annual investment being even less. We are confident we'll be able to maintain industry-leading return metrics. Lastly, what happens on the capital allocation piece?
If you look at the data point for the last five years, we have been returning in the 80%-100% range in terms of capital allocation. Our stated policy, which has been uploaded on the website, so it's all available, is that we'll be returning substantial free cash flows back to our shareholders, and going forward, we'd look towards returning 80%-100% of FCF post all investments. If you look at the period since our listing, 21 years, the IPO was at INR 850, and just the dividends given in these 21 years total just under INR 5,600 for one equity share which was invested, plus there was an opportunity to participate in the four buybacks which we have done.
The capital appreciation itself, which has happened on the baseline, is about 30x. TCS has been and will remain a long-term value compounder. Re-emphasizing the messages which Krithi had, with this closing slide, the key points are that we are entering the AI decade with a strategy which is clear, a team which is prepared, and a balance sheet that gives us optionality without volatility. What plays to our advantage is what we call the TCS advantage: our client trust, our delivery and execution prowess, our talent, and our financial architecture. With that, we should be able to deliver growth with profitability and ensure shareholder returns. That's it from me.
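As a quick sanity check on the shareholder-return figures cited above (IPO at INR 850, roughly INR 5,600 in cumulative dividends, about 30x capital appreciation), the back-of-the-envelope arithmetic can be sketched as follows. This is an illustrative simplification only: it ignores bonus issues, buyback participation, and the timing of cash flows.

```python
# Illustrative total-return arithmetic per original share, using only the
# figures cited in the talk. Ignores bonus issues, buybacks, and timing,
# so this is a rough sanity check rather than an actual return calculation.

ipo_price = 850.0              # INR per share at the 2004 IPO
cumulative_dividends = 5600.0  # approx. INR of dividends over 21 years
capital_multiple = 30.0        # cited appreciation on the baseline

price_now = ipo_price * capital_multiple
total_value = price_now + cumulative_dividends

print(f"Dividends alone: {cumulative_dividends / ipo_price:.1f}x the IPO price")
print(f"Total value per original share: INR {total_value:,.0f} "
      f"({total_value / ipo_price:.1f}x the IPO price)")
```

On these cited numbers, dividends alone return about 6.6x the IPO price, and price appreciation plus dividends comes to roughly 36x per original share before counting the buybacks.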
Thank you very much, Samir. With this, all the speaker presentations are done. We would now like to start the next session, which is the most awaited one: the Q&A session. A small request: kindly raise your hand before asking a question so that the volunteers here can come and give you a mic. And also kindly share your name and organization for the benefit of everyone who's listening to the webcast. I would now like to call upon the management, Krithi, Aarthi, Samir, Mangesh, and Sudeep. Please join us on the floor. Yes, we can now start the Q&A session. Yes, Ankur Rudra, can you please share the mic?
Thank you. This is Ankur from JPMorgan. First of all, thank you for the great presentation; the whole day has been excellent, with a lot more data, so we really learned a lot. Your vision statement of becoming the leading AI-led tech services firm: how should investors evaluate that over time? If you can just highlight which metrics and which drivers we should watch to see the success over the next three to five years?
So we've defined those five pillars. Each of these five pillars also has sub-parameters. Internally, the way we are looking at it is: how do we drive these five pillars? For instance, we talked about reimagining the business value chain as the fourth pillar. We talked about redefining our services through leveraging AI. Each of these has sub-pillars. Internally, we are going to measure how we are progressing on each of these individual metrics. Obviously, the overall metric would be the AI-driven revenue that comes to the organization.
Thank you. Just a follow-up question on margins. You did say that you want to maintain the 26%-28% band. However, if you look at the last seven or eight years, it's been tough to stay there. Given all the investments you will be making going forward, doesn't it make sense to change the band to around where we are right now, giving you a lot more operational flexibility and letting you be more competitive in the market?
See, overall, what we believe is that with our cost structures, we should be able to operate in the 26%-28% band, even recognizing all the investments. We believe that the investments which we have been making early have been the source of how we have maintained a sustained industry-leading margin band. And we are taking the challenge that, taking into account all the investments, we will shift towards 26%-28%. To the point about the last couple of years, you would also see that there were various points in time where we came closer to 26%, and then we took the investments. So it won't be that we are stuck on not making investments at the cost of profitability. Growth with profitability will remain our mantra. We'll not be shy of investments, but we will be driving towards 26%-28%.
Thank you. Just one last question.
Ankur, in the interest that others also get a chance, can we please limit the questions? We need to give a chance to others as well.
Sure. We'll catch offline also.
Yeah, we can take those questions offline also.
Hi, this is Kawaljeet Saluja from Kotak Institutional Equities. Thanks a lot for the presentation. I'll try and restrict my questions. The first question is on the advisory/consulting focus. That's an area which TCS may not have prioritized, at least explicitly, in the past. Now, as you drive down the advisory path, both organically as well as through acquisitions, what are the challenges in execution that you see? And more importantly, what has changed for you to prioritize the advisory/consulting part so much?
So if you see the overall tech and advisory space, I think most of the incumbent firms are experiencing a lot of change, both in terms of what customers are expecting them to deliver as well as the way they need to deliver it. I feel this is probably the right window for us to look at a more updated model of advisory or consulting, which will essentially be more driven by AI and analytics, coming from our place of strength, which is technology. Given some of the stack conversations we have had, I don't think success can be just about providing advisory.
It is also about then going through all the other layers of the stack to actually help the customer deliver it. We'll have to build that initial muscle of advisory as well, because that's where the conversations get pegged at the right level for us to drive that impact across the layers. We see that as the right opportunity for us to build that capability.
The second question that I had is for Samir. How do you define that $1 billion of investment? Is it annual? Is it to the P&L? Is it OpEx or CapEx? If you can just give some more color on it.
This is $1 billion in annual spend, completely OpEx, completely on the P&L. And the key components are learning, development, and talent-related spend, industry-specific or new-services-specific research and innovation, and specialized infrastructure.
Got that. Nehal, can I sneak in one more?
No, we can take those questions offline.
Nehal is very strict.
Yes, anyone else? Yeah, we have a few people here. Can you please pass on the mic?
Hi, thanks. This is Kumar Rakesh from BNP Paribas. My first question is around the new age services revenue, which you called out at about $11 billion. That means there is still a large legacy revenue base, almost two-thirds of the total. How do you see managing that in the coming years? More importantly, this new age services revenue of $11 billion is itself growing at only about mid-single digits, so the growth in that piece is not phenomenally strong either, and the legacy revenue will most likely keep getting cannibalized as you implement AI across your service lines. So how are you going to manage your growth in the context of all this?
Kumar, as you pointed out, our overall new age services revenue is growing faster. First of all, you'll understand that there is a moderation, a subdued performance, because of the overall market sentiment. That is the overhang that we have. But within that, the new age services are growing faster than the traditional services. With the increased investment, a stronger strategy around our new partnerships, and new investments in initiatives like data centers, we believe that the new age services will grow fast enough to offset any deceleration or drag in the traditional services.
Got it. My second question: going back to what we discussed earlier about how you define the world's largest AI-led services company, is there any specific metric, be it the number of customers, that you'll try to publish?
See, at this time, as we took it as a vision, we said these are the five pillars we are going to work on. Like I was telling Ankur, we then looked at the sub-pillars where we are putting our energy and effort. One obvious metric is the AI-driven revenue. We published it; this is where we are. We'll continue to publish it as we go forward. And that's the external metric. But internally, we are going to be driving all the parameters. And to us, those parameters are as important or more important, because there could be a lag in the external revenue that we report. But it will keep us honest, right? We are going to be focused on the internal parameters and the individual line items as described.
Got it. Perfectly clear.
Yes. Should we take some questions from this side as well? Yes. Yeah, the lady.
Two questions. First, in any technology transformation cycle, we have seen that first-mover advantage matters, and so does scale. As TCS is looking at data centers, and TCS has always enjoyed the scale advantage, would you think scale would be essential along with being a first mover?
See, scale definitely plays a role. Apart from the delivery execution and the client trust which we have built, which are part of the overall TCS advantage, in this one at least, the scale, the breadth, and the depth will definitely play to our advantage.
So you will put your capital to use to achieve that scale advantage as well in the...
When we saw the full stack and what we talked about on the investments, we will be leveraging and making investments where we see focused returns coming in. Yes, absolutely, we'll be using our capital.
Secondly,
But we'll also be partnering, as we saw. If you take the data center case, it will be prudent and it will be structured. So, the balance sheet does play a role.
Got it. Secondly, on the $1 billion spend that you highlighted, can you tell us how it has trended historically? Is the intensity of investment on the rise as we go through these technology changes, or have you been consistent, at whatever level you want to define, in making those investments?
We have been making early investments across the board. As I said, our focus would be to make the right investments, and some part of the investments will also be repurposed into the things we want to focus on. And if required, as I said on the previous question, we'll elevate our investments as well.
Growth with profitability would be the mantra. It need not be that we will shrink our investments in the near term and let go of long-term growth.
Got it. Thank you.
Thank you. Yes, we will now take the question from
Arup from Gartner. My second question first is to Aarthi, and I'll let you think about it: how do you foresee your org structure looking by 2030? Because you will have agents as coworkers with your engineers. And my first question is perhaps to Samir or Mangesh, about your India data center venture, which is going to be completely opposite to the asset-light DNA that TCS has maintained over the last 20-25 years. How is it going to be done? Are you thinking about coming up with a special purpose vehicle to do it, or is it going to be done under TCS's fold? But the key question here, and that's what I'm getting at, is: where is the synergy with TCS's overall services business in the global scheme of things?
So, one, there is a special purpose vehicle. And the data center is overall planned around an anchor customer; we'll be making the investments after locking in an anchor customer. As I talked about, it is a structured investment: we'll be using debt, leveraging the balance sheet, and we have also announced an equity partner. And there is Cyber Vault, the subsidiary or joint venture, which is formed as a special purpose vehicle to address that.
In terms of synergy, we benefit from both front-end synergies, where, as I said, the anchor customer typically would be a hyperscaler or an AI-led company, and it is not just about the anchor contract; it's a 360-degree relationship which we'll get from them. And also the back-end synergies which we can leverage, because a data center is all about cooling, power, and connectivity. With the One Tata advantage, we can look at getting optimized value from them or from any other players which would be available.
Arup, your question on how the org structures would look in a world where humans and agents work alongside each other: I think this is an area which is evolving. The more we move to level four and level five autonomy is where you'll have agents which can completely do a task that a human does. Today, if you really see, at level two and level three the human is primary and AI is in a supportive role, right? An assistant role, if you will. So this is something which is evolving. But where we are seeing this play out in actuality is more in the autonomous GBS, right? In business process, we're seeing that there are certain tasks in the business process value chain which are being completely done by agents.
So in BPS, we have got this model where human plus AI are working together, and there are tasks which are autonomous. So how do you orchestrate work? That's where we are building an orchestration layer which really knows which tasks are being done by the human and which tasks are being done by the AI, so that oversight comes in at that layer. But it's early days.
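The orchestration idea described above, a layer that routes each task in a business process to either a human or an autonomous agent and keeps an oversight record, can be pictured with a minimal sketch. The class, task, and threshold names here are illustrative assumptions, not TCS's actual platform:

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Minimal sketch of a human-plus-AI orchestration layer: tasks above an
# autonomy threshold run fully via the agent; below it, the agent only
# drafts and the human remains the primary actor. Names are illustrative.

@dataclass
class Task:
    name: str
    autonomy_level: int  # 1-5; level 4+ means the agent can act autonomously

@dataclass
class Orchestrator:
    agent_threshold: int = 4
    audit_log: List[str] = field(default_factory=list)  # the oversight record

    def run(self, task: Task,
            agent: Callable[[str], str],
            human: Callable[[str], str]) -> str:
        if task.autonomy_level >= self.agent_threshold:
            result = agent(task.name)            # fully autonomous task
            actor = "agent"
        else:
            draft = agent(task.name)             # agent assists, human decides
            result = human(f"{task.name} [draft: {draft}]")
            actor = "human"
        self.audit_log.append(f"{task.name}: {actor}")
        return result

# Toy business-process value chain with mixed autonomy levels
orch = Orchestrator()
do_agent = lambda t: f"agent:{t}"
do_human = lambda t: f"human:{t}"
for task in [Task("extract-invoice-fields", 5), Task("approve-payment", 2)]:
    orch.run(task, do_agent, do_human)

print(orch.audit_log)  # records which actor handled each task
```

The point of the sketch is the audit log: whatever the routing rule, the layer itself knows who did what, which is where the oversight Aarthi mentions would live.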
I think we have time only for one last question. The volunteers can choose. I don't want to be biased. The volunteers are not coming.
Oh, wonderful. Lucky me. Wonderful. Yes. Okay. So two questions, actually. The first one: last year, I think for the first time, you actually showcased COBOL-to-Java app modernization and things like that. It's been a year now, so there should have been a lot of learnings. Just trying to understand: that's obviously a very large market. How much adoption are you seeing, and how rapid is it? Do you think you'll see a lot more of those next year? And just to get a feel for these things: you highlighted a project with 50 million lines of code being converted. Typically, how large are these projects, and how long do they take? That was the first question.
The second one: earlier today, you showed us how you could rapidly create a use case and how interesting that is for organizations. I understand that very well, because CEOs would love it; it creates momentum in the org in terms of gen AI creation. But what I'd like to understand is that there's a lot of talk around the creation of an AI fabric or a foundation layer. What does that look like? I have an understanding I'd like you to ratify, whether right or wrong, and what it means.
My understanding is that you have all the APIs, all the knowledge, all the context in a layer. Does it mean that if I'm able to create that use case, a rapid prototype with just the front end, you can agentically stitch together something, given maturity in that fabric layer? If that is the case, then from a TCS perspective, where you have always been master systems integrators, what happens to the business in general? And how long does this transition take?
Yeah, I'll take the first question. See, on the modernization trend, there is immense demand. In the past, there was not a proper business case for tech modernization, particularly when you wanted to move from a technology like mainframe to any new modern technology. There was no financial business case.
Even from a risk perspective, not many people knew the business logic that was used in those old systems, and reliably translating it into a modern technology has always been a challenge. What generative AI does is give you the ability to understand, and a human can validate that understanding, so you can encapsulate the knowledge that's residing there. It also provides a framework through which you can modernize to a new technology. The example that you talked about is progressing quite well. Again, these are all very complex programs; it's not that you turn on a switch and 50 million lines of code convert themselves into Java. It requires an amount of human intervention and validation. But of course, that's one large example.
What you see more and more commonly, what Aarthi calls Rapid Builds, we showed here as CodePlus, a platform where we have the ability to move from X to Y: for instance, from Tableau to the Microsoft platform, or it could be TIBCO to Java. There are multiple opportunities that exist. And these are all much simpler, because these codebases were written recently, within the last 10-12 years, and are not as large or complex as the 50 million lines of code we talked about. Such programs are very time-bound, and the outcome can be measured.
Results and value can be achieved within a short term; mostly, we try to do it in a three-month period. So those programs are offered quite often. Aarthi probably has the number of programs we have done; we've done a very significant number, and there's a real opportunity for that.
If I may just add: if you look at the tech debt across the organization, it's in core systems, it's in integration, it's in data, it's in reporting, right? So across layers. The CodePlus platform that Krithi talked about, some of which you'll see when you go to our executive briefing center, actually accelerates X-to-Y migration, as I like to call it, across these layers. Krithi gave some examples. What we are also seeing is that the same modernization programs that we did last year, this year we are able to do with much more productivity, right?
One, the technology capability has advanced; plus, our own understanding of working with AI and what the human in the loop needs to do has also advanced. In terms of projects, the mainframe ones, the 50 million lines of code, those are all big projects. But there are a number of short-cycle projects that we are seeing as well. Today we are seeing a lot of that in BFSI; other industry verticals are now starting to catch up, but I would say I'm seeing a lot more in the BFSI space. Now, coming to your second question, I'd like to answer it more simply from an integrated SI perspective, the role that we play. I think I briefly covered it.
If you really look at it, in any enterprise, while we can do the rapid builds, build the AI apps and agents fairly quickly, where the work goes is in the plumbing, right? Connecting it back to the data, connecting it back to the systems. And more importantly, if you look at our I-to-I layer, as we called it, infrastructure to intelligence, across each layer there is a significant amount of work to be done to integrate. Today, in the way we are doing apps, we are connecting them to the model and to the data. But going forward, to Krithi's point on building learning systems,
the more we start creating these learning systems, creating the decision infrastructure, there is a lot of work across all the layers of the I-to-I framework that we showed you. And when it comes to agents, it's not just individual agent building; it's agent-to-agent interaction, which is the fabric layer that you alluded to. While you can build many things quickly, stitching things together and bringing in the contextual knowledge of the customer is where we come in.
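The human-in-the-loop modernization flow described earlier in this answer, generative AI recovering the business logic, a human validating that understanding, and only then translating to the target technology, can be sketched as a simple gated pipeline. The function names below are hypothetical stand-ins, not an actual TCS or CodePlus API:

```python
# Hypothetical sketch of one AI-assisted X-to-Y modernization step with a
# human in the loop. `llm_explain` and `llm_translate` stand in for model
# calls; `human_validate` stands in for SME review and test gates. None of
# these are real TCS/CodePlus interfaces.

def modernize_unit(legacy_code, llm_explain, llm_translate, human_validate):
    spec = llm_explain(legacy_code)              # recover the business logic
    if not human_validate("spec", spec):         # human confirms understanding
        return None                              # rework before translating
    new_code = llm_translate(legacy_code, spec)  # e.g. COBOL -> Java
    if not human_validate("code", new_code):     # review + tests gate the output
        return None
    return new_code

# Toy run with stub functions standing in for the model and the reviewer
result = modernize_unit(
    "MOVE A TO B.",
    llm_explain=lambda code: "copies A into B",
    llm_translate=lambda code, spec: "b = a;",
    human_validate=lambda kind, artifact: True,
)
print(result)
```

The design point matches the transcript: nothing is accepted on model output alone; both the recovered specification and the generated code pass through a human gate before the unit counts as migrated.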
Thank you, TCS management. I would also like to thank all the participants for their participation and engagement today. We hope these sessions have been informative and useful for all of you. We would now like to close the recording session.