Good day, everyone. My name is Surendra Goyal, and I head Research for Citi India and cover IT services as well. Welcome to the Webinar on Decoding the Impact of Generative AI on IT Services—thoughts from HCLTech. Before I start the session, let me highlight: investments in securities markets are subject to market risks. Read all the related documents carefully before investing.
Since the announcement of ChatGPT and other GenAI initiatives, there has been a lot of discussion around generative AI: first, its potential for the IT services business, and second, any disruption it could cause to existing businesses. In fact, this has been one of the most discussed topics in our conversations with investors on Indian IT over the last few months.
To get a better understanding of the impact, we are hosting Kalyan Kumar, or KK, as he is commonly known, Global Chief Technology Officer and Head of Ecosystems for HCLTech, for a deep dive on the topic. As Global CTO, KK oversees all product and technology strategy, emerging technology incubation, and the overall cloud offering, CloudSMART. He leads the HCLTech ecosystem unit spanning cloud, tech OEMs, telcos, and the startup ecosystem.
KK is also the Chief Product Officer for HCL Software. He is responsible for all portfolio and product groups, driving central engineering, cloud and SaaS platforms, IT security, and compliance for HCL Software. Over the past two decades at HCLTech, KK has played various roles across solutions, business development, alliance management practice, COE, and delivery, starting at HCLTech subsidiary, HCL Comnet in the year 2000.
The format we'll have here today is a presentation and initial comments by KK, followed by a moderated Q&A. I'll be asking the questions. If any of you have questions, please email them to me at surendra.goyal@citi.com. For people attending on Citi Velocity or the webcast, you could type in your questions as well, and I will take the questions from there. Welcome, KK, and thanks for doing this for us. Let me hand over the session to you now for the presentation.
Thank you, Surendra. Good morning, good afternoon, and good evening to everyone in all parts of the world. Incidentally, today I'm in India, but generally my home base is in the U.K. It's great, and thanks, Surendra, for this opportunity. I've done at least a dozen such conversations now with customers, clients, and boards in various forums.
Every time, with such a diverse audience, I tend to learn something; even in a conversation a few weeks back, I learned something new. I'm going to share my perspectives, which are continuously evolving because the tech stack itself is evolving. It's interesting to see the potential and the new opportunities that exist for the technology services industry. As a software product ISV, we also see some very interesting perspectives, so I'll share both sides of this, Surendra.
Maybe I'll take about the next 35 minutes to give you some perspectives on this. Here's what we're going to do: we'll try to demystify generative AI and talk about the different value streams where we can apply it across the service capabilities that exist in the IT services and engineering services industry.
Then we'll look at what we are doing at HCLTech services, and then what we are doing at HCL Software around this, and we can pick up from there. The first thing I notice in a lot of my conversations is that the media and the market have somehow hyped generative AI as a superset of AI. People think it's something newly brought in that is going to encompass all of AI.
If you really go back, and I always suggest this, read chapter one of Professor Stuart Russell's book, because he is, I would say, the living legend on defining the base fundamentals of what AI is. AI is a large discipline; it's not new. Within it there are sub-disciplines like computer vision and robotics, and there is machine learning, with its supervised and unsupervised methods.
Within machine learning, you have a further subset, which is deep learning. Generative AI actually sits at the confluence of deep learning and natural language processing, the ability to combine both of them; large language models are the foundational underpinning of GenAI.
You learn using deep learning methods, but then you process language, understand language, and generate language. There are a lot of other use cases, but it is primarily just a subset, and that is very, very important. Basically, you can take images, pictures, code, scripts, text, or documents. I did not use the word video because, if you go back and see, video is nothing but a sequence of images processed every second.
When you see video, there are 30 or 60 frames per second; images are the fundamental building block. You use this source of data to train the model, the language model. That is where deep learning comes in: the model builds a deep understanding, learning just like a child learns. You then use prompting and querying to improve on what the language model has learned.
You ask questions, you get responses, you tune, and you continue to do that. It generates output, and the output could be many different things: code, test cases, documents. What ChatGPT really brought in was, I would say, the first major public consumer implementation. That's what we are seeing today, whether it's ChatGPT, Bard, or Bing Chat; Google now has similar chat capabilities with Bard in Search, which they are releasing in the same way.
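The prompt-and-tune loop just described can be sketched in a few lines of Python. This is a minimal illustration: the `build_prompt` helper, the instruction text, and the worked examples are all invented for this sketch, and a real system would send the assembled prompt to an LLM API rather than just print it.

```python
def build_prompt(instruction, examples, query):
    """Assemble a few-shot prompt: an instruction, worked examples, then the new query."""
    parts = [instruction]
    for q, a in examples:  # each example steers what the model will generate
        parts.append(f"Q: {q}\nA: {a}")
    parts.append(f"Q: {query}\nA:")  # the model continues from the trailing "A:"
    return "\n\n".join(parts)

prompt = build_prompt(
    "Classify the sentiment of each support ticket as positive or negative.",
    [("The agent resolved my issue quickly.", "positive"),
     ("I have been waiting two weeks for a refund.", "negative")],
    "The new portal is much easier to use.",
)
print(prompt)
```

Tuning, in this simple picture, is just iterating on the instruction and the examples until the responses improve.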
It's basically the consumer-facing world; it's all out there for consumers, and all of us can use it. Those language models are trained on public data. If you use ChatGPT and ask a question, many times it says, "I have been trained till 2021"; it can't answer anything about events after that.
It gives you that very standard disclaimer, like the disclaimer Surendra put at the start of this session. What is happening is that as this evolves, Bard, for example, is actually generating on top of Google Search: it can search contextual, real-time data and then generate on top of that. It is able to find things, and it is continuously evolving.
Look at what Microsoft has done with Bing Chat on the consumer side, embedding the ChatGPT capability into it. If you use ChatGPT Plus, the paid subscription version, you can use different language models and turn on plugins, and you can see different plugins coming in. That whole thing is evolving in the consumer space.
Collectively, from our perspective, what you need to start to realize is that the consumer space is going to adopt it at scale. Consumer product and tech companies in that space will embed this technology; that's where a lot of the work we see as engineering services comes from. We are working with a lot of those companies on how they are adopting and implementing the technology.
You will see these interfaces show up in your consumer electronic devices and various other places very soon. There is also an enterprise view, and we'll touch upon that, because most IT services firms have a big play with enterprise customers; that's where their adoption is going to happen. So look at adoption through two lenses, the consumer lens and the enterprise lens.
Now, let's pick three or four areas. What I thought of doing was to pick the four common ones, using the nomenclature from Surendra's report: BPO, IMS, and AppDev, and we also added a section specifically on engineering and systems engineering. In a typical BPO, you have front-office, middle-office, and back-office work.
If you really start to look at the value stream of a BPO, there are a lot of activities we do; a human does a lot of activity. Many times we misinterpret the context: it's a human-machine partnership. Microsoft calls it Copilot, Google calls it Duet; we call it human-in-the-loop, or human-machine partnership.
It is human and the machine, not human versus the machine. Within the BPO segment, you do process analysis, execution, and process improvement, and you deliver value to the customer. There are areas where you can apply GenAI, like search and retrieval: anytime someone asks a query, search and retrieval, or extracting intent from a document.
You can use OCR, but you can also read a document and extract intent, and a lot of other things, because anything you do with language, you are able to apply it to. You could generate standard operating procedures, say for how to handle a complaint. You can generate a lot of text, summarize knowledge, and translate languages.
For first-level conversations, you can use this to augment some off-peak loads, translate languages, and help in process engineering. Think of work where you have to read a lot of documentation, a lot of compliance-type work: reading a lot of information, extracting it, summarizing it. Anything a language model can do has applicability somewhere in the BPO value stream.
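A toy version of the intent-extraction idea above: a keyword lookup standing in for what a language model would do with free-form text. The intent names and keywords here are invented for illustration; a real BPO deployment would route the message text through an LLM instead of this lookup table.

```python
# Hypothetical intent categories for a BPO workflow; a keyword match
# stands in for an LLM's understanding of free-form customer text.
INTENT_KEYWORDS = {
    "refund": ["refund", "money back", "chargeback"],
    "complaint": ["complaint", "unhappy", "escalate"],
    "status_query": ["status", "where is", "track"],
}

def extract_intent(message):
    text = message.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in text for k in keywords):
            return intent
    return "unknown"

print(extract_intent("I want my money back for order 1123"))  # refund
```

Once an intent is extracted, the same pipeline can retrieve the matching standard operating procedure or generate a first-draft response.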
I'd like you to look at it that way. The input could be cases, issues, requests, anything coming in, documents, all the things we talked about before. That's one area, in a very simplistic way, where there is applicability. It goes back to the same thing, Surendra: remember RPA? About 10 years back, there was a huge hoopla around RPA.
RPA became cognitive process automation: oh, I'm going to insert some machine learning, some cognitive capability. If you look at the adoption of RPA, it's reached a good level, but it's taken a good amount of time, because in an enterprise rollout you are connecting various systems, tying them together, integrating and cleaning up data. I'm going to talk about some of those prerequisites later. BPO is one area where you see a typical impact. The second area is the big debate around code generation: oh, it can generate code.
If you look at most of the public implementations, the Codex model, GitHub Copilot, Google's code generation, or Gen App Builder, these products have a lot of affinity to modern languages, because they have been trained heavily on modern programming languages. They can generate a lot of scaffolding code and a lot of code which is basically non-functional. If you train them with a lot of functional know-how data, they can generate a good amount of code, but you still need to validate that code.
Somehow people think you can write code this way and that code can be taken and deployed straight into production. It can't; it needs to go through the application development process. You have requirements, features, and bugs; you do analysis, process mining, understanding, experimentation, piloting, modeling, and development.
There are a lot of activities you do. I have a slide coming up later which gives you some ranges of the productivity coefficients. There are a lot of areas around generating test cases and scaffolding code. It can also help with some level of code generation: a developer might be trained on one programming language but want to generate code in a variant of it. Maybe I'm an old JS developer, but I want to generate some JavaScript. You can also do some level of code translation.
It's very proficient at some languages like Python, which I would now call more of a citizen development language, because most of the Excel users, even in your industry, instead of writing macros are now loving to use Python to manipulate data. It can generate a good level of code, and it can ingest a lot of documentation, like in application support.
When you're troubleshooting for a customer, rather than searching a document yourself, you can quickly look something up in the supporting documentation. In dev, there are some significant touchpoints, but in pockets. Again, across the whole ADM lifecycle, there are a lot of things you do.
Business requirements, and now with Agile, you do so much iteration: going back to the customer, working on prototypes. You can do some UI generation, because we can do image generation; you can generate UIs and some UX flows. It can assist with a lot of work. The way one has to start thinking is: how do I pair up with it and use it in a far better way? Rather than thinking of it as an "or," I always say it's the "and": how do you partner with it? There are a lot of areas within application development and support where there is significant applicability.
Again, you have to contextualize: for app modernization, it has to be trained on a lot of traditional languages. COBOL is an interesting example. We're still talking about GenAI, but there are gazillions of mainframes out there running COBOL code, and COBOL programmers are increasingly unavailable. There is an opportunity there.
Could you create a good COBOL Copilot which can help you generate code and modernize that code? Can you use this to convert from COBOL to Java? That's been a mythical challenge, because COBOL is such an interesting language: it's got business context and non-functional logic woven into one programming construct. How do you extract that? There's still an opportunity.
Just like in automation, you keep improving; the opportunity exists for a long period of time. Now let's look at infrastructure and operations. In ops, there is a lot of infrastructure work, like projects, where you can't apply this. The same is true of a lot of project build work, like implementing packaged applications in application development.
Yes, you can do some documentation and initial work, but when you implement packaged applications, there's a lot of systems integration. Now, if the packaged application vendor is embedding some generative AI to help with implementation, that's where they will be able to speed it up. You can't use GenAI from outside and say, "I'll implement a packaged app," an ERP, a CRM, or a commerce system.
You need context, or plugins, to be able to generate that; those are the areas where there is applicability. In the infra space, this has always been the favorite child of automation talk. Ten years back, like the RPA example in BPO, it was the cloud: oh, will cloud change the way IMS works? Actually, in enterprise adoption, migration and modernization skills on cloud have become a very important vehicle to help customers move.
If you look at the infra space, there are areas, especially in operations, where on top of what you do with automation, AI, and ML, you can apply a lot of generative AI capability: analysis, intent extraction, doing RCA, script generation (a lot of automation script generation), and SOP generation.
You can assist in threat modeling in the case of cyber. You can improve a lot of employee experience by giving people more conversational paths: they can talk to various systems, with knowledge summarization and language translation. In the infra space it's the same logic as the cloud conversation: the cloud migration opportunity, and the way we got automation embedded into it.
There are a lot of areas where you can apply this. Again, you have to contextualize: when you start to use this in a customer environment, you need to make it enterprise-oriented, and I'll talk about how we do that. There is also some play in the systems and product engineering areas.
It can assist in areas like factory automation and VLSI design; it can generate circuit diagrams, help with UX design and content generation, and support code testing and optimization. It can support some of the level-one product support activity, especially as softwarization happens in most of these embedded systems and connected assets.
You have to realize that, compared to the total amount of code ever written till now, you're going to have 5x to 10x more code being written as more and more of these devices get softwarized. There is huge potential. Whether you apply it to software product engineering or elsewhere, every company wants to become a software company; they are reorienting themselves to build more and more software digital products.
In silicon and mechatronics, there's a whole evolution happening around the software control plane versus silicon: how we control silicon using software, how you embed more software in silicon, and in some cases put software, instead of silicon, into the cloud. You see a lot of similar plays happening, and the same thing in OT.
There is applicability there as well. The way I really look at this is in two perspectives. Across all the different areas we talked about, there is a lot of AI, ML, and automation applicability that exists today in the activities you perform in these services. Generative AI has the potential to extend that further; it gives you more efficiency.
If you see this quadrant: when you generally go and read reports, there is no sanctity to the numbers when someone takes one piece of work out of the larger context, say test case generation, and then says, "Oh, I'm getting so much productivity." If you look at the whole cycle, percentages can be misleading. You have to look at applicability in the context of how much of the activity in the whole value stream these technologies can actually be applied to.
There are two key things that need to be looked at. One is dependency: you still need experience to use LLM and GenAI skills, and there is a lot of activity you perform around them. Then there is the maturity of the LLM models, which are still evolving.
A lot of contextual models are being built by these providers. There is the quality of model training data, and then output variability and the changes that keep happening. Then look at the other aspects: data privacy, copyright, uncertainty, explainability, bias, production outages, deepfakes. These are areas one needs to start thinking about, along with accidental use of sensitive data. And again, the way this is architected today, if you roll back and look, the cloud hyperscalers are the core feeders of GenAI technology.
Look at who's building GenAI technology. The majority of the work has happened with OpenAI, and obviously it's all built on Azure; they're using Azure as the underlying play. Google, again, with Google Cloud and the whole Google AI stack, and AWS now with CodeWhisperer and AWS Bedrock, coupling with SageMaker and other pieces.
That is the spectrum the three big clouds are following. If you really go back to it, this is another layer of consumption on cloud: you have cloud infrastructure, cloud applications, and task capability, and you have to bring a lot of data. Hence, there are two things we believe are going to happen more and more. Keeping the consumer side apart, adoption in the enterprise will mirror the cloud journey.
Consumer adoption will happen at scale. As at Google I/O, they are releasing a lot of Bard capability inside Android; the next releases of Pixel and other devices will have it. That consumer adoption will run at scale. For enterprise adoption, in the case of Microsoft you have to use Azure OpenAI, in the case of Google you have to use Vertex AI, and in the case of Amazon you have to use Bedrock.
Actually, look at the availability of these systems: even Copilot, for example, is getting rolled out in controlled-introduction releases in different countries. Even in India, for Indian enterprises, it's still on the waiting list. As Microsoft opens up, it's going to take time to get those technologies. Second, you have to bring them into the enterprise and train them with your data. There is a whole discipline of data engineering, data veracity, data quality, and data management. If you want to train the systems with your data, you also need to do a lot of annotation and tagging of your data: which data you want to train on.
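The annotation-and-tagging step before training can be pictured as a simple pre-ingestion filter. This is only a sketch: the sensitive-data patterns and tag names below are invented examples, and a real pipeline would use proper PII and data-classification tooling rather than two regexes.

```python
import re

# Illustrative pre-ingestion filter: tag documents and quarantine ones
# carrying sensitive-looking patterns before they reach model training.
SENSITIVE = [
    re.compile(r"\b\d{16}\b"),              # card-number-like 16-digit runs
    re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b"),  # PAN-like identifier (hypothetical)
]

def tag_and_filter(docs):
    approved = []
    for doc in docs:
        if any(p.search(doc["text"]) for p in SENSITIVE):
            doc["tag"] = "quarantine"  # never ingested for training
        else:
            doc["tag"] = "train"
            approved.append(doc)
    return approved

docs = [{"text": "Customer card 4111111111111111 flagged"},
        {"text": "SOP for resetting a VPN token"}]
print([d["text"] for d in tag_and_filter(docs)])  # ['SOP for resetting a VPN token']
```

The tagging also gives you the audit trail: you can show later which documents the model was, and was not, trained on.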
You don't want to just ingest things you don't want the GenAI systems to learn, or to learn and then answer with when anyone in the enterprise tenant asks a question. The second aspect is that the FinOps discipline is going to become extremely important. In cloud, infrastructure became a utility and apps became more and more consumption-oriented; now there is this mythical word called the token, and that's what the new pricing model is.
You start training language models, and we still have not found the right metric for how much it's going to cost. It's still an evolving space, and there's going to be a variable cost model for customers. Between the FinOps discipline and building on the cloud journey, there is a lot one needs to factor in, because this is moving to a completely variable, utility, consumption-based model.
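A back-of-the-envelope sketch of the token-based FinOps math described above. The four-characters-per-token heuristic and the per-1,000-token rates are illustrative assumptions, not any provider's actual price list; the point is that cost scales with usage, which is exactly why the variable-cost model needs FinOps discipline.

```python
# Hypothetical per-1K-token rates in USD; real rates vary by provider and model.
PRICE_PER_1K_TOKENS = {"input": 0.0015, "output": 0.002}

def estimate_tokens(text):
    # Rough heuristic: about 4 characters per token for English-like text.
    return max(1, len(text) // 4)

def estimate_cost(prompt, expected_output_tokens):
    input_tokens = estimate_tokens(prompt)
    return (input_tokens / 1000) * PRICE_PER_1K_TOKENS["input"] + \
           (expected_output_tokens / 1000) * PRICE_PER_1K_TOKENS["output"]

# Example: 100,000 ticket summaries a month, ~2,000-character prompts, ~300-token outputs.
per_call = estimate_cost("Summarize this support ticket: " + "x" * 2000, 300)
print(f"~${per_call * 100_000:,.2f}/month")
```

Multiply a small per-call number by enterprise volumes and the monthly bill becomes very real, which is the speaker's point about the token as the new pricing unit.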
What are we doing in all this? From our perspective at HCLTech, this is a very interesting opportunity. As a philosophy and as a capability, we have been creating, infusing, embedding, and integrating AI; GenAI is a subset of the larger AI discipline. We have been doing this for the last two decades, from silicon to infrastructure to apps, data, and processes. We believe that as we help customers adopt this, it will help them supercharge their business to become a generative enterprise, which is simply how you apply generative AI across your enterprise capability.
We have a collection of services built on a decade-plus of work around AI and enabling AI: machine learning, computer vision, robotics, natural language processing, annotation, embedded AI services, trustworthy AI, MLOps, a lot of these pieces, in depth. On top of that, there is a huge collection of partnerships, from hardware and chip accelerator partnerships all the way up the stack.
We have been working with the AI ecosystem. In generative AI, the way we really see it, there are four big opportunities. Prompt engineering: I think this is going to be the single biggest demand area, because iterating with these systems, tuning them, blending them, and synthesizing them is a lot of work. There is a huge opportunity there, and it needs incremental skills.
It needs a different kind of team: just like the squads you have in full-stack development and the teams you have in operations, you'll create a squad in prompt engineering, because you need a variety of skills to do prompting, possibly a mix of capabilities including domain knowledge. Data engineering, I think, is the biggest opportunity: creation, capture, cleaning the data, pipelining, data management, data operations; the whole aspect around data is going to be huge.
Analytics and insights will become mostly generative, because these systems will integrate data and generate them. Getting the plumbing right, what goes in and what goes out, matters. Along with data engineering, you have responsible AI: what data, inclusion, fairness, traceability, accountability, trustability of results. Then there is change management and a lot of other things.
If you are using these systems to assist in making decisions, think of the data exactly the way we talk about good behavior and bad behavior: which household the child grew up in, which school the child went to, how well it was groomed and educated, whether it had good or bad influences, the things and people around it. In just the same way, generative AI systems will be shaped by the quality of the data we ingest into them.
The last but very important thing: my favorite quote has always been, "God made the world in seven days because he had no installed base." In an enterprise, if you are greenfield, it's completely different. We all live in a world where we have to take all this capability, integrate and orchestrate it to create intelligent apps, and infuse it into the existing landscape.
That is where the opportunity exists from a services perspective: how do you pivot and build those capabilities? What we are doing builds on our key ecosystem partnerships, especially with the large cloud providers and with technology creators like IBM, NVIDIA, and Intel. We have been working very closely with the three large hyperscalers: AWS, Google with Google AI, and Microsoft with Azure OpenAI and the whole collaboration.
We have already been engaged with them for some time, delivering a lot of engineering services, ISV, and OEM capabilities for them. We have been involved in the early stages of tech development through partnerships over a period of time. In reverse, as HCLTech, we are taking a lot of the technology as a consumer ourselves.
We are consuming a lot of their capabilities, whether it's Copilot or Duet AI, and rolling them out within our enterprise, both within our HCLTech services landscape and within HCL Software, where we're deploying a lot of Google technology. We are co-creating a lot of services and capabilities using our GenAI labs, where we can consult, create, infuse, and integrate for our end customers. It's again the same thing we did in the cloud.
This is another workload, another pattern on top of cloud, when you apply generative AI to the enterprise; it's another specialized workload. We migrated infrastructure and VMs to cloud; we then containerized and did app modernization and data modernization; we did SAP to cloud; we then started deploying. There's going to be a big data-AI pipeline, and then you want to deploy all these services.
Whether it's Vertex AI, you consume it in Google Cloud; whether it's Azure OpenAI, you need to build on top of Azure. We clearly see this as an expansion of the GenAI ecosystem on top of the hyperscalers, and it'll have a hybrid deployment model. The most interesting thing is that the cloud model is all about bringing data closer to the compute.
GenAI will force a little bit of differential thinking, where you have to take compute closer to the data, because it's going to be practically impossible to bring all the data into one place. There are going to be multiple providers, and it is going to be very interesting how this model evolves. Are you going to put everything around Azure OpenAI? Fine.
But if you then want to start doing something with Google AI, you have to redeploy, recreate, and move all the data to the next cloud provider. There is going to be data duplication and veracity to manage, and there is a lot of work: planning, data pipelining, hybrid data. There are a lot of opportunities around that. That is the way I really look at this.
We have a GenAI lab which allows us to quickly ideate, prototype, and take this technology from offering incubation, work with our customers at the edge, quickly prototype, and then mainstream it into scale. It is what we did with Cloud Native Labs five or six years back, when cloud native was being adopted at scale.
The GenAI lab model is to bring the power of what we do in digital business, digital engineering, digital workplace, and digital process operations, and start to infuse this, asking how we can help solve customers' business problems. We've identified about 150-plus use cases where we can apply it; that's what the GenAI lab is helping prototype.
As customers get access, it's very interesting. When Microsoft released the early preview, they had about 60 global customers they started to roll it out to, and a significant chunk of those customers are existing HCLTech clients. We've been engaged with many of those clients in early-stage prototyping and POCs, and we've got a couple of examples where we have productionized those applications.
Again, as technology access opens up, you should not look at consumer ChatGPT and Bard; that is a completely different use case, as I talked about. Enterprises need to get those tenants so they can access those models within their tenants. There is a lot of work, and there is a lot of opportunity we see in this.
Two examples. One is a medical devices company, where we used the ChatGPT API, including a mix of different language models, to build a medical conversational agent. This is not a programmable bot; it is a generative bot, built on the GPT-3.5 base. It was initially built on GPT-3, and the previous implementation got upgraded to the newer one.
It can generate based on context or conditional prompting, and it helps the agent: it is an assisted-agent plugin, meaning the healthcare worker uses it to quickly prepare a summary for the physician for efficient treatment. It is a human-machine partnership. The second one is a very interesting use case: we respond to a lot of RFPs and customer requests across various industries.
We have created a unique use case, a sales bot, which does retrieval-augmented generation. You can quickly ask: tell me, by industry, by vertical, by service line, by product, what can I quickly know about customer case studies and use cases? Rather than searching documents, it can generate a pre-templatized proposal for someone to go and start working on.
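The retrieval-augmented flow of a sales bot like the one described can be sketched minimally: retrieve the closest case study, then hand it to a generation step. Everything here is an assumption for illustration: the case-study corpus is invented, bag-of-words cosine similarity stands in for embedding search, and the final string stands in for the LLM call.

```python
import math
from collections import Counter

# Invented mini-corpus standing in for a real case-study repository.
CASE_STUDIES = {
    "banking-chatbot": "retail banking conversational agent for loan queries",
    "medtech-summary": "healthcare summarization assistant for physicians",
    "retail-search": "product search relevance for a retail customer",
}

def similarity(a, b):
    """Cosine similarity over bag-of-words term counts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = math.sqrt(sum(v * v for v in va.values())) * \
           math.sqrt(sum(v * v for v in vb.values()))
    return dot / norm if norm else 0.0

def retrieve(query, k=1):
    ranked = sorted(CASE_STUDIES,
                    key=lambda d: similarity(query, CASE_STUDIES[d]),
                    reverse=True)
    return ranked[:k]

def answer(query):
    context = " ".join(CASE_STUDIES[d] for d in retrieve(query))
    # In a real RAG system, context + query would be sent to an LLM here.
    return f"[drafted from context: {context}]"

print(retrieve("show me healthcare case studies for physicians"))  # ['medtech-summary']
```

Grounding generation in retrieved documents is what keeps the drafted proposal tied to real case studies rather than whatever the model happens to remember.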
We have built these two as very interesting case studies and talked about productionizing them. There are about 100 to 150-plus active MVPs in progress, with very interesting use cases ranging from image and code generation to customer-facing work. A lot of the use cases we are seeing are in customer care, customer service, and customer success: can you create more conversational engagements around those?
That is where we really see this. From a takeaway from our services standpoint, I think the six key things which we believe, which gives us a very unique position in this. One is that we have been involved in creating, co-creating AI tech stack for the last two decades. It is nothing new from an AI capability standpoint. Generative AI has some huge potential, correct? We are an early adopter of this.
We are rolling out some of these capabilities ourselves as a customer: the Microsoft stack in our services segment and the Google stack in our software products business. We have deployed AIOps at scale in our business; that is how we carved some of this IP into our software product business to monetize it. We are a launch partner for all these hyperscaler programs, so we are pretty much across the GenAI stack.
We have spun up labs which allow us to iterate quickly, similar to the journey on which we took our customers to cloud. I am going to quickly pivot and give you a perspective on what we are doing in our software products business, which is a different view. One is services, where we create opportunity and apply the technology; the other is as an ISV, where our software products themselves are the business.
We offer a lot of core software products, 90-plus of them, but our focus is shifting towards what we call the four-cloud strategy, correct? We have a business cloud (a platform of trust), a data cloud, an app dev cloud, and an automation cloud. What we are really doing here is working with the three large hyperscalers and some specialized niche partners to embed generative AI capability into these clouds.
What we are doing there is creating some of those models and embedding or plugging them into our products to fuel our customers' digital-plus economy. Three things. First, we are adopting Copilot and Duet AI as pair programmers to increase our product velocity.
We are seeing how we can cut backlogs, deliver more pipeline, and offload some of the mundane tasks of developing applications to a Copilot or Duet AI system. Then we have something called AI Workbench, which is an AutoML platform we have been using for the last five years to build all our machine learning models.
Now we are plugging that in alongside to quickly do model training, MLOps, and the other pieces. Second, about three weeks back we launched with one of the top five Indian banks, a large bank and a big Unica customer. We are rolling this capability out across a very sizable number of their cards and loans, auto and personal loans.
We are using generative AI to do contextualized email generation using our Unica Deliver product, and then using Unica Interact to respond in context. It is not like a templatized email which you create in the campaign and send out. If the email has to be sent to me as KK, or to Surendra, it will be personalized: it will look up the customer profile database and use the tonality and the language to contextualize it to the customer.
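The profile-driven personalization described above can be sketched very simply. This is a hedged illustration: the profile fields, segments, and tone rule are hypothetical, and a real deployment like the Unica Deliver case would use a language model to adapt tonality rather than a hard-coded rule.

```python
# Simplified sketch of profile-driven email generation. Profile fields,
# segment names, and the salutation rule are invented for illustration;
# a real system would let an LLM adapt tonality from the profile.

def personalize_email(profile: dict) -> str:
    """Pick a salutation and body from the customer profile."""
    greeting = "Dear" if profile.get("segment") == "premium" else "Hi"
    return (f"{greeting} {profile['name']},\n"
            f"Your pre-approved {profile['product']} offer is ready.")

email = personalize_email(
    {"name": "KK", "segment": "premium", "product": "auto loan"})
```

The point of the sketch is the lookup-then-generate flow: the profile drives both the content and the tone, so the same campaign yields a different email per recipient.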
This really scales up their engagement model, correct? We just released this as part of our marketing cloud, and we have a lot of those capabilities coming in. We have also launched a new engine called PromptO, which plugs into our multiple clouds.
The business cloud, the app dev cloud: basically it is an abstraction layer which allows you to interact with multiple language models. You can plug in Vertex AI, Azure OpenAI, public ChatGPT, public Bard, and any other privately built models. It can help you do prompt engineering, model training, document summarization, and code generation.
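The abstraction-layer idea, one interface in front of many pluggable model backends, can be sketched as a small router. This is not PromptO's actual design; the backend names are illustrative and the lambdas stand in for real Vertex AI or Azure OpenAI client calls.

```python
# Hedged sketch of an LLM abstraction layer: one interface, pluggable
# backends. Backend names are illustrative; the lambdas stand in for real
# hyperscaler SDK calls.

from typing import Callable, Dict

class ModelRouter:
    """Route a prompt to a named, registered completion backend."""

    def __init__(self) -> None:
        self._backends: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str, complete: Callable[[str], str]) -> None:
        self._backends[name] = complete

    def complete(self, backend: str, prompt: str) -> str:
        if backend not in self._backends:
            raise KeyError(f"unknown backend: {backend}")
        return self._backends[backend](prompt)

router = ModelRouter()
router.register("azure-openai", lambda p: f"[azure] {p}")
router.register("vertex-ai", lambda p: f"[vertex] {p}")
```

The design benefit is that application code calls one `complete` interface, so switching the underlying model (for cost, capability, or data-residency reasons) is a configuration change rather than a rewrite.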
It is about infusing generative AI into our products; that is what we have been doing. Interestingly, we own an underlying technology for generative AI, a vector database. We have a vector database engine in our software product portfolio called Actian Vector, and we are now expanding it to support unstructured data and other capabilities. The end state is that we are going to infuse a lot of this capability into our products.
One of the areas where we are building GenAI is our automation cloud, where we can move customers from human-led, automation-assisted to automation-led, human-assisted operations. We are creating new subscription licensing models for a lot of our automation products around observability, automation, autonomics, and those pieces. You will continue to see this coming in our releases.
That is it in a nutshell. In services: continue to adopt the technology, use it, and help customers deploy and leverage it. In software: embed and integrate it into our core product line so that customers can leverage its benefits. Surendra, I tried to make up for the five minutes I lost in between; I hope I addressed whatever I could, and I am open to any questions. I know you have a lot of questions lined up.
No, no, this was a great presentation, KK. I have some questions of my own and some from clients on email and the webcast; I will try to combine some of them in the interest of time. Let me start with the first one. You said you have been talking to dozens of customers and have made presentations to many of them. How are customers thinking about this in terms of readiness? What do they want to do, what are the considerations, and what kinds of conversations are you having with them at this stage?
Interesting. In most of the customer conversations we have, the customers have some sizable cloud provider commit contracts, either a Microsoft EA, a Google consumption commitment, or AWS. And there is generative AI capability available on all three, so they are quickly looking at how they can apply it.
There are two things. First is excitement; there is generally a lot of excitement watching these releases. From excitement, they are moving to a lot of ideation and prototypes, a lot of MVPs. The GenAI labs are becoming very interesting; at this moment our labs are overwhelmed with MVP requests. "I want to pilot quickly, get me something to do." Everyone wants to show something.
When the experimentation starts, the use cases become more visible. Clearly three or four areas, correct? One, everyone wants to try this as a conversational interface: customer service, customer care, customer experience, outbound marketing, anywhere there are a lot of one-to-many conversations around the context of my product, my offering, whatever I offer. If I am in the cards business, you could train it on, say, the Citi cards context, or any cards business. They are really looking at that as a very interesting use case.
The second one is that everyone wants to try code generation. I think a lot of people will try to learn programming now, because you can use assistive programming. But see, you still need to understand the code you generate; if you do not understand programming, what do you do with the output? You still have to interpret it, correct? We have not yet reached the utopian dream where you generate code and it goes into production and becomes an app. It is still tied to an environment, deployment, data; you still have to integrate data and a lot of other things.
There are some interesting areas around NLP to SQL. We are seriously seeing that not everyone needs to know SQL. You still need a SQL query optimizer and so on in the backend, but a lot more people can now do this. I think citizen developer and citizen low-code is going to expand hugely as more and more capability comes. That is the second thing.
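The NLP-to-SQL idea can be illustrated with a deliberately tiny translator. This is a toy: it matches one question shape by pattern, whereas a production system would use a language model given the table schema, with a validator and query optimizer behind it, as noted above. Table and column names are invented.

```python
import re

# Toy natural-language-to-SQL translation by pattern matching. A real
# system would use an LLM with the schema plus a query validator and
# optimizer; the table and column names here are invented.

def nl_to_sql(question: str) -> str:
    """Translate 'How many X in Y' questions into a COUNT query."""
    m = re.match(r"how many (\w+) in (\w+)", question.lower())
    if m:
        entity, table = m.groups()
        return f"SELECT COUNT(*) FROM {table} WHERE type = '{entity}';"
    raise ValueError("unsupported question shape")
```

Even this sketch shows why the backend still matters: the generated SQL has to be validated against the real schema and optimized before anyone runs it against production data.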
The third thing people are really looking at is this whole area where you have to look through a lot of documents: paper documents, knowledge. Really, a lot of our information is on paper, correct? Documents, PDFs in various formats, and the ability to ingest them. As experienced people, we know: hey, remember we have this stored in that document, or we wrote something there, or your research reports, correct?
Like you, Surendra: you write so much, correct? Suddenly you want to remember what you wrote four years back. If you ingest a lot of this in a curated way, with the right annotation, into a generative AI engine, you can ask questions and it can answer. It is not just search and retrieval; it can also give you synthesis if you want to summarize a document.
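The summarization half of this can be sketched with a classic extractive approach: score each sentence by the frequency of its words across the whole text and keep the top-ranked ones. This is a stand-in, not how an LLM summarizes; it is here only to make the ingest-then-synthesize idea concrete.

```python
import re
from collections import Counter

# Toy extractive summarizer: rank sentences by word-frequency score and
# keep the top n. A simple stand-in for LLM-based summarization over
# ingested documents.

def summarize(text: str, n: int = 1) -> str:
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"\w+", text.lower()))

    def score(sentence: str) -> int:
        return sum(freq[w] for w in re.findall(r"\w+", sentence.lower()))

    return " ".join(sorted(sentences, key=score, reverse=True)[:n])
```

A generative engine goes further than this extractive baseline: it can rephrase, merge, and answer questions across documents, which is why curation and annotation of the ingested corpus matter so much.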
A lot of those use cases are coming in. There are very specialized use cases around images: you have seen a lot of use by UI teams, who are asking whether they can use it to generate UI. There are some very interesting industry-sector use cases coming in. There is a lot of piloting and experimentation but, very importantly, customers are now getting access to private tenants.
There are still a lot of conversations I am having with CSOs and chief risk officers. They are still trying to demystify things: OK, if this language model trains on my data, is the model frozen? Will it learn something and go back and update? There is a lot of that clarity-seeking, and those conversations are happening.
Second, there is suddenly a big uptake in data engineering, because customers are realizing that to do all these things you need data cleansing. Data foundations have become very important, because that is what is going to train the model, right? These are the typical conversations we are having with customers. Europe is a bit different because a lot of privacy conversations have been triggered in different countries. That, in a nutshell, is how we are seeing the whole demand pattern.
Right. From a key-considerations perspective, you mentioned the CSO and the risk teams coming into play. Beyond that, are there other considerations in terms of investments? Or is it still too early to get there? I assume it is still in that initial excitement phase.
A lot of prototypes and MVPs are happening, and that is also showing people the areas they need to work on. Take prompt engineering: if you go back three months, no one talked about it. Now there is this new discipline, both prompt design and prompt engineering, correct? The reason is they are realizing that if ChatGPT was trained by OpenAI with zillions of prompts and tuning to get it to answer at a certain level, think of the effort you need to put in to make it answer in the context of your enterprise data, correct? That is one.
The second thing is this whole integration and orchestration question: how do you connect? See, ChatGPT, or Bard, is a UI. You have to take the UI out of the picture now and ask: when you access these models through APIs, how do you plug them in? Which language model do you choose, correct? OpenAI gives you four, correct? Google gives you four. Each language model has a different cost per token, correct? Which one will you use to train the model, and which one will you use to prompt, correct?
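The cost-by-token tradeoff KK raises can be made concrete with a small calculation. The model names and prices below are made-up placeholders, not actual vendor pricing; the point is only the shape of the decision: per-call cost scales with tokens times the chosen model's rate.

```python
# Illustrative per-call cost comparison across models priced per 1,000
# tokens. Model names and prices are invented placeholders, not actual
# vendor pricing.

PRICE_PER_1K_TOKENS = {
    "small-model": 0.0005,  # cheap, suited to high-volume prompting
    "large-model": 0.03,    # expensive, suited to harder generation tasks
}

def call_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Total cost of one call at the model's per-1K-token rate."""
    rate = PRICE_PER_1K_TOKENS[model]
    return (prompt_tokens + completion_tokens) / 1000 * rate
```

At high prompt volumes the gap compounds quickly, which is why teams often route routine prompts to a cheaper model and reserve the expensive one for the hard cases.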
Customers doing MVP prototyping are seeing this now, because most of the cloud providers have given them access on tenant pricing, with some as early adopters, but have still not gone to production pricing models, correct? As they figure the usage out, they will have to start thinking about a lot of those pieces. In my view it will accelerate a lot of cloud adoption, because customers have to move their data to the cloud.
That whole journey to cloud adoption, so that you can start consuming generative AI services, is happening on both Google and Microsoft. If you really look at the two big pushes, there is a lot of adoption there. Obviously, Microsoft has a play with Microsoft 365: Copilot in Teams, Copilot in Word. Similarly, Google is doing that on Workspace with Duet AI and other pieces.
There is going to be everything. AWS is catching up; they are now moving at hyper speed to get a lot of their pieces out, with some very interesting things coming with CodeWhisperer and others. It is going to be the same: customers will have to make a choice. Again, go back to the cloud conversation.
First it was one cloud versus on-prem; now it is going to be multi-cloud. The question becomes: in how many places will you fork your data? Then you will start segmenting, saying: OK, I want to build generative AI for ERP applications; my ERP is running, let's say, on SAP RISE on Azure, so I will surround my data planes around that; a lot of my commerce is running on Google. You will start to see those landscape conversations coming in. It is going to be interesting.
Got it. The applicability slide you had was quite interesting. I don't know if I got it right, but I wanted to understand it better.
Some of the areas which were already able to generate a significant amount of efficiency using AI in the past, say BPO, workforce planning, infra, and data, seem to be the ones where you expect a lot more efficiency to be generated because of GenAI. Is that a correct assessment, or do you think it is still early and some of those things could change?
Again, going back to my first slide, I always say applicability is a range, correct? I know it is rather fancy. It is early, and there is a lot of potential in using this technology. Again, there are scenarios, and there are two sides to the story, Surendra.
Many times people get very caught up asking how it will impact what you do. I think you have to start by looking at what the IT services industry has done. I was having a very interesting chat with Prateek and you about this whole concept: go back 20 or 22 years, to the two-digit to four-digit conversion. Then it became offshoring. It became Lean Six Sigma. It became the whole opportunity of RIM, correct? Then this whole digital [inaudible] came in. It is about pivoting. The industry always has to pivot. There are companies which will evolve and pivot, and others which will not be able to, or will take more time.
I think the applicability is this: if you are delivering managed services to customers and your ability to deliver better outcomes to the customer is there, then you figure out the right mechanism for sharing those benefits with the customer. You could do more; your developer productivity could increase if you move into a better model.
It also has to be context-wise. Let's take an example: we are rolling out Copilots, and by the way, Copilot is not new. GitHub Copilot launched in July 2022, and tens of thousands of our developers have been using it. In our software business, we have been using pair programmers for a while in many of the products, correct? Contextually, we can use them with the newer programming languages.
Now, when I use GitHub Copilot, a lot of the code I can pull out of the public GitHub repository, and then I can do it on my own enterprise GitHub, correct, which is my own implementation. The question which has started to come up is that customers have to start building DevOps toolchain environments with this capability, because there is still that question you talked about, correct? Who owns the copyright? Who owns the code if it is generated?
There are a lot of things you have to answer. Even the customers are trying to pick it up and do it in pockets and segments. It is too early for us to take a universal view and say, I am going to do this or that, correct?
In enterprises, customers will adopt and ingest it at a certain speed. That is what I talked about in that slide: it will mirror pretty much the way cloud adoption happened, correct? With big new app dev, you can look at using this to build something quicker, correct? I also think there is a lot of opportunity in legacy modernization.
Trust me, I think there was a statement about six months back that only about 15% of all enterprise workloads have reached the cloud. There is a huge amount of app modernization needed, correct? Could this help accelerate the journey to cloud? The answer is yes, because we are looking at a lot of adoption capability.
If you are building new apps, Surendra, you will start to use this capability, and you will also use a lot of the APIs available for generative AI to build new applications. When you start to infuse this into existing applications, you have to open the code, make sure it is put in, tested, and integrated, and figure out that it works with everything else.
The big question is: I have enterprise apps working on secure enterprise data; if I plug in GenAI, will it work on top of my data? Most GenAI systems do not go through an app, they access data directly; you have to feed them data, correct? This whole area of data security and identity management of data is a big, big thing, correct? There is a lot the IT industry has to work through in the customer's landscape to make sure it is ready for adoption at scale.
Sure. Again, a couple of questions which I am combining here. There is a question on the accuracy of these models and how customers are thinking about it. And there is another on whether the importance of testing goes up with this kind of model. Maybe you could cover both together.
See, there is an old saying, correct? Garbage in, garbage out. This is going to be a bit different: you can input garbage and it will give you intelligent-looking output. It looks very clean, but it could just be more intelligent garbage, right?
Because generative AI is all about how you train it and what data you put in, correct? You are seeing hallucination of models in the public world, correct? I will give you a live example. Before this, I was discussing it with Prateek, our CFO, saying: OK, let's go and ask both engines, the Bard engine and the public ChatGPT engine, to tell me our stock price from yesterday.
Funnily enough, you get two different answers, correct? If you ask the question on a Monday, one engine understands that yesterday was Sunday and knows the stock market was closed. The other engine just picks something up and generates an answer for you which you might think is theoretically near. Now the question is, how do you validate it?
You then go back to one of those market sites, Yahoo Finance or Google Finance or Moneycontrol, look it up and say: OK, does this data match? Now think of doing this in a public instance: if you want to test the veracity and quality of the data, the prompting and prompt tuning are going to be a lot of work. It looks very appealing to start with, but when you want to start making decisions or judgments with these systems, the data engineering discipline becomes extremely important. That is one, and I think there is a lot of opportunity there. The second question you had was... sorry, Surendra?
Importance of testing.
Oh, testing. See, one thing people say is that test case generation is getting automated, and it will help you automate. That is for testing the code. But now consider the testing of the prompt itself, correct? The more data you create, the more you have to test. There is a lot of opportunity; some interesting developments are happening in this space around testing of prompts, automated prompt engineering testing, including some of the work we are doing in our product PromptO.
It is basically there to help that iteration cycle, so that you do not need to be a prompt engineer to do prompting, so that we can help take that load. Because the volume is going to be huge, Surendra. What we realized when we picked just one use case for a customer in a bank is that our prompts ran to thousands and thousands for it to be tuned, because the scenarios, the way you ask a question versus the way I ask a question, are so different, correct? We might get two different responses.
Tuning to a baseline model is extremely important. There is a lot of testing, but a different kind of testing. And again, to the question you asked before, correct: if you do not change the way you do things, you might not be able to address the new opportunity. There is always an evolution here.
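The "thousands of prompts to be tuned" problem KK describes is essentially regression testing for prompts, and can be sketched as a small harness. This is an illustrative shape only, not PromptO's implementation: each case pairs a prompt with a check on the answer, and `fake_model` is a deterministic stand-in for a real LLM.

```python
from typing import Callable, Dict, List, Tuple

# Sketch of prompt regression testing: pair each prompt with a predicate
# on the model's answer, so thousands of prompt variants can be re-run
# after every tuning pass. fake_model is a deterministic LLM stand-in.

def run_suite(model: Callable[[str], str],
              cases: List[Tuple[str, Callable[[str], bool]]]) -> Dict[str, int]:
    results = {"passed": 0, "failed": 0}
    for prompt, check in cases:
        results["passed" if check(model(prompt)) else "failed"] += 1
    return results

def fake_model(prompt: str) -> str:
    """Stand-in model echoing the stock-price example from above."""
    return "market closed" if "sunday" in prompt.lower() else "no data"

CASES = [
    ("What was the stock price on Sunday?", lambda a: "closed" in a),
    ("What was the stock price on Monday?", lambda a: a == "no data"),
]
```

Checks are predicates rather than exact-string matches because, unlike traditional test assertions, two acceptable model answers rarely share the same surface form.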
Sure. Again, there is a question, and you can answer it in the way you want to. On one hand, you mentioned a lot of opportunities coming up because clients are excited, and there are so many use cases and examples, like the ones you gave. So the question is: what kind of efficiencies could the coders or programmers generate? I understand it is early stage, and this is the question as I am reading it out, but you could answer it whichever way you want.
See, I think there is a range, and I gave you a view, correct? There are a lot of research reports out there saying it is going to give you X or Y. In all honesty, GitHub Copilot, which is not the Visual Studio Microsoft Copilot they have launched on Azure DevOps (that is a newer one), is showing significant value for certain parts. But to be honest, it is too early to make a prediction. I do not want to say something which cannot be backed up. It is very early stages now, but it can be used.
What is still yet to be proven is this: if you take millions of lines of code of an existing system, you have to first train the system. Again, this is not a Copilot use case. You also have to first understand the whole functional specification. Take a very complex trading app, correct? It has so many logic rules and scenarios built in. You have to first understand the application: the way it has been written, the interfaces, the logic stored in there. And by the way, not all logic is stored in the code; a lot of our applications have data logic stored in the database. We still have not reached a point where we have a clean data layer separation.
It looks very easy from the outside, generating code, because you are building some app on your desktop, doing some nice website design, a few things. Yes, it can generate code, but we are really talking about enterprise systems, which do not work in isolation. How do you understand the whole system architecture? If you deliver a trading system, it is not one product; you have 20 to 25 systems, correct, which go and process the whole workflow. How do you build the dependency map?
In my opinion, if you pick modules, if you pick products, there are areas where it can give you value. In certain areas, like random test case generation, rather than you thinking it all up, you can write some scenarios and generate something. It can help you do certain things, correct?
It does generate code, but you still have to validate it. The developer cannot just take the code and deploy it, even into a CI/CD environment, without validating and testing. Practically, in my view, you have to give it some more time. We have a lot of examples, but I do not want to put a number out there; let's get it more validated, and we will be able to give a better answer at some point.
Sure. We are already past time, so maybe squeezing in the last one or two questions. Could you talk about the people and training aspects of this, and how things need to change?
A few things, correct? There are two parts to this: technical skills and the whole mindset. People have to start learning how to work in a human-machine partnership. Today you work with people; you have to start thinking, how can I have a team or a squad made up of people plus Copilots, Duets, or other assistive pieces? That is one big thing.
It is about collaborating: understanding that these systems generate responses, but one has to validate and review them. That is one. Second is the skill set needed for data engineering, correct? It is a very, very important skill. Are we building skilling and training? We are in the process of building a lot of capabilities around data engineering. It is very different from data analytics, correct? Data engineering is the whole data management lifecycle, correct?
From ingestion to how we operate the data, correct, and even how we retire data. It is very important what data you want to get out of the system; you do not want to be sitting on data which does not need to be used to train the model, whether it is an ML model, a DNN model, or a GenAI model, correct? That is one skill set.
The third thing is that this whole area of integration and orchestration will become a very important skill. And educating people on responsible AI becomes very important, for developers, data engineers, DevOps engineers, prompt engineers, and functional people or users, correct? How do you make them aware of the responsibilities of AI, both in what you put in and in how you take the response back and use it effectively?
That is going to be very important. There are a lot of soft skills, along with technical skills, needed to be able to use these systems. I think there is a huge opportunity to train, just as we trained people over a period of time. The whole training engine in services now has to skill people on the newer skill sets they need.
Thanks. Thanks a lot, KK. I could go on and on, but in the interest of time, I will bring the session to a close. There are some pending questions; maybe I will reach out to you via Sanjay. Okay, we can do that. Thanks a lot for taking the time out and doing this session.
Obviously, there is a fair bit of confusion among investors, and I think your session was super insightful and helpful in that context. Thanks a lot for your time, and thanks to all the participants today.
Thank you, Surendra, for the opportunity. Look forward to talking to you. Thank you very much. Thanks, everyone. Have a great evening or a day ahead. Thank you.